# Code of conduct

Source: https://docs-3.prefect.io/contribute/code-of-conduct

Learn about the standards we hold ourselves and our community to.

## Our pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone. This is regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our standards

Examples of behavior that contribute to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior. They are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. They may ban, temporarily or permanently, any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies within all project spaces. It also applies when an individual represents the project or its community in public spaces. Examples of representing a project or community include using an official project email address, posting through an official social media account, or acting as an appointed representative at an online or offline event. Project maintainers may further clarify what "representation of a project" means.

## Enforcement

Report instances of abusive, harassing, or otherwise unacceptable behavior by contacting Chris White at [chris@prefect.io](mailto:chris@prefect.io). All complaints are reviewed and investigated. Each complaint will receive a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions, as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant, version 1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html). See the [Contributor Covenant FAQ](https://www.contributor-covenant.org/faq) for more information.

# Contribute to integrations

Source: https://docs-3.prefect.io/contribute/contribute-integrations

Prefect welcomes contributions to existing integrations.
Thinking about making your own integration? Feel free to [create a new discussion](https://github.com/PrefectHQ/prefect/discussions/new?category=ideas) to flesh out your idea with other contributors. ## Contributing to existing integrations All integrations are hosted in the [Prefect GitHub repository](https://github.com/PrefectHQ/prefect) under `src/integrations`. To contribute to an existing integration, please follow these steps: Fork the [Prefect GitHub repository](https://github.com/PrefectHQ/prefect) ```bash git clone https://github.com/your-username/prefect.git ``` ```bash git checkout -b my-new-branch ``` Move to the integration directory and install the dependencies: ```bash cd src/integrations/my-integration uv venv --python 3.12 source .venv/bin/activate uv pip install -e ".[dev]" ``` Make the necessary changes to the integration code. If you're adding new functionality, please add tests. You can run the tests with: ```bash pytest tests ``` ```bash git add . git commit -m "My new integration" git push origin my-new-branch ``` Submit your pull request upstream through the GitHub interface. # Develop on Prefect Source: https://docs-3.prefect.io/contribute/dev-contribute Learn how to set up Prefect for development, experimentation and code contributions. ## Make a code contribution We welcome all forms of contributions to Prefect, whether it's small typo fixes in [our documentation](/contribute/docs-contribute), bug fixes or feature enhancements! If this is your first time making an open source contribution we will be glad to work with you and help you get up to speed. For small changes such as typo fixes you can simply open a pull request - we typically review small changes like these within the day. For larger changes including all bug fixes, we ask that you first open [an issue](https://github.com/PrefectHQ/prefect/issues) or comment on the issue that you are planning to work on. ## Fork the repository All contributions to Prefect need to start on [a fork of the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo). Once you have successfully forked [the Prefect repo](https://github.com/PrefectHQ/prefect), clone a local version to your machine: ```bash git clone https://github.com/GITHUB-USERNAME/prefect.git cd prefect ``` Create a branch with an informative name: ``` git checkout -b fix-for-issue-NUM ``` After committing your changes to this branch, you can then open a pull request from your fork that we will review with you. ## Install Prefect for development Once you have cloned your fork of the repo you can install [an editable version](https://setuptools.pypa.io/en/latest/userguide/development_mode.html) of Prefect for quick iteration. We recommend using `uv` for dependency management when developing. Refer to the [`uv` docs for installation instructions](https://docs.astral.sh/uv/getting-started/installation/). 
To set up a virtual environment and install a development version of `prefect`:

```bash uv
uv sync
```

```bash pip and venv
python -m venv .venv
source .venv/bin/activate
# Installs the package with development dependencies
pip install -e ".[dev]"
```

To verify `prefect` was installed correctly:

```bash uv
uv run prefect --version
```

```bash pip and venv
prefect --version
```

To ensure your changes comply with our linting policies, set up `pre-commit` and `pre-push` hooks to run with every commit:

```bash
uv run pre-commit install
```

To manually run the `pre-commit` hooks against all files:

```bash
uv run pre-commit run --all-files
```

To manually run the pre-push hooks:

```bash
uv run pre-commit run --hook-stage pre-push --all-files
```

If you're using `uv`, you can run commands with the project's dependencies by prefixing the command with `uv run`.

## Write tests

Prefect relies on unit testing to ensure proposed changes don't negatively impact any functionality. For all code changes, including bug fixes, we ask that you write at least one corresponding test. One rule of thumb - especially for bug fixes - is that you should write a test that fails prior to your changes and passes with your changes. This ensures the test will fail and prevent the bug from resurfacing if other changes are made in the future. All tests can be found in the `tests/` directory of the repository.

You can run the test suite with `pytest`:

```bash
# run all tests
pytest tests

# run a specific file
pytest tests/test_flows.py

# run all tests that match a pattern
pytest tests/test_tasks.py -k cache_policy
```

## Working with a development UI

If you plan to use the UI during development, you will need to build a development version of the UI first. Using the Prefect UI in development requires installation of [npm](https://github.com/npm/cli). We recommend using [nvm](https://github.com/nvm-sh/nvm) to manage Node.js versions. Once installed, run `nvm use` from the root of the Prefect repository to initialize the proper version of `npm` and `node`.

Start a development UI that reloads on code changes:

```bash
prefect dev ui
```

This command is most useful if you are working directly on the UI codebase.

Alternatively, you can build a static UI that will be served when running `prefect server start`:

```bash
prefect dev build-ui
```

## Working with a development server

The Prefect CLI provides several helpful commands to aid development of server-side changes.

You can start all services with hot-reloading on code changes (note that this requires installation of UI dependencies):

```bash
prefect dev start
```

Start a Prefect API that reloads on code changes:

```bash
prefect dev api
```

## Add database migrations

If your code changes necessitate modifications to a database table, first update the SQLAlchemy model in `src/prefect/server/database/orm_models.py`.

For example, to add a new column to the `flow_run` table, add a new column to the `FlowRun` model:

```python
# src/prefect/server/database/orm_models.py

class FlowRun(Run):
    """SQLAlchemy model of a flow run."""
    ...

    new_column: Mapped[Union[str, None]] = mapped_column(sa.String, nullable=True)  # <-- add this line
```

Next, generate new migration files. Generate a new migration file for each database type. Migrations are generated for whichever database type `PREFECT_API_DATABASE_CONNECTION_URL` is set to. See [how to set the database connection URL](/v3/api-ref/settings-ref#connection-url) for each database type.
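Before generating a migration, it can help to confirm which database your current settings point at. The following is a minimal sketch using the setting object referenced above; the exact accessor style may vary slightly between Prefect versions:

```python
from prefect.settings import PREFECT_API_DATABASE_CONNECTION_URL

# The connection URL decides which dialect's migration files are generated.
url = PREFECT_API_DATABASE_CONNECTION_URL.value()

if url is None or "sqlite" in url:
    print("Migrations will be generated for SQLite (the default)")
elif "postgresql" in url:
    print("Migrations will be generated for PostgreSQL")
else:
    print(f"Unrecognized connection URL: {url}")
```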
To generate a new migration file, run: ```bash prefect server database revision --autogenerate -m "" ``` Make the migration name brief but descriptive. For example: * `add_flow_run_new_column` * `add_flow_run_new_column_idx` * `rename_flow_run_old_column_to_new_column` The `--autogenerate` flag automatically generates a migration file based on the changes to the models. **Always inspect the output of `--autogenerate`** `--autogenerate` generates a migration file based on the changes to the models. However, it is not perfect. Check the file to ensure it only includes the desired changes. The new migration is in the `src/prefect/server/database/migrations/versions/` directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in `src/prefect/server/database/migrations/versions/sqlite/`. After inspecting the migration file, apply the migration to the database by running: ```bash prefect server database upgrade -y ``` After successfully creating migrations for all database types, update `MIGRATION-NOTES.md` to document the changes. # Contribute to documentation Source: https://docs-3.prefect.io/contribute/docs-contribute Learn how to contribute to the Prefect docs. We use [Mintlify](https://mintlify.com/) to host and build the Prefect documentation. The main branch of the [prefecthq/prefect](https://github.com/PrefectHQ/prefect) GitHub repository is used to build the Prefect 3.0 docs at [docs.prefect.io](https://docs.prefect.io). The 2.x docs are hosted at [docs-2.prefect.io](https://docs-2.prefect.io) and built from the 2.x branch of the repository. ## Fork the repository All contributions to Prefect need to start on [a fork of the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo). Once you have successfully forked [the Prefect repo](https://github.com/PrefectHQ/prefect), clone a local version to your machine: ```bash git clone https://github.com/GITHUB-USERNAME/prefect.git cd prefect ``` Create a branch with an informative name: ``` git checkout -b fix-for-issue-NUM ``` After committing your changes to this branch, you can then open a pull request from your fork that we will review with you. ## Set up your local environment We provide a `justfile` with common commands to simplify development. We recommend using [just](https://just.systems/) to run these commands. **Installing just** To install just: * **macOS**: `brew install just` or `cargo install just` * **Linux**: `cargo install just` or check your package manager * **Windows**: `scoop install just` or `cargo install just` For more installation options, see the [just documentation](https://github.com/casey/just#installation). ### Using just (recommended) 1. Clone this repository. 2. Run `just docs` to start the documentation server. Your docs should now be available at `http://localhost:3000`. ### Manual setup If you prefer not to use just, you can set up manually: 1. Clone this repository. 2. Make sure you have a recent version of Node.js installed. We recommend using [nvm](https://github.com/nvm-sh/nvm) to manage Node.js versions. 3. Run `cd docs` to navigate to the docs directory. 4. Run `nvm use node` to use the correct Node.js version. 5. Run `npm i -g mintlify` to install Mintlify. 6. Run `mintlify dev` to start the development server. Your docs should now be available at `http://localhost:3000`. 
See the [Mintlify documentation](https://mintlify.com/docs/development) for more information on how to install Mintlify, build previews, and use Mintlify's features while writing docs. All documentation is written in `.mdx` files, which are Markdown files that can contain JavaScript and React components. ## Contributing examples Examples are Python files that demonstrate Prefect concepts and patterns and how they work together with other tools to solve real-world problems. They live in the `examples/` directory and are used to automatically generate documentation pages. ### Example structure Each example should be a standalone Python file with: 1. **YAML frontmatter** (in Python comments) at the top with metadata: ```python # --- # title: Your Example Title # description: Brief description of what this example demonstrates # icon: play # Choose from available icons (play, database, globe, etc.) # dependencies: ["prefect", "pandas", "requests"] # Required packages # cmd: ["python", "path/to/your_example.py"] # How to run it # keywords: ["getting_started", "etl", "api"] # Keywords to help with search and filtering # draft: false # Set to true to hide from docs # --- ``` 2. **Explanatory comments** throughout the code which will be used to generate the body of the documentation page. 3. **Runnable code** that works out of the box with the specified dependencies. See the [hello world example](https://github.com/PrefectHQ/prefect/blob/main/examples/hello_world.py) as a guide. ### Adding an example To add an example, follow these steps: 1. Create your Python file in the `examples/` directory 2. Follow the structure above with frontmatter and comments 3. Test that your example runs successfully 4. Run `just generate-examples` to update the documentation pages 5. Review the generated documentation to ensure it renders correctly Once it all looks good, commit your changes and open a pull request. ## Considerations Keep in mind the following when writing documentation. ### External references Prefect resources can be managed in several ways, including through the CLI, UI, Terraform, Helm, and API. When documenting a resource, consider including external references that describe how to manage the resource in other ways. Snippets are available to provide these references in a consistent format. For example, the [Deployment documentation](/v3/deploy) includes a snippet for the Terraform provider: ```javascript import { TF } from "/snippets/resource-management/terraform.mdx" import { deployments } from "/snippets/resource-management/vars.mdx" ``` For more information on how to use snippets, see the [Mintlify documentation](https://mintlify.com/docs/reusable-snippets). # Contribute Source: https://docs-3.prefect.io/contribute/index Join the community, improve Prefect, and share knowledge We welcome all forms of engagement, and love to learn from our users. There are many ways to get involved with the Prefect community: * Join nearly 30,000 engineers in the [Prefect Slack community](https://prefect.io/slack) * [Give Prefect a ⭐️ on GitHub](https://github.com/PrefectHQ/prefect) * Make a contribution to [Prefect's documentation](/contribute/docs-contribute) * Make a code contribution to [Prefect's open source libraries](/contribute/dev-contribute) * Support or create a new [Prefect integration](/contribute/contribute-integrations) ## Report an issue To report a bug, make a feature request, and more, visit our [issues page on GitHub](https://github.com/PrefectHQ/prefect/issues/new/choose). 
## Code of conduct

See our [code of conduct](/contribute/code-of-conduct) for becoming a valued contributor.

# Code and development style guide

Source: https://docs-3.prefect.io/contribute/styles-practices

Generally, we follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html). This document covers Prefect-specific styles and practices.

## Imports

This is a brief collection of rules and guidelines for handling imports in this repository.

### Imports in `__init__` files

Leave `__init__` files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.

#### Exposing objects from submodules

If importing objects from submodules, the `__init__` file should use a relative import. This is [required for type checkers](https://github.com/microsoft/pyright/blob/main/docs/typed-libraries.md#library-interface) to understand the exposed interface.

{/* pmd-metadata: notest */}

```python
# Correct
from .flows import flow
```

```python
# Wrong
from prefect.flows import flow
```

#### Exposing submodules

Generally, submodules should *not* be imported in the `__init__` file. You should only expose submodules when the module is designed to be imported and used as a namespaced object.

For example, we do this for our schema and model modules. This is because it's important to know if you are working with an API schema or database model, both of which may have similar names.

```python
import prefect.server.schemas as schemas

# The full module is accessible now
schemas.core.FlowRun
```

If exposing a submodule, use a relative import like when you're exposing an object.

{/* pmd-metadata: notest */}

```python
# Correct
from . import flows
```

```python
# Wrong
import prefect.flows
```

#### Importing to run side-effects

Another use case for importing submodules is to perform global side-effects that occur when they are imported. Often, global side-effects on import are a dangerous pattern. But there are a couple acceptable use cases for this:

* To register dispatchable types, for example, `prefect.serializers`.
* To extend a CLI app, for example, `prefect.cli`.

### Imports in modules

#### Importing other modules

The `from` syntax is recommended for importing objects from modules. You should not import modules with the `from` syntax.

```python
# Correct
import prefect.server.schemas  # use with the full name
import prefect.server.schemas as schemas  # use the shorter name
```

```python
# Wrong
from prefect.server import schemas
```

You should not use relative imports unless it's in an `__init__.py` file.

{/* pmd-metadata: notest */}

```python
# Correct
from prefect.utilities.foo import bar
```

{/* pmd-metadata: notest */}

```python
# Wrong
from .utilities.foo import bar
```

You should never use imports that are dependent on file location without explicit indication it is relative. This avoids confusion about the source of a module.

{/* pmd-metadata: notest */}

```python
# Correct
from . import test
```

#### Resolving circular dependencies

Sometimes, you must defer an import and perform it *within* a function to avoid a circular dependency:

```python
## This function in `settings.py` requires a method from the global `context` but the context
## uses settings
def from_context():
    from prefect.context import get_profile_context

    ...
```

Avoid circular dependencies. They often reveal entanglement in the design. Place all deferred imports at the top of the function.
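For instance, a deferred import that breaks a cycle should sit on the first line of the function body rather than next to the call that needs it. A minimal sketch of that placement, using module and function names that are purely illustrative and not from the Prefect codebase:

```python
# hypothetical module: orchestration.py
def submit_run(run_id: str) -> None:
    # Deferred import placed at the top of the function body:
    # `reporting` imports from this module, so a module-level import would be circular.
    from myproject.reporting import record_submission

    record_submission(run_id)
```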
If you are just using the imported object for a type signature, use the `TYPE_CHECKING` flag:

```python
# Correct
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from prefect.server.schemas.states import State

def foo(state: "State"):
    pass
```

Usage of the type within the module requires quotes; for example, `"State"`, since it is not available at runtime.

#### Importing optional requirements

We do not have a best practice for this yet. See the `kubernetes`, `docker`, and `distributed` implementations for now.

#### Delaying expensive imports

Sometimes imports are slow, but it's important to keep the `prefect` module import times fast. In these cases, lazily import the slow module by deferring import to the relevant function body. For modules consumed by many functions, use the optional requirements pattern instead.

## Command line interface (CLI) output messages

When executing a command that creates an object, the output message should offer:

* A short description of what the command just did.
* A bullet point list, rehashing user inputs, if possible.
* Next steps, like the next command to run, if applicable.
* Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
* A new line before the first line, and after the last line.

Output Example:

```js
$ prefect work-queue create testing

Created work queue with properties:
    name - 'abcde'
    uuid - 940f9828-c820-4148-9526-ea8107082bda
    tags - None
    deployment_ids - None

Start an agent to pick up flows from the created work queue:
    prefect agent start -q 'abcde'

Inspect the created work queue:
    prefect work-queue inspect 'abcde'
```

Additionally:

* Wrap generated arguments in apostrophes (') to ensure validity by suffixing format placeholders with `!r`.
* Indent example commands, instead of wrapping in backticks (\`).
* Use placeholders if you cannot completely pre-format the example.
* Capitalize placeholder labels and wrap them in less than (\<) and greater than (>) signs.
* Utilize `textwrap.dedent` to remove extraneous spacing for strings with triple quotes (""").

Placeholder Example:

```
Create a work queue with tags:
    prefect work-queue create '<WORK QUEUE NAME>' -t '<OPTIONAL TAG 1>' -t '<OPTIONAL TAG 2>'
```

Dedent Example:

{/* pmd-metadata: notest */}

```python
from textwrap import dedent
...
output_msg = dedent(
    f"""
    Created work queue with properties:
        name - {name!r}
        uuid - {result}
        tags - {tags or None}
        deployment_ids - {deployment_ids or None}

    Start an agent to pick up flows from the created work queue:
        prefect agent start -q {name!r}

    Inspect the created work queue:
        prefect work-queue inspect {name!r}
    """
)
```

## API versioning

### Client and server communication

You can run the Prefect client separately from Prefect server, and communicate entirely through an API. The Prefect client includes anything that runs task or flow code (for example, agents and the Python client), or any consumer of Prefect metadata (for example, the Prefect UI and CLI). Prefect server stores this metadata and serves it through the REST API.

### API version header

Sometimes, we have to make breaking changes to the API. To check a Prefect client's compatibility with the API it's making requests to, every API call the client makes includes a three-component `API_VERSION` header with major, minor, and patch versions.

For example, a request with the `X-PREFECT-API-VERSION=3.2.1` header has a major version of `3`, minor version `2`, and patch version `1`.

Change this version header by modifying the `API_VERSION` constant in `prefect.server.api.server`.
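The compatibility check itself comes down to comparing the three components. The sketch below only illustrates that idea; the helper function and the comparison policy are assumptions for illustration, not Prefect's actual server code:

```python
def parse_version(header: str) -> tuple[int, int, int]:
    """Parse an 'X-PREFECT-API-VERSION' style header such as '3.2.1'."""
    major, minor, patch = (int(part) for part in header.split("."))
    return major, minor, patch


def is_compatible(client_header: str, server_minimum: str) -> bool:
    # Illustrative policy: reject clients older than the server's minimum supported version.
    return parse_version(client_header) >= parse_version(server_minimum)


print(is_compatible("3.2.1", "3.0.0"))  # True
print(is_compatible("2.9.0", "3.0.0"))  # False
```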
### Breaking changes to the API

A breaking change means that your code needs to change to use a new version of Prefect. We avoid breaking changes whenever possible.

When making a breaking change to the API, we consider if the change is *backwards compatible for clients*. This means that the previous version of the client can still make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we aim to bump the patch version.

In almost all other cases, we bump the minor version, which denotes a non-backwards-compatible API change. We have reserved the major version changes to denote a backwards compatible change that is significant in some way, such as a major release milestone.

### Version composition

Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and patch version of 0.

Occasionally, we add a suffix to the version such as `rc`, `a`, or `b`. These indicate pre-release versions that users can opt into for testing and experimentation prior to a generally available release.

Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it reset to zero.

## Prefect's versioning scheme

Prefect increases the major version when significant and widespread changes are made to the core product.

Prefect increases the minor version when:

* Introducing a new concept that changes how to use Prefect
* Changing an existing concept in a way that fundamentally alters its usage
* Removing a deprecated feature

Prefect increases the patch version when:

* Making enhancements to existing features
* Fixing behavior in existing features
* Adding new capabilities to existing concepts
* Updating dependencies

## Deprecation

At times, Prefect will deprecate a feature. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least **3** minor version increases or **6 months**, whichever is longer. We may retain deprecated features longer than this time period.

Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.

## Client compatibility with Prefect

When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server. Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes you cannot use new clients with an old server. The new client may expect the server to support capabilities that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older. For example, you can use a client on 2.1.0 with a server on 2.5.0. You cannot use a client on 2.5.0 with a server on 2.1.0.

## Client compatibility with Cloud

Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please [file a bug report](https://github.com/prefectHQ/prefect/issues/new/choose).

# Integrations

Source: https://docs-3.prefect.io/integrations/integrations

Prefect integrations are PyPI packages you can install to help you integrate your workflows with third parties.
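Each package listed below installs alongside the core `prefect` package (for example, `pip install prefect-aws`) and exposes blocks and tasks you can import in your flows. As a quick illustration, here is a minimal sketch using `prefect-aws`; the bucket name is a placeholder and the example assumes the package is installed and AWS credentials are available in your environment:

```python
from prefect import flow
from prefect_aws import AwsCredentials, S3Bucket  # requires `pip install prefect-aws`


@flow
def upload_report():
    bucket = S3Bucket(
        bucket_name="my-example-bucket",   # placeholder bucket name
        credentials=AwsCredentials(),      # resolves credentials from the environment
    )
    # Write a small object to the bucket from within the flow
    bucket.write_path("reports/latest.txt", b"hello from prefect-aws")


if __name__ == "__main__":
    upload_report()
```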
* `prefect-aws` - Maintained by Prefect
* `prefect-azure` - Maintained by Prefect
* `prefect-bitbucket` - Maintained by Prefect
* `coiled` - Maintained by Coiled
* `prefect-dask` - Maintained by Prefect
* `prefect-databricks` - Maintained by Prefect
* `prefect-dbt` - Maintained by Prefect
* `prefect-docker` - Maintained by Prefect
* `prefect-email` - Maintained by Prefect
* `prefect-fivetran` - Maintained by Fivetran
* `prefect-gcp` - Maintained by Prefect
* `prefect-github` - Maintained by Prefect
* `prefect-gitlab` - Maintained by Prefect
* `prefect-kubernetes` - Maintained by Prefect
* `prefect-ray` - Maintained by Prefect
* `prefect-shell` - Maintained by Prefect
* `prefect-slack` - Maintained by Prefect
* `prefect-snowflake` - Maintained by Prefect
* `prefect-sqlalchemy` - Maintained by Prefect

# Get to know the ECS worker

Source: https://docs-3.prefect.io/integrations/prefect-aws/ecs-worker/index

Deploy production-ready Prefect workers on AWS Elastic Container Service (ECS) for scalable, containerized flow execution. ECS workers provide robust infrastructure management with automatic scaling, high availability, and seamless AWS integration.

## Why use ECS for flow run execution?

ECS (Elastic Container Service) is an excellent choice for executing Prefect flow runs in production environments:

* **Production-ready scalability**: ECS automatically scales your infrastructure based on demand, efficiently managing container distribution across multiple instances
* **Flexible compute options**: Choose between AWS Fargate for serverless execution or Amazon EC2 for faster job start times and additional control
* **Native AWS integration**: Seamlessly connect with AWS services like IAM, CloudWatch, Secrets Manager, and VPC networking
* **Containerized reliability**: Docker container support ensures reproducible deployments and consistent runtime environments
* **Cost optimization**: Pay only for the compute resources you use with automatic scaling and spot instance support

## Architecture Overview

ECS workers operate within your AWS infrastructure, providing secure and scalable flow execution. Prefect enables remote flow execution via workers and work pools - to learn more about these concepts see the [deployment docs](/v3/deploy/infrastructure-concepts/work-pools/).

```mermaid
%%{
  init: {
    'theme': 'neutral',
    'themeVariables': {
      'margin': '10px'
    }
  }
}%%

flowchart TB

  subgraph ecs_cluster[ECS Cluster]
    subgraph ecs_service[ECS Service]
      td_worker[Worker Task Definition] --> |defines| prefect_worker[Prefect Worker]
    end
    prefect_worker -->|kicks off| ecs_task
    fr_task_definition[Flow Run Task Definition]

    subgraph ecs_task[ECS Task Execution]
      flow_run((Flow Run))
    end

    fr_task_definition -->|defines| ecs_task
  end

  subgraph prefect_cloud[Prefect Cloud]
    work_pool[ECS Work Pool]
  end

  subgraph github[ECR]
    flow_code["Flow Code"]
  end

  flow_code --> |pulls| ecs_task
  prefect_worker -->|polls| work_pool
  work_pool -->|configures| fr_task_definition
```

### Key Components

* **ECS Worker**: Long-running service that polls work pools and manages flow run execution. Runs as an ECS Service for auto-recovery in case of failure
* **Task Definitions**: Blueprint for ECS tasks that describes which Docker containers to run and their configuration
* **ECS Cluster**: Provides the underlying compute capacity with auto-scaling capabilities
* **Work Pools**: Typed according to infrastructure - flow runs in `ecs` work pools are executed as ECS tasks
* **Flow Run Tasks**: Ephemeral ECS tasks that execute individual Prefect flows until completion

### How It Works
1. **Continuous Polling**: The ECS worker continuously polls your Prefect server or Prefect Cloud for scheduled flow runs
2. **Task Creation**: When work is available, the worker creates ECS task definitions based on work pool configuration
3. **Flow Execution**: Flow runs are launched as ECS tasks with appropriate resource allocation and configuration
4. **Auto-scaling**: ECS automatically manages container distribution and scaling based on demand
5. **Cleanup**: After flow completion, containers are cleaned up while the worker continues polling

**ECS tasks ≠ Prefect tasks**

An ECS task is **not** the same as a [Prefect task](/v3/develop/write-tasks). ECS tasks are groupings of containers that run within an ECS Cluster, defined by task definitions. They're ideal for ephemeral processes like Prefect flow runs.

## Deployment options

### With the `prefect-aws` CLI

The fastest way to deploy production-ready ECS workers is by using the `prefect-aws` CLI:

```bash
prefect-aws ecs-worker deploy-service \
  --work-pool-name my-ecs-pool \
  --stack-name prefect-ecs-worker \
  --existing-cluster-identifier my-ecs-cluster \
  --existing-vpc-id vpc-12345678 \
  --existing-subnet-ids subnet-12345,subnet-67890 \
  --prefect-api-url https://api.prefect.cloud/api/accounts/.../workspaces/... \
  --prefect-api-key your-api-key
```

This command creates a CloudFormation stack that provisions all the infrastructure required for a production-ready ECS worker service.

**Key benefits:**

* **One-command deployment**: Provisions complete infrastructure with a single command
* **CloudFormation managed**: Infrastructure as code with rollback capabilities
* **Auto-scaling configured**: Built-in scaling policies for production workloads
* **Monitoring included**: CloudWatch logs and alarms pre-configured
* **Production defaults**: Secure, optimized settings out of the box

**Additional CLI commands:**

* `prefect-aws ecs-worker list` - View all deployed stacks
* `prefect-aws ecs-worker status <stack-name>` - Check deployment status
* `prefect-aws ecs-worker delete <stack-name>` - Clean up infrastructure
* `prefect-aws ecs-worker export-template` - Export CloudFormation templates for customization

For detailed CLI options run `prefect-aws ecs-worker deploy-service --help`.

### Manual deployment

For users who want full control over their ECS infrastructure setup:

**[Deploy manually →](deploy_manually)**

Step-by-step guide for creating ECS clusters, task definitions, and configuring workers from scratch.

## Prerequisites

Before deploying ECS workers, ensure you have:

* **AWS Account**: Active AWS account with appropriate permissions
* **IAM Permissions**: Rights to create ECS clusters, task definitions, and IAM roles
* **Docker Knowledge**: Basic understanding of containerization concepts
* **Prefect Setup**: Active Prefect server or Prefect Cloud workspace

## Getting started

1. **Choose your deployment method**: Manual setup provides maximum flexibility, while infrastructure as code offers reproducible deployments
2. **Configure AWS credentials**: Set up IAM roles and permissions for secure AWS service access
3. **Create work pools**: Define work pool configurations that match your ECS infrastructure
4. **Deploy workers**: Launch ECS workers that will poll for and execute flow runs
5. **Monitor and scale**: Use CloudWatch and ECS metrics to optimize performance

## Next steps

* **[Manual Deployment Guide](deploy_manually)** - Complete walkthrough for setting up ECS workers step-by-step
* **[Work Pool Configuration](/v3/deploy/infrastructure-concepts/work-pools/)** - Learn about Prefect work pools and worker concepts
* **[AWS ECS Documentation](https://docs.aws.amazon.com/ecs/)** - Official AWS documentation for ECS services
* **[Prefect Cloud Push Work Pools](/v3/how-to-guides/deployment_infra/serverless)** - Serverless alternative to self-managed workers

# How to manually deploy an ECS worker to an ECS cluster

Source: https://docs-3.prefect.io/integrations/prefect-aws/ecs-worker/manual-deployment

Step-by-step guide for manually setting up ECS infrastructure to run Prefect workers with full control over cluster configuration, IAM roles, and task definitions.

This guide is valid for users of self-hosted Prefect server or Prefect Cloud users with a tier that allows hybrid work pools.

This guide walks you through manually setting up ECS infrastructure to run Prefect workers. For architecture concepts and overview, see the [ECS Worker overview](/integrations/prefect-aws/ecs-worker).

## Prerequisites

You will need the following to successfully complete this guide:

* A Prefect server. You will need either:
  * [Prefect Cloud](https://app.prefect.cloud) account on Starter tier or above
  * [Prefect self-managed instance](/v3/concepts/server)
* An AWS account with permissions to create:
  * IAM roles
  * IAM policies
  * Secrets in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) or [Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html)
  * ECS task definitions
  * ECS services
* The AWS CLI installed on your local machine. You can [download it from the AWS website](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
* An existing [ECS Cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
* An existing [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) - this guide assumes the use of the default VPC.

You can create an ECS cluster using the AWS CLI or the AWS Management Console. To create an ECS cluster using the AWS CLI, run the following command:

```bash wrap
aws ecs create-cluster --cluster-name my-ecs-cluster
```

No further configuration is required for this guide, as we will use the Fargate launch type and the default VPC. For production deployments, it is recommended that you create your own VPC with appropriate security policies based on your organization's recommendations. If you want to create a new VPC for this guide, follow the [VPC creation guide](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html).

## Create the Prefect ECS work pool

First, create an ECS [work pool](/v3/deploy/infrastructure-concepts/work-pools/) for your deployments to use. You can do this either from the CLI or the Prefect UI.

If doing so from the CLI, be sure to [authenticate with Prefect Cloud](/v3/how-to-guides/cloud/connect-to-cloud) or run a local Prefect server instance.

Run the following command to create a new ECS work pool named `my-ecs-pool`:

```bash
prefect work-pool create --type ecs my-ecs-pool
```

1. Navigate to the **Work Pools** page in the Prefect UI.
2. Click the `+` button to the right of the **Work Pool** page header.
3. Select **AWS Elastic Container Service**.
In Prefect Cloud, this will be under the **Hybrid** section.

Because this guide uses Fargate as the capacity provider, this step requires no further action.

## Create a Secret for the Prefect API key

If you are using a Prefect self-hosted server and have authentication disabled, you can skip this step.

The Prefect worker needs to authenticate with your Prefect server to poll the work pool for flow runs. For authentication, you must provide a Bearer token (`PREFECT_API_KEY`) or Basic Auth string (`PREFECT_API_AUTH_STRING`) to the Prefect API.

As a security best practice, we recommend you store your Prefect API key in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) or [Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html).

You can find your Prefect API key several ways:

If you are on a paid plan you can create a [service account](/v3/how-to-guides/cloud/manage-users/service-accounts) for the worker. If you are on a free plan, you can use a user's API key.

To find your API key, use the Prefect CLI:

```bash wrap
# If not already authenticated, log in first
prefect cloud login
prefect config view --show-secrets
```

There is no concept of a `PREFECT_API_KEY` in a self-hosted Prefect server. Instead, you use the `PREFECT_API_AUTH_STRING` containing your basic auth credentials (if your server uses [basic authentication](/v3/advanced/security-settings#basic-authentication)). You can find this information on the Settings page for your Prefect server.

Choose between AWS Secrets Manager or Systems Manager Parameter Store to store your Prefect API key. Both services allow you to securely store and manage sensitive information such as API keys, passwords, and other secrets.

To create a Secret in AWS Secrets Manager, use the [`aws secretsmanager create-secret`](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/create-secret.html) command:

```bash wrap
aws secretsmanager create-secret --name PrefectECSWorkerAPIKey --secret-string '<your-prefect-api-key>'
```

Make a note of the Amazon Resource Name (ARN) of the secret that is returned in the command output. You will need it later when configuring the ECS worker task definition.

To create a SecureString parameter in AWS Systems Manager Parameter Store, use the [`aws ssm put-parameter`](https://docs.aws.amazon.com/cli/latest/reference/ssm/put-parameter.html) command:

```bash wrap
aws ssm put-parameter --name "/prefect/my-ecs-pool/api/key" --value "<your-prefect-api-key>" --type "SecureString"
```

You may customize the parameter hierarchy and name to suit your needs. In this example we've used `/prefect/my-ecs-pool/api/key`, but any parameter name works. Your ECS task execution role will need to be able to read this value.

Make a note of the name you specified for the parameter, as you will need it later when configuring the ECS worker.

## Create the AWS IAM resources

We will create two [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-custom.html#roles-creatingrole-custom-trust-policy-console):

1. `ecsTaskExecutionRole`: This role will be used by ECS to start ECS tasks.
2. `ecsTaskRole`: This role will contain the permissions required by the Prefect ECS worker in order to run your flows as ECS tasks.
The role permissions are based on the principle of [least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started-reduce-permissions.html), meaning that each role will only have the permissions it needs to perform its job.

### Create a trust policy

The trust policy will allow the ECS service containing the Prefect worker to assume the role required for calling other AWS services. This is called a [service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create-service-linked-role.html). The trust policy is a JSON document that specifies which AWS service can assume the role.

Save this policy to a file, such as `trust-policy.json`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Alternately, you can download this file using the following command:

```bash curl wrap
curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/trust-policy.json
```

```bash wget wrap
wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/trust-policy.json
```

### Create the IAM roles

Now, we will create the IAM roles that will be used by the ECS worker.

#### Create the ECS task execution role

The ECS task execution role will be used to start the ECS worker task. We will assign it a minimal set of permissions to allow the worker to pull images from ECR and publish logs to CloudWatch.

Create the role using the [`aws iam create-role`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-role.html) command:

```bash wrap
aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://trust-policy.json
```

Make a note of the ARN (Amazon Resource Name) of the role that is returned in the command output. You will need it later when creating the ECS task definition.

The following is a minimal policy that grants the necessary permissions for ECS to obtain the current value of the secret and inject it into the ECS task. Save this policy to a file, such as `secret-policy.json`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:secretsmanager:<region>:<aws-account-id>:secret:PrefectECSWorkerAPIKey"
    }
  ]
}
```

Alternately, you can download this file using the following command:

```bash curl wrap
curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/secrets-manager/secret-policy.json
```

```bash wget wrap
wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/secrets-manager/secret-policy.json
```

The following is a minimal policy that grants the necessary permissions for ECS to obtain the current value of the parameter and inject it into the ECS task.
Save this policy to a file, such as `secret-policy.json`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ssm:GetParameters"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ssm:<region>:<aws-account-id>:parameter/prefect/my-ecs-pool/api/key"
    }
  ]
}
```

Alternately, you can download this file using the following command:

```bash curl wrap
curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/ssm-parameter-store/secret-policy.json
```

```bash wget wrap
wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/ssm-parameter-store/secret-policy.json
```

If your secret is encrypted with a customer-managed key (CMK) in AWS Key Management Service (KMS), you will also need to add the `kms:Decrypt` permission to the policy. For example:

```json focus={11-17}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:secretsmanager:<region>:<aws-account-id>:secret:PrefectECSWorkerAPIKey"
    },
    {
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kms:<region>:<aws-account-id>:key/<key-id>"
    }
  ]
}
```

Create a new IAM policy named `ecsTaskExecutionPolicy` using the policy document you just created.

```bash wrap
aws iam create-policy --policy-name ecsTaskExecutionPolicy --policy-document file://secret-policy.json
```

The `AmazonECSTaskExecutionRolePolicy` managed policy grants the minimum permissions necessary for starting ECS tasks. [See here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) for other common execution role permissions. Attach this policy to your task execution role using the [`aws iam attach-role-policy`](https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html) command:

```bash wrap
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```

Attach the custom policy you created in the previous step so that the ECS task can access the Prefect API key stored in AWS Secrets Manager or Systems Manager Parameter Store:

```bash wrap
aws iam put-role-policy --role-name ecsTaskExecutionRole --policy-name PrefectECSWorkerSecretPolicy --policy-document file://secret-policy.json
```

#### Create the worker ECS task role

The worker ECS task role will be used by the Prefect worker to interact with the AWS API to run flows as ECS containers. This role will require the ability to describe, register, and deregister ECS task definitions, as well as the ability to start and stop ECS tasks.

Use the following command to create the role. The same trust policy is also used for this role.

```bash wrap
aws iam create-role --role-name ecsTaskRole --assume-role-policy-document file://trust-policy.json
```

The following is a minimal policy that grants the necessary permissions for the Prefect ECS worker to run your flows as ECS tasks.
Save this policy to a file, such as `worker-policy.json`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ecs:DeregisterTaskDefinition",
        "ecs:DescribeTaskDefinition",
        "ecs:DescribeTasks",
        "ecs:RegisterTaskDefinition",
        "ecs:RunTask",
        "ecs:StopTask",
        "ecs:TagResource",
        "iam:PassRole",
        "logs:GetLogEvents",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

Alternately, you can download this file using the following command:

```bash curl wrap
curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/worker-policy.json
```

```bash wget wrap
wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/worker-policy.json
```

Create a new IAM policy named `ecsTaskPolicy` using the policy document you just created.

```bash wrap
aws iam create-policy --policy-name ecsTaskPolicy --policy-document file://worker-policy.json
```

Attach the custom `ecsTaskPolicy` to the `ecsTaskRole` so that the Prefect worker can dispatch flows to ECS:

```bash wrap
aws iam attach-role-policy --role-name ecsTaskRole --policy-arn arn:aws:iam::<aws-account-id>:policy/ecsTaskPolicy
```

Replace `<aws-account-id>` with your AWS account ID.

#### Create an ECS task role for Prefect flows

This step is optional, but recommended if your flows require access to other AWS services.

Depending on the requirements of your flows, it is advised to create a [separate role for your ECS tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html). This role will contain the permissions required by the ECS tasks in which your flows will run. For example, if your workflow loads data into an S3 bucket, you would need a role with additional permissions to access S3.

Use the following command to create the role:

```bash wrap
aws iam create-role --role-name PrefectECSRunnerTaskRole --assume-role-policy-document file://trust-policy.json
```

The following is an example policy that allows reading/writing to an S3 bucket named `prefect-demo-bucket`. Save this policy to a file, such as `runner-task-policy.json`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::prefect-demo-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::prefect-demo-bucket/*"
    }
  ]
}
```

Create a new IAM policy named `PrefectECSRunnerTaskPolicy` using the policy document you just created:

```bash wrap
aws iam create-policy --policy-name PrefectECSRunnerTaskPolicy --policy-document file://runner-task-policy.json
```

Attach the new `PrefectECSRunnerTaskPolicy` IAM policy to the `PrefectECSRunnerTaskRole` IAM role:

```bash wrap
aws iam attach-role-policy --role-name PrefectECSRunnerTaskRole --policy-arn arn:aws:iam::<aws-account-id>:policy/PrefectECSRunnerTaskPolicy
```

Replace `<aws-account-id>` with your AWS account ID.

Finally, add the ARN of the `PrefectECSRunnerTaskRole` to your ECS work pool. This can be configured two ways:

1. Globally for all flows in the work pool by setting the **Task Role ARN (Optional)** field in the work pool configuration.
2. On a per-deployment basis by specifying the `task_role_arn` job variable in the deployment configuration (see the sketch below).
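For the per-deployment option, the job variable can be passed from Python when creating the deployment. This is a minimal sketch, not a complete deployment recipe: the flow, image reference, and role ARN are placeholders, and it assumes your flow code is already baked into the referenced image:

```python
from prefect import flow


@flow
def my_flow():
    ...


if __name__ == "__main__":
    my_flow.deploy(
        name="my-ecs-deployment",
        work_pool_name="my-ecs-pool",
        image="<your-ecr-repo>:latest",   # placeholder: image containing your flow code
        build=False,                      # assumes the image already exists
        push=False,
        job_variables={
            # Overrides the work pool's Task Role ARN for this deployment only
            "task_role_arn": "arn:aws:iam::<aws-account-id>:role/PrefectECSRunnerTaskRole",
        },
    )
```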
## Configure event monitoring infrastructure

To enable the ECS worker to monitor and update the status of flow runs, we need to set up SQS queues and EventBridge rules that capture ECS task state changes. This infrastructure allows the worker to:

* Track when ECS tasks (flow runs) start, stop, or fail
* Update flow run states in real-time based on ECS task events
* Provide better observability and status reporting for your workflows

This step sets up the same event monitoring infrastructure that the `prefect-aws ecs-worker deploy-events` command creates automatically. The worker will use the environment variable `PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME` to discover and read from the events queue.

Create an SQS queue to receive ECS task state change events and a dead-letter queue for handling failed messages.

First, create the dead-letter queue:

```bash
aws sqs create-queue --queue-name my-ecs-pool-events-dlq --attributes MessageRetentionPeriod=1209600,VisibilityTimeout=60
```

Get the ARN of the dead-letter queue:

```bash
aws sqs get-queue-attributes --queue-url $(aws sqs get-queue-url --queue-name my-ecs-pool-events-dlq --query 'QueueUrl' --output text) --attribute-names QueueArn --query 'Attributes.QueueArn' --output text
```

Now create the main queue with the dead-letter queue configured:

```bash
aws sqs create-queue \
  --queue-name my-ecs-pool-events \
  --attributes '{
    "MessageRetentionPeriod": "604800",
    "VisibilityTimeout": "300",
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"<dead-letter-queue-arn>\",\"maxReceiveCount\":3}"
  }'
```

Replace `<dead-letter-queue-arn>` with the ARN of the dead-letter queue from the previous step, and `my-ecs-pool` with your work pool name. The queue name should follow the pattern `{work-pool-name}-events` for consistency with the automated deployment.

Allow EventBridge to send messages to your SQS queue by updating the queue policy:

```bash
aws sqs set-queue-attributes \
  --queue-url $(aws sqs get-queue-url --queue-name my-ecs-pool-events --query 'QueueUrl' --output text) \
  --attributes '{"Policy":"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":[\"sqs:SendMessage\",\"sqs:GetQueueAttributes\",\"sqs:GetQueueUrl\"],\"Resource\":\"<queue-arn>\"}]}"}'
```

Replace `<queue-arn>` with the ARN of the queue created in the previous step.

Create an EventBridge rule to capture ECS task state changes and send them to the SQS queue:

```bash wrap
aws events put-rule \
  --name my-ecs-pool-task-state-changes \
  --event-pattern '{
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
      "clusterArn": ["arn:aws:ecs:<region>:<aws-account-id>:cluster/<cluster-name>"]
    }
  }' \
  --description "Capture ECS task state changes for Prefect worker" \
  --state ENABLED
```

Replace:

* `<region>` with your AWS region
* `<aws-account-id>` with your AWS account ID
* `<cluster-name>` with your ECS cluster name
* `my-ecs-pool` with your work pool name

You can find your cluster ARN using:

```bash wrap
aws ecs describe-clusters --clusters <cluster-name> --query 'clusters[0].clusterArn' --output text
```

Get the queue ARN and add it as a target for the EventBridge rule:

```bash
aws events put-targets \
  --rule my-ecs-pool-task-state-changes \
  --targets "Id=1,Arn=<queue-arn>"
```

Replace `<queue-arn>` with the ARN of the queue created in step 1.
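If you prefer to provision this from Python instead of the AWS CLI, the same steps can be expressed with `boto3`. The following is a hedged sketch of the equivalent calls; the queue, rule, and cluster identifiers mirror the examples above and should be adjusted to your environment:

```python
import json

import boto3

sqs = boto3.client("sqs")
events = boto3.client("events")

# Dead-letter queue and main events queue (mirrors the CLI steps above)
dlq_url = sqs.create_queue(
    QueueName="my-ecs-pool-events-dlq",
    Attributes={"MessageRetentionPeriod": "1209600", "VisibilityTimeout": "60"},
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(QueueUrl=dlq_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

queue_url = sqs.create_queue(
    QueueName="my-ecs-pool-events",
    Attributes={
        "MessageRetentionPeriod": "604800",
        "VisibilityTimeout": "300",
        "RedrivePolicy": json.dumps({"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 3}),
    },
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(QueueUrl=queue_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

# EventBridge rule that forwards ECS task state changes to the queue.
# Remember to also apply the queue policy allowing events.amazonaws.com, as shown above.
cluster_arn = "arn:aws:ecs:<region>:<aws-account-id>:cluster/<cluster-name>"  # placeholder
events.put_rule(
    Name="my-ecs-pool-task-state-changes",
    EventPattern=json.dumps({
        "source": ["aws.ecs"],
        "detail-type": ["ECS Task State Change"],
        "detail": {"clusterArn": [cluster_arn]},
    }),
    Description="Capture ECS task state changes for Prefect worker",
    State="ENABLED",
)
events.put_targets(
    Rule="my-ecs-pool-task-state-changes",
    Targets=[{"Id": "1", "Arn": queue_arn}],
)
```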
Add SQS permissions to the worker task role created earlier:

Create a file named `sqs-policy.json`:

```json wrap
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:ChangeMessageVisibility"
      ],
      "Resource": "arn:aws:sqs:<region>:<aws-account-id>:my-ecs-pool-events"
    }
  ]
}
```

Replace `<region>`, `<aws-account-id>`, and `my-ecs-pool-events` with your values.

Apply the policy to the worker task role:

```bash wrap
aws iam put-role-policy \
  --role-name ecsTaskRole \
  --policy-name EcsWorkerSqsPolicy \
  --policy-document file://sqs-policy.json
```

## Creating the ECS worker service

Now that all the AWS IAM roles and event monitoring infrastructure have been created, we can deploy the Prefect worker to the ECS cluster. This task definition will be used to run the Prefect worker in an ECS task.

Ensure you replace the placeholders for:

* the `executionRoleArn` value with the ARN of the `ecsTaskExecutionRole` you created in Step 2. You can find the ARN of the `ecsTaskExecutionRole` using the following command:

  ```bash wrap
  aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.Arn' --output text
  ```

* the `taskRoleArn` value with the ARN of the `ecsTaskRole` you created in Step 2. You can find the ARN of the `ecsTaskRole` using the following command:

  ```bash wrap
  aws iam get-role --role-name ecsTaskRole --query 'Role.Arn' --output text
  ```

* the `PREFECT_API_URL` value with the URL of your Prefect server. You can find your Prefect API URL several ways:

  If you have the Prefect CLI installed, you can run the following command to view your current Prefect profile's API URL:

  ```bash
  prefect config view
  ```

  To manually construct the Prefect Cloud API URL, use the following format:

  ```text wrap
  https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-id>
  ```

* the `valueFrom` entry for `PREFECT_API_KEY` with the ARN of the resource from Secrets Manager or Systems Manager Parameter Store.

* `my-ecs-pool-events` in the `PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME` environment variable with your actual queue name from the event monitoring setup.

Your secret ARN is based on the service you are using:

You can find the ARN of your secret using the following command:

```bash wrap
aws secretsmanager describe-secret --secret-id PrefectECSWorkerAPIKey --query 'ARN' --output text
```

You can find the ARN of your parameter using the following command:

```bash wrap
aws ssm get-parameter --name "/prefect/my-ecs-pool/api/key" --query 'Parameter.ARN' --output text
```

As `PREFECT_API_KEY` is not used with a self-hosted Prefect server, you will need to replace the `PREFECT_API_KEY` environment variable in the task definition secrets with `PREFECT_API_AUTH_STRING`.
```json focus={28-35} { "family": "prefect-worker-task", "networkMode": "awsvpc", "requiresCompatibilities": [ "FARGATE" ], "cpu": "512", "memory": "1024", "executionRoleArn": "", "taskRoleArn": "", "containerDefinitions": [ { "name": "prefect-worker", "image": "prefecthq/prefect:3-latest", "cpu": 512, "memory": 1024, "essential": true, "command": [ "/bin/sh", "-c", "pip install prefect-aws && prefect worker start --pool my-ecs-pool --type ecs" ], "environment": [ { "name": "PREFECT_API_URL", "value": "" }, { "name": "PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME", "value": "my-ecs-pool-events" } ], "secrets": [ { "name": "PREFECT_API_KEY", // [!code --] "name": "PREFECT_API_AUTH_STRING", // [!code ++] "value": "" } ] } ] } ``` Save the following JSON to a file named `task-definition.json`: ```json wrap { "family": "prefect-worker-task", "networkMode": "awsvpc", "requiresCompatibilities": [ "FARGATE" ], "cpu": "512", "memory": "1024", "executionRoleArn": "", "taskRoleArn": "", "containerDefinitions": [ { "name": "prefect-worker", "image": "prefecthq/prefect:3-latest", "cpu": 512, "memory": 1024, "essential": true, "command": [ "/bin/sh", "-c", "pip install prefect-aws && prefect worker start --pool my-ecs-pool --type ecs" ], "environment": [ { "name": "PREFECT_API_URL", "value": "" }, { "name": "PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME", "value": "my-ecs-pool-events" } ], "secrets": [ { "name": "PREFECT_API_KEY", "valueFrom": "" } ] } ] } ``` Alternately, you can download this file using the following command: ```bash curl wrap curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/task-definition.json ``` ```bash wget wrap wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/task-definition.json ``` Notice that the CPU and Memory allocations are relatively small. The worker's main responsibility is to submit work through API calls to AWS, *not* to execute your Prefect flow code. To avoid hardcoding your API key into the task definition JSON see [how to add sensitive data using AWS secrets manager to the container definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-tutorial.html#specifying-sensitive-data-tutorial-create-taskdef). Before creating a service, you first need to register a task definition. You can do that using the [`register-task-definition` command](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html): ```bash wrap aws ecs register-task-definition --cli-input-json file://task-definition.json ``` Replace `task-definition.json` with the name of your task definition file. Finally, create a service that will manage your Prefect worker. Ensure you replace the placeholders for: * `` with the name of your ECS cluster. * `` with the ARN of the task definition you just registered. * `` with a comma-separated list of your VPC subnet IDs. * Replace `` with a comma-separated list of your VPC security group IDs. If you are using the default VPC, you will need to gather some information about it to use in the next steps. We will use the default VPC for this guide. To find the default VPC ID, run the following command: ```bash wrap aws ec2 describe-vpcs --filters "Name=isDefault,Values=true" --query "Vpcs[0].VpcId" --output text ``` This will output the VPC ID (e.g. `vpc-abcdef01`) of the default VPC, which you can use in the next steps in this section. 
To find the subnets associated with the default VPC:

```bash wrap
aws ec2 describe-subnets --filters "Name=vpc-id,Values=<vpc-id>" --query "Subnets[*].SubnetId" --output text
```

Which will output a list of available subnets (e.g. `subnet-12345678 subnet-23456789`).

Finally, we will need the security group ID for the default VPC:

```bash wrap
aws ec2 describe-security-groups --filters "Name=vpc-id,Values=<vpc-id>" "Name=group-name,Values=default" --query "SecurityGroups[*].GroupId" --output text
```

This will output the security group ID (e.g. `sg-12345678`) of the default security group.

Replace `<vpc-id>` in both commands with the VPC ID from the previous step. Copy the subnet IDs and security group ID for use when creating the service below.

Use the [`aws ecs create-service`](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) command to create an ECS service running on Fargate for the Prefect worker:

```bash wrap
aws ecs create-service --service-name prefect-worker-service --cluster <cluster-name> --task-definition <task-definition-arn> --launch-type FARGATE --desired-count 1 --network-configuration "awsvpcConfiguration={subnets=[<subnet-ids>],securityGroups=[<security-group-ids>],assignPublicIp='ENABLED'}"
```

The work pool page in the Prefect UI allows you to check the health of your workers - make sure your new worker is live! It may take a few minutes for the worker to come online after creating the service. Refer to the [troubleshooting](#troubleshooting) section for further assistance if the worker isn't online.

## Configure work pool defaults

Now that your infrastructure is deployed, you should update your ECS work pool configuration with the resource identifiers so they don't need to be specified on every deployment.

Navigate to your work pool in the Prefect UI and update the following fields in the **Infrastructure** tab:

* **Cluster ARN**: Set to your ECS cluster ARN (e.g., `arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster`)
* **VPC ID**: Set to your VPC ID (e.g., `vpc-12345678`)
* **Subnets**: Add your subnet IDs (e.g., `subnet-12345678,subnet-87654321`)
* **Execution Role ARN**: Set to the task execution role ARN (e.g., `arn:aws:iam::123456789012:role/ecsTaskExecutionRole`)

These settings will be used as defaults for all deployments using this work pool, but can be overridden per deployment if needed.
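If you prefer to set these defaults from the command line, the `prefect work-pool` CLI can apply a base job template from a JSON file. Below is a minimal sketch, assuming the `my-ecs-pool` work pool from this guide and that `prefect-aws` is installed locally so the `ecs` worker type is available. Note that this replaces the pool's entire base job template, so edit the exported file to add your defaults before applying it:

```bash wrap
# Export the default ECS base job template to a file, edit its defaults
# (cluster, vpc_id, subnets, execution role ARN), then apply it to the pool
prefect work-pool get-default-base-job-template --type ecs > ecs-template.json
prefect work-pool update my-ecs-pool --base-job-template ecs-template.json
```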
You can also update the work pool configuration programmatically using the Prefect API: ```python from prefect.client.schemas.objects import WorkPoolUpdate from prefect import get_client async def update_work_pool(): async with get_client() as client: work_pool = await client.read_work_pool("my-ecs-pool") # Update base job template variables base_template = work_pool.base_job_template variables = base_template.get("variables", {}) properties = variables.get("properties", {}) # Update infrastructure defaults properties["cluster"] = { "default": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster" } properties["vpc_id"] = { "default": "vpc-12345678" } properties["execution_role_arn"] = { "default": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole" } # Update network configuration network_config = properties.setdefault("network_configuration", {}) network_props = network_config.setdefault("properties", {}) awsvpc_config = network_props.setdefault("awsvpcConfiguration", {}) awsvpc_props = awsvpc_config.setdefault("properties", {}) awsvpc_props["subnets"] = { "default": ["subnet-12345678", "subnet-87654321"] } # Update work pool variables["properties"] = properties base_template["variables"] = variables await client.update_work_pool( "my-ecs-pool", WorkPoolUpdate(base_job_template=base_template) ) # Run the update import asyncio asyncio.run(update_work_pool()) ``` Replace the ARNs and IDs with your actual resource identifiers. ## Deploy a flow run to your ECS work pool This guide uses the [AWS Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/) to store a Docker image containing your flow code. To do this, we will write a flow, then deploy it using build and push steps that copy flow code into a Docker image and push that image to an ECR repository. ```python my_flow.py lines icon="python" from prefect import flow from prefect.logging import get_run_logger @flow def my_flow(): logger = get_run_logger() logger.info("Hello from ECS!!") if __name__ == "__main__": my_flow() ``` Use the [`aws ecr create-repository`](https://docs.aws.amazon.com/cli/latest/reference/ecr/create-repository.html) command to create an ECR repository. The name you choose for your repository will be reused in the next step when defining your Prefect deployment. ```bash wrap aws ecr create-repository --repository-name ``` To have Prefect build your image when deploying your flow create a `prefect.yaml` file with the following specification: ```yaml prefect.yaml lines name: ecs-worker-guide pull: - prefect.deployments.steps.set_working_directory: directory: /opt/prefect/ecs-worker-guide # build section allows you to manage and build docker images build: - prefect_docker.deployments.steps.build_docker_image: id: build_image requires: prefect-docker>=0.3.1 image_name: tag: latest dockerfile: auto # push section allows you to manage if and how this project is uploaded to remote locations push: - prefect_docker.deployments.steps.push_docker_image: requires: prefect-docker>=0.3.1 image_name: '{{ build_image.image_name }}' tag: '{{ build_image.tag }}' # the deployments section allows you to provide configuration for deploying flows deployments: - name: my_ecs_deployment version: tags: [] description: entrypoint: flow.py:my_flow parameters: {} work_pool: name: my-ecs-pool work_queue_name: job_variables: image: '{{ build_image.image }}' schedules: [] ``` [Deploy](https://docs.prefect.io/deploy/serve-flows/#create-a-deployment) the flow to the Prefect Cloud or your self-managed server instance. 
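The build and push steps in `prefect.yaml` run against your local Docker daemon, so it must be authenticated to your ECR registry before you deploy. Below is a minimal sketch of the standard ECR login, where `<REGION>` and `<AWS-ACCOUNT-ID>` are placeholders for your AWS region and account ID:

```bash wrap
aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <AWS-ACCOUNT-ID>.dkr.ecr.<REGION>.amazonaws.com
```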
```bash
prefect deploy my_flow.py:my_ecs_deployment
```

Find the deployment in the UI and click the **Quick Run** button!

## Troubleshooting

If your worker does not appear in the Prefect UI, check the following:

* Ensure that the ECS service is running and that the task definition is registered correctly.
* Check the ECS service logs in CloudWatch to see if there are any errors.
* Verify that the IAM roles have the correct permissions.
* Ensure that the `PREFECT_API_URL` and `PREFECT_API_KEY` environment variables are set correctly in the task definition.
* For self-hosted Prefect servers, ensure that you replaced `PREFECT_API_KEY` from the example with `PREFECT_API_AUTH_STRING` in the task definition.
* Ensure your Prefect ECS worker has network connectivity to the Prefect API. If you are using a private VPC, ensure that there is a NAT gateway or internet gateway configured to allow outbound traffic to the Prefect API.

### Event monitoring issues

If flow runs are not updating their status properly, check the event monitoring setup:

* Verify the SQS queue was created and is receiving messages from EventBridge
* Check that the EventBridge rule is active and properly configured for your ECS cluster
* Ensure the worker task role has the necessary SQS permissions (`sqs:ReceiveMessage`, `sqs:DeleteMessage`, etc.)
* Verify the `PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME` environment variable is set correctly in the worker task definition
* Check CloudWatch logs for any SQS-related errors in the worker logs

## Next steps

Now that you are confident your ECS worker is healthy, you can experiment with different work pool configurations.

* Do your flow runs require higher `CPU`?
* Would an EC2 `Launch Type` speed up your flow run execution?

These infrastructure configuration values can be set on your ECS work pool or they can be overridden on the deployment level through [job\_variables](/v3/deploy/infrastructure-concepts/customize/) if desired.

# prefect-aws

Source: https://docs-3.prefect.io/integrations/prefect-aws/index

Build production-ready data workflows that seamlessly integrate with AWS services.

`prefect-aws` provides battle-tested blocks, tasks, and infrastructure integrations for AWS, including ECS orchestration, S3 storage, Secrets Manager, Lambda functions, Batch computing, and Glue ETL operations.

## Why use prefect-aws?

`prefect-aws` offers significant advantages over direct boto3 integration:

* **Production-ready integrations**: Pre-built, tested components that handle common AWS patterns and edge cases
* **Unified credential management**: Secure, centralized authentication that works consistently across all AWS services
* **Built-in observability**: Automatic logging, monitoring, and state tracking for all AWS operations
* **Infrastructure as code**: Deploy and scale workflows on AWS ECS with minimal configuration

## Getting started

### Prerequisites

* An [AWS account](https://aws.amazon.com/account/) and the necessary permissions to access desired services.

### Install prefect-aws

The following command will install a version of `prefect-aws` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well.

```bash
pip install "prefect[aws]"
```

Upgrade to the latest versions of `prefect` and `prefect-aws`:

```bash
pip install -U "prefect[aws]"
```

## Blocks setup

### Credentials

Most AWS services require an authenticated session.
Prefect makes it simple to provide credentials via AWS Credentials blocks. Steps:

1. Refer to the [AWS Configuration documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-creds) to retrieve your access key ID and secret access key.
2. Copy the access key ID and secret access key.
3. Create an `AwsCredentials` block in the Prefect UI or use a Python script like the one below.

```python
from prefect_aws import AwsCredentials


AwsCredentials(
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
    aws_session_token=None,  # replace this with token if necessary
    region_name="us-east-2"
).save("BLOCK-NAME-PLACEHOLDER")
```

Prefect uses the Boto3 library under the hood. Any credential values not provided to the block are sourced at runtime in the order shown in the [Boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials). Prefect creates the session object using the values in the block, and any missing values are resolved by following that sequence.

### S3

Create a block for reading and writing files to S3.

```python
from prefect_aws import AwsCredentials
from prefect_aws.s3 import S3Bucket


# load the credentials block saved earlier
aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

S3Bucket(
    bucket_name="BUCKET-NAME-PLACEHOLDER",
    credentials=aws_credentials
).save("S3-BLOCK-NAME-PLACEHOLDER")
```

### Lambda

Invoke AWS Lambdas, synchronously or asynchronously.

```python
from prefect_aws.lambda_function import LambdaFunction
from prefect_aws.credentials import AwsCredentials


# load the credentials block saved earlier
credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

LambdaFunction(
    function_name="test_lambda_function",
    aws_credentials=credentials,
).save("LAMBDA-BLOCK-NAME-PLACEHOLDER")
```

### Secrets Manager

Create a block to read, write, and delete AWS Secrets Manager secrets.

```python
from prefect_aws import AwsCredentials
from prefect_aws.secrets_manager import AwsSecret


# load the credentials block saved earlier
credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

AwsSecret(
    secret_name="test_secret_name",
    aws_credentials=credentials,
).save("AWS-SECRET-BLOCK-NAME-PLACEHOLDER")
```

## Supported AWS services

`prefect-aws` provides comprehensive integrations for key AWS services:

| Service             | Integration Type           | Use Cases                                               |
| ------------------- | -------------------------- | ------------------------------------------------------- |
| **S3**              | `S3Bucket` block           | File storage, data lake operations, deployment storage  |
| **Secrets Manager** | `AwsSecret` block          | Secure credential storage, API key management           |
| **Lambda**          | `LambdaFunction` block     | Serverless function execution, event-driven processing  |
| **Glue**            | `GlueJobBlock` block       | ETL operations, data transformation pipelines           |
| **ECS**             | `ECSWorker` infrastructure | Container orchestration, scalable compute workloads     |
| **Batch**           | `batch_submit` task        | High-throughput computing, batch job processing         |

**Integration types:**

* **Blocks**: Reusable configuration objects that can be saved and shared across flows
* **Tasks**: Functions decorated with `@task` for direct use in flows
* **Workers**: Infrastructure components for running flows on AWS compute services

## Scale workflows with AWS infrastructure

### ECS (Elastic Container Service)

Deploy and scale your Prefect workflows on [AWS ECS](https://aws.amazon.com/ecs/) for production workloads.
`prefect-aws` provides: * **ECS worker**: Long-running worker for hybrid deployments with full control over execution environment * **Auto-scaling**: Dynamic resource allocation based on workflow demands * **Cost optimization**: Pay only for compute resources when workflows are running See the [ECS worker deployment guide](/integrations/prefect-aws/ecs_guide) for a step-by-step walkthrough of deploying production-ready workers to your ECS cluster. ### Docker Images Pre-built Docker images with `prefect-aws` are available for simplified deployment: ```bash docker pull prefecthq/prefect-aws:latest ``` #### Available Tags Image tags have the following format: * `prefecthq/prefect-aws:latest` - Latest stable release with Python 3.12 * `prefecthq/prefect-aws:latest-python3.11` - Latest stable with Python 3.11 * `prefecthq/prefect-aws:0.5.9-python3.12` - Specific prefect-aws version with Python 3.12 * `prefecthq/prefect-aws:0.5.9-python3.12-prefect3.4.9` - Full version specification #### Usage Examples **Running an ECS worker:** ```bash docker run -d \ --name prefect-ecs-worker \ -e PREFECT_API_URL=https://api.prefect.cloud/api/accounts/your-account/workspaces/your-workspace \ -e PREFECT_API_KEY=your-api-key \ prefecthq/prefect-aws:latest \ prefect worker start --pool ecs-pool ``` **Local development:** ```bash docker run -it --rm \ -v $(pwd):/opt/prefect \ prefecthq/prefect-aws:latest \ python your_flow.py ``` ## Examples ### Read and write files to AWS S3 Upload a file to an AWS S3 bucket and download the same file under a different filename. The following code assumes that the bucket already exists: ```python from pathlib import Path from prefect import flow from prefect_aws import AwsCredentials, S3Bucket @flow def s3_flow(): # create a dummy file to upload file_path = Path("test-example.txt") file_path.write_text("Hello, Prefect!") aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER") s3_bucket = S3Bucket( bucket_name="BUCKET-NAME-PLACEHOLDER", credentials=aws_credentials ) s3_bucket_path = s3_bucket.upload_from_path(file_path) downloaded_file_path = s3_bucket.download_object_to_path( s3_bucket_path, "downloaded-test-example.txt" ) return downloaded_file_path.read_text() if __name__ == "__main__": s3_flow() ``` ### Access secrets with AWS Secrets Manager Write a secret to AWS Secrets Manager, read the secret data, delete the secret, and return the secret data. 
```python from prefect import flow from prefect_aws import AwsCredentials, AwsSecret @flow def secrets_manager_flow(): aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER") aws_secret = AwsSecret(secret_name="test-example", aws_credentials=aws_credentials) aws_secret.write_secret(secret_data=b"Hello, Prefect!") secret_data = aws_secret.read_secret() aws_secret.delete_secret() return secret_data if __name__ == "__main__": secrets_manager_flow() ``` ### Invoke lambdas ```python from prefect_aws.lambda_function import LambdaFunction from prefect_aws.credentials import AwsCredentials credentials = AwsCredentials() lambda_function = LambdaFunction( function_name="test_lambda_function", aws_credentials=credentials, ) response = lambda_function.invoke( payload={"foo": "bar"}, invocation_type="RequestResponse", ) response["Payload"].read() ``` ### Submit AWS Glue jobs ```python from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.glue_job import GlueJobBlock @flow def example_run_glue_job(): aws_credentials = AwsCredentials( aws_access_key_id="your_access_key_id", aws_secret_access_key="your_secret_access_key" ) glue_job_run = GlueJobBlock( job_name="your_glue_job_name", arguments={"--YOUR_EXTRA_ARGUMENT": "YOUR_EXTRA_ARGUMENT_VALUE"}, ).trigger() return glue_job_run.wait_for_completion() if __name__ == "__main__": example_run_glue_job() ``` ### Submit AWS Batch jobs ```python from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.batch import batch_submit @flow def example_batch_submit_flow(): aws_credentials = AwsCredentials( aws_access_key_id="access_key_id", aws_secret_access_key="secret_access_key" ) job_id = batch_submit( "job_name", "job_queue", "job_definition", aws_credentials ) return job_id if __name__ == "__main__": example_batch_submit_flow() ``` ## Resources ### Documentation * **[prefect-aws SDK Reference](https://reference.prefect.io/prefect_aws/)** - Complete API documentation for all blocks and tasks * **[ECS Deployment Guide](/integrations/prefect-aws/ecs_guide)** - Step-by-step guide for deploying workflows on ECS * **[Prefect Secrets Management](/v3/develop/secrets)** - Using AWS credentials with third-party services ### AWS Resources * **[AWS Documentation](https://docs.aws.amazon.com/)** - Official AWS service documentation * **[Boto3 Documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)** - Python SDK reference for AWS services * **[AWS IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)** - Security recommendations for AWS access # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-aws/sdk # Azure Container Instances Worker Guide Source: https://docs-3.prefect.io/integrations/prefect-azure/aci_worker ## Why use ACI for flow run execution? ACI (Azure Container Instances) is a fully managed compute platform that streamlines running your Prefect flows on scalable, on-demand infrastructure on Azure. ## Prerequisites Before starting this guide, make sure you have: * An Azure account and user permissions for provisioning resource groups and container instances. * The `azure` CLI installed on your local machine. You can follow Microsoft's [installation guide](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli). * Docker installed on your local machine. ## Step 1. Create a resource group Azure resource groups serve as containers for managing groupings of Azure resources. 
Replace `<resource-group-name>` with a name of your choosing, and `<location>` with a valid Azure location name, such as `eastus`.

```bash
export RG_NAME=<resource-group-name> && \
az group create --name $RG_NAME --location <location>
```

Throughout the rest of the guide, we'll need to refer to the **scope** of the created resource group, which is a string describing where the resource group lives in the hierarchy of your Azure account.

To save the scope of your resource group as an environment variable, run the following command:

```bash
RG_SCOPE=$(az group show --name $RG_NAME --query id --output tsv)
```

You can check that the scope is correct before moving on by running `echo $RG_SCOPE` in your terminal. It should be formatted as follows:

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```

## Step 2. Prepare ACI permissions

In order for the worker to create, monitor, and delete the other container instances in which flows will run, we'll need to create a **custom role** and an **identity**, and then affiliate that role to the identity with a **role assignment**. When we start our worker, we'll assign that identity to the container instance it's running in.

### 1. Create a role

The custom `Container Instances Contributor` role has all the permissions your worker will need to run flows in other container instances. Create it by running the following command:

```bash
az role definition create --role-definition '{
  "Name": "Container Instances Contributor",
  "IsCustom": true,
  "Description": "Can create, delete, and monitor container instances.",
  "Actions": [
    "Microsoft.ManagedIdentity/userAssignedIdentities/assign/action",
    "Microsoft.Resources/deployments/*",
    "Microsoft.ContainerInstance/containerGroups/*"
  ],
  "NotActions": [
  ],
  "AssignableScopes": [
    '"\"$RG_SCOPE\""'
  ]
}'
```

### 2. Create an identity

Create a user-managed identity with the following command, replacing `<identity-name>` with the name you'd like to use for the identity:

```bash
export IDENTITY_NAME=<identity-name> && \
az identity create -g $RG_NAME -n $IDENTITY_NAME
```

We'll also need to save the principal ID and full object ID of the identity for the role assignment and container creation steps, respectively:

```bash
IDENTITY_PRINCIPAL_ID=$(az identity list --query "[?name=='$IDENTITY_NAME'].principalId" --output tsv) && \
IDENTITY_ID=$(az identity list --query "[?name=='$IDENTITY_NAME'].id" --output tsv)
```

### 3. Assign roles to the identity

Now let's assign the `Container Instances Contributor` role we created earlier to the new identity:

```bash
az role assignment create \
  --assignee $IDENTITY_PRINCIPAL_ID \
  --role "Container Instances Contributor" \
  --scope $RG_SCOPE
```

Since we'll be using ACR to host a custom Docker image containing a Prefect flow later in the guide, let's also assign the built-in `AcrPull` role to the identity:

```bash
az role assignment create \
  --assignee $IDENTITY_PRINCIPAL_ID \
  --role "AcrPull" \
  --scope $RG_SCOPE
```

## Step 3. Create the worker container instance

Before running this command, set your `PREFECT_API_URL` and `PREFECT_API_KEY` as environment variables:

```bash
export PREFECT_API_URL=<prefect-api-url> PREFECT_API_KEY=<prefect-api-key>
```

Running the following command will create a container instance in your Azure resource group that will start a Prefect ACI worker. If there is not already a work pool in Prefect with the name you chose, a work pool will also be created.

Replace `<work-pool-name>` with the name of the ACI work pool you want to create in Prefect. Here we're using the work pool name as the name of the container instance in Azure as well, but you may name it something else if you prefer.
```bash
az container create \
  --name <work-pool-name> \
  --resource-group $RG_NAME \
  --assign-identity $IDENTITY_ID \
  --image "prefecthq/prefect:3-python3.12" \
  --secure-environment-variables PREFECT_API_URL=$PREFECT_API_URL PREFECT_API_KEY=$PREFECT_API_KEY \
  --command-line "/bin/bash -c 'pip install prefect-azure && prefect worker start --pool <work-pool-name> --type azure-container-instance'"
```

This container instance uses default networking and security settings. For advanced configuration, refer to the `az container create` [CLI reference](https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-create).

## Step 4. Create an ACR registry

In order to build and push images containing flow code to Azure, we'll need a container registry. Create one with the following command, replacing `<registry-name>` with the registry name of your choosing:

```bash
export REGISTRY_NAME=<registry-name> && \
az acr create --resource-group $RG_NAME \
  --name $REGISTRY_NAME --sku Basic
```

## Step 5. Update your ACI work pool configuration

Once your work pool is created, navigate to the Edit page of your ACI work pool. You will need to update the following fields:

### Identities

This will be your `IDENTITY_ID`. You can get it from your terminal by running `echo $IDENTITY_ID`. When adding it to your work pool, it should be formatted as a JSON array:

```
["/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"]
```

Configuring an ACI work pool's identities.

### ACRManagedIdentity

ACRManagedIdentity is required for your flow code containers to be pulled from ACR. It consists of the following:

* Identity: the same `IDENTITY_ID` as above, as a string

```
/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
```

* Registry URL: your `<registry-name>`, followed by `.azurecr.io`

```
<registry-name>.azurecr.io
```

Configuring an ACI work pool's ACR Managed Identity.

### Subscription ID and resource group name

Both the subscription ID and resource group name can be found in the `RG_SCOPE` environment variable created earlier in the guide. View their values by running `echo $RG_SCOPE`:

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```

Configuring an ACI work pool.

Then click Save.

## Step 6. Pick up a flow run with your new worker

This guide uses ACR to store a Docker image containing your flow code. Write a flow, then deploy it using `flow.deploy()`, which will copy flow code into a Docker image and push that image to an ACR registry.

### 1. Log in to ACR

Use the following commands to log in to ACR:

```
TOKEN=$(az acr login --name $REGISTRY_NAME --expose-token --output tsv --query accessToken)
```

```
docker login $REGISTRY_NAME.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN
```

### 2. Write and deploy a simple test flow

Create and run the following script to deploy your flow. Be sure to replace `<registry-name>` and `<work-pool-name>` with the appropriate values.

`my_flow.py`

```python
from prefect import flow
from prefect.logging import get_run_logger
from prefect.docker import DockerImage


@flow
def my_flow():
    logger = get_run_logger()
    logger.info("Hello from ACI!")


if __name__ == "__main__":
    my_flow.deploy(
        name="aci-deployment",
        image=DockerImage(
            name="<registry-name>.azurecr.io/example:latest",
            platform="linux/amd64",
        ),
        work_pool_name="<work-pool-name>",
    )
```

### 3. Find the deployment in the UI and click the **Quick Run** button!

# prefect-azure

Source: https://docs-3.prefect.io/integrations/prefect-azure/index

`prefect-azure` makes it easy to leverage the capabilities of Azure in your workflows.
For example, you can retrieve secrets, read and write Blob Storage objects, and deploy your flows on Azure Container Instances (ACI). ## Getting started ### Prerequisites * An [Azure account](https://azure.microsoft.com/) and the necessary permissions to access desired services. ### Install `prefect-azure` The following command will install a version of `prefect-azure` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[azure]" ``` Upgrade to the latest versions of `prefect` and `prefect-azure`: ```bash pip install -U "prefect[azure]" ``` If necessary, see [additional installation options for Blob Storage, Cosmos DB, and ML Datastore](#additional-installation-options). To install prefect-azure with all additional capabilities, run the install command above and then run the following command: ```bash pip install "prefect-azure[all_extras]" ``` ### Register newly installed block types Register the block types in the module to make them available for use. ```bash prefect block register -m prefect_azure ``` ## Examples ### Download a blob ```python from prefect import flow from prefect_azure import AzureBlobStorageCredentials from prefect_azure.blob_storage import blob_storage_download @flow def example_blob_storage_download_flow(): connection_string = "connection_string" blob_storage_credentials = AzureBlobStorageCredentials( connection_string=connection_string, ) data = blob_storage_download( blob="prefect.txt", container="prefect", blob_storage_credentials=blob_storage_credentials, ) return data example_blob_storage_download_flow() ``` Use `with_options` to customize options on any existing task or flow: ```python custom_blob_storage_download_flow = example_blob_storage_download_flow.with_options( name="My custom task name", retries=2, retry_delay_seconds=10, ) ``` ### Run flows on Azure Container Instances Run flows on [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/) to dynamically scale your infrastructure. See the [Azure Container Instances Worker Guide](/integrations/prefect-azure/aci_worker/) for a walkthrough of using ACI in a hybrid work pool. If you're using Prefect Cloud, [ACI push work pools](/v3/how-to-guides/deployment_infra/serverless#azure-container-instances) provide all the benefits of ACI with a quick setup and no worker needed. ## Resources For assistance using Azure, consult the [Azure documentation](https://learn.microsoft.com/en-us/azure). Refer to the `prefect-azure` API documentation linked in the sidebar to explore all the capabilities of the `prefect-azure` library. ### Additional installation options First install the main library compatible with your `prefect` version: ```bash pip install "prefect[azure]" ``` Then install the additional capabilities you need. To use Blob Storage: ```bash pip install "prefect-azure[blob_storage]" ``` To use Cosmos DB: ```bash pip install "prefect-azure[cosmos_db]" ``` To use ML Datastore: ```bash pip install "prefect-azure[ml_datastore]" ``` # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-azure/sdk # prefect-bitbucket Source: https://docs-3.prefect.io/integrations/prefect-bitbucket/index The `prefect-bitbucket` library makes it easy to interact with Bitbucket repositories and credentials. ## Getting started ### Prerequisites * A [Bitbucket account](https://bitbucket.org/product). 
### Install `prefect-bitbucket`

The following command will install a version of `prefect-bitbucket` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well.

```bash
pip install "prefect[bitbucket]"
```

Upgrade to the latest versions of `prefect` and `prefect-bitbucket`:

```bash
pip install -U "prefect[bitbucket]"
```

### Register newly installed block types

Register the block types in the `prefect-bitbucket` module to make them available for use.

```bash
prefect block register -m prefect_bitbucket
```

## Examples

In the examples below, you create blocks with Python code. Alternatively, blocks can be created through the Prefect UI.

## Store deployment flow code in a private Bitbucket repository

To create and run a deployment whose flow code is stored in a private Bitbucket repository, you can use the `BitBucketCredentials` block.

A deployment can use flow code stored in a Bitbucket repository without using this library in either of the following cases:

* The repository is public
* The deployment uses a [Secret block](/v3/develop/secrets) to store the token

Create a Bitbucket Credentials block:

```python
from prefect_bitbucket import BitBucketCredentials


bitbucket_credentials_block = BitBucketCredentials(token="x-token-auth:my-token")
bitbucket_credentials_block.save(name="my-bitbucket-credentials-block")
```

**Difference between Bitbucket Server and Bitbucket Cloud authentication**

If using a token to authenticate to Bitbucket Cloud, only set the `token` to authenticate. Do not include a value in the `username` field or authentication will fail. If using Bitbucket Server, provide both the `token` and `username` values.

### Access flow code stored in a private Bitbucket repository in a deployment

Use the credentials block you created above to pass the Bitbucket access token during deployment creation. The code below assumes there's flow code stored in a private Bitbucket repository.

```python
from prefect import flow
from prefect.runner.storage import GitRepository
from prefect_bitbucket import BitBucketCredentials


if __name__ == "__main__":

    source = GitRepository(
        url="https://bitbucket.org/org/private-repo.git",
        credentials=BitBucketCredentials.load("my-bitbucket-credentials-block")
    )

    flow.from_source(
        source=source,
        entrypoint="my_file.py:my_flow",
    ).deploy(
        name="private-bitbucket-deploy",
        work_pool_name="my_pool",
    )
```

Alternatively, if you use a `prefect.yaml` file to create the deployment, reference the Bitbucket Credentials block in the `pull` step:

```yaml
pull:
    - prefect.deployments.steps.git_clone:
        repository: https://bitbucket.org/org/private-repo.git
        credentials: "{{ prefect.blocks.bitbucket-credentials.my-bitbucket-credentials-block }}"
```

### Interact with a Bitbucket repository

The code below shows how to reference a particular branch or tag of a Bitbucket repository.

```python
from prefect_bitbucket import BitbucketRepository


def save_bitbucket_block():
    bitbucket_block = BitbucketRepository(
        repository="https://bitbucket.org/testing/my-repository.git",
        reference="branch-or-tag-name",
    )

    bitbucket_block.save("my-bitbucket-block")


if __name__ == "__main__":
    save_bitbucket_block()
```

Exclude the `reference` field to use the default branch. Reference a BitBucketCredentials block for authentication if the repository is private.

Use the newly created block to interact with the Bitbucket repository.
For example, download the repository contents with the `.get_directory()` method like this: ```python from prefect_bitbucket.repositories import BitbucketRepository def fetch_repo(): bitbucket_block = BitbucketRepository.load("my-bitbucket-block") bitbucket_block.get_directory() if __name__ == "__main__": fetch_repo() ``` ## Resources For assistance using Bitbucket, consult the [Bitbucket documentation](https://bitbucket.org/product/guides). Refer to the `prefect-bitbucket` [SDK documentation](https://reference.prefect.io/prefect_bitbucket/) to explore all the capabilities of the `prefect-bitbucket` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-bitbucket/sdk # prefect-dask Source: https://docs-3.prefect.io/integrations/prefect-dask/index Accelerate your workflows by running tasks in parallel with Dask Dask can run your tasks in parallel and distribute them over multiple machines. The `prefect-dask` integration makes it easy to accelerate your flow runs with Dask. ## Getting started ### Install `prefect-dask` The following command will install a version of `prefect-dask` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip pip install "prefect[dask]" ``` ```bash uv uv pip install "prefect[dask]" ``` Upgrade to the latest versions of `prefect` and `prefect-dask`: ```bash pip pip install -U "prefect[dask]" ``` ```bash uv uv pip install -U "prefect[dask]" ``` ## Why use Dask? Say your flow downloads many images to train a machine learning model. It takes longer than you'd like for the flow to run because it executes sequentially. To accelerate your flow code, parallelize it with `prefect-dask` in three steps: 1. Add the import: `from prefect_dask import DaskTaskRunner` 2. Specify the task runner in the flow decorator: `@flow(task_runner=DaskTaskRunner)` 3. 
Submit tasks to the flow's task runner: `a_task.submit(*args, **kwargs)` Below is code with and without the DaskTaskRunner: ```python # Completed in 15.2 seconds from typing import List from pathlib import Path import httpx from prefect import flow, task URL_FORMAT = ( "https://www.cpc.ncep.noaa.gov/products/NMME/archive/" "{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png" ) @task def download_image(year: int, month: int, directory: Path) -> Path: # download image from URL url = URL_FORMAT.format(year=year, month=month) resp = httpx.get(url) # save content to directory/YYYYMM.png file_path = (directory / url.split("/")[-1]).with_stem(f"{year:04d}{month:02d}") file_path.write_bytes(resp.content) return file_path @flow def download_nino_34_plumes_from_year(year: int) -> List[Path]: # create a directory to hold images directory = Path("data") directory.mkdir(exist_ok=True) # download all images file_paths = [] for month in range(1, 12 + 1): file_path = download_image(year, month, directory) file_paths.append(file_path) return file_paths if __name__ == "__main__": download_nino_34_plumes_from_year(2022) ``` ```python # Completed in 5.7 seconds from typing import List from pathlib import Path import httpx from prefect import flow, task from prefect_dask import DaskTaskRunner URL_FORMAT = ( "https://www.cpc.ncep.noaa.gov/products/NMME/archive/" "{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png" ) @task def download_image(year: int, month: int, directory: Path) -> Path: # download image from URL url = URL_FORMAT.format(year=year, month=month) resp = httpx.get(url) # save content to directory/YYYYMM.png file_path = (directory / url.split("/")[-1]).with_stem(f"{year:04d}{month:02d}") file_path.write_bytes(resp.content) return file_path @flow(task_runner=DaskTaskRunner(cluster_kwargs={"processes": False})) def download_nino_34_plumes_from_year(year: int) -> List[Path]: # create a directory to hold images directory = Path("data") directory.mkdir(exist_ok=True) # download all images file_paths = [] for month in range(1, 12 + 1): file_path = download_image.submit(year, month, directory) file_paths.append(file_path) return file_paths if __name__ == "__main__": download_nino_34_plumes_from_year(2022) ``` In our tests, the flow run took 15.2 seconds to execute sequentially. Using the `DaskTaskRunner` reduced the runtime to **5.7** seconds! ## Run tasks on Dask The `DaskTaskRunner` is a [task runner](/v3/develop/task-runners) that submits tasks to the [`dask.distributed`](http://distributed.dask.org/) scheduler. By default, when the `DaskTaskRunner` is specified for a flow run, a temporary Dask cluster is created and used for the duration of the flow run. If you already have a Dask cluster running, either cloud-hosted or local, you can provide the connection URL with the `address` kwarg. `DaskTaskRunner` accepts the following optional parameters: | Parameter | Description | | --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | address | Address of a currently running Dask scheduler. | | cluster\_class | The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, `"distributed.LocalCluster"`), or the class itself. | | cluster\_kwargs | Additional kwargs to pass to the `cluster_class` when creating a temporary Dask cluster. 
| | adapt\_kwargs | Additional kwargs to pass to `cluster.adapt` when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if `adapt_kwargs` are provided. | | client\_kwargs | Additional kwargs to use when creating a [`dask.distributed.Client`](https://distributed.dask.org/en/latest/api.html#client). | **Multiprocessing safety** Because the `DaskTaskRunner` uses multiprocessing, calls to flows in scripts must be guarded with `if __name__ == "__main__":` or you will encounter warnings and errors. If you don't provide the `address` of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that work well for most workloads. To specify this explicitly, pass values for `n_workers` or `threads_per_worker` to `cluster_kwargs`: ```python from prefect_dask import DaskTaskRunner # Use 4 worker processes, each with 2 threads DaskTaskRunner( cluster_kwargs={"n_workers": 4, "threads_per_worker": 2} ) ``` ### Use a temporary cluster The `DaskTaskRunner` can create a temporary cluster using any of [Dask's cluster-manager options](https://docs.dask.org/en/latest/setup.html). This is useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling. To configure it, provide a `cluster_class`. This can be: * A string specifying the import path to the cluster class (for example, `"dask_cloudprovider.aws.FargateCluster"`) * The cluster class itself * A function for creating a custom cluster You can also configure `cluster_kwargs`. This takes a dictionary of keyword arguments to pass to `cluster_class` when starting the flow run. For example, to configure a flow to use a temporary `dask_cloudprovider.aws.FargateCluster` with four workers running with an image named `my-prefect-image`: ```python from prefect_dask import DaskTaskRunner DaskTaskRunner( cluster_class="dask_cloudprovider.aws.FargateCluster", cluster_kwargs={"n_workers": 4, "image": "my-prefect-image"}, ) ``` For larger workloads, you can accelerate execution further by distributing task runs over multiple machines. ### Connect to an existing cluster Multiple Prefect flow runs can use the same existing Dask cluster. You might manage a single long-running Dask cluster (for example, using the Dask [Helm Chart](https://docs.dask.org/en/latest/setup/kubernetes-helm.html)) and configure flows to connect to it during execution. This has disadvantages compared to using a temporary Dask cluster: * All workers in the cluster must have dependencies installed for all flows you intend to run. * Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but you may still run into issues. Still, you may prefer managing a single long-running Dask cluster. To configure a `DaskTaskRunner` to connect to an existing cluster, pass in the address of the scheduler to the `address` argument: ```python from prefect_dask import DaskTaskRunner @flow(task_runner=DaskTaskRunner(address="http://my-dask-cluster")) def my_flow(): ... ``` Suppose you have an existing Dask client/cluster such as a `dask.dataframe.DataFrame`. With `prefect-dask`, it takes just a few steps: 1. Add imports 2. Add `task` and `flow` decorators 3. Use `get_dask_client` context manager to distribute work across Dask workers 4. Specify the task runner and client's address in the flow decorator 5. 
Submit the tasks to the flow's task runner ```python import dask.dataframe import dask.distributed client = dask.distributed.Client() def read_data(start: str, end: str) -> dask.dataframe.DataFrame: df = dask.datasets.timeseries(start, end, partition_freq="4w") return df def process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame: df_yearly_avg = df.groupby(df.index.year).mean() return df_yearly_avg.compute() def dask_pipeline(): df = read_data("1988", "2022") df_yearly_average = process_data(df) return df_yearly_average if __name__ == "__main__": dask_pipeline() ``` ```python import dask.dataframe import dask.distributed from prefect import flow, task from prefect_dask import DaskTaskRunner, get_dask_client client = dask.distributed.Client() @task def read_data(start: str, end: str) -> dask.dataframe.DataFrame: df = dask.datasets.timeseries(start, end, partition_freq="4w") return df @task def process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame: with get_dask_client(): df_yearly_avg = df.groupby(df.index.year).mean() return df_yearly_avg.compute() @flow(task_runner=DaskTaskRunner(address=client.scheduler.address)) def dask_pipeline(): df = read_data.submit("1988", "2022") df_yearly_average = process_data.submit(df) return df_yearly_average if __name__ == "__main__": dask_pipeline() ``` ### Configure adaptive scaling A key feature of using a `DaskTaskRunner` is the ability to scale adaptively to the workload. Instead of specifying `n_workers` as a fixed number, you can specify a minimum and maximum number of workers to use, and the Dask cluster scales up and down as needed. To do this, pass `adapt_kwargs` to `DaskTaskRunner`. This takes the following fields: * `maximum` (`int` or `None`, optional): the maximum number of workers to scale to. Set to `None` for no maximum. * `minimum` (`int` or `None`, optional): the minimum number of workers to scale to. Set to `None` for no minimum. For example, this configures a flow to run on a `FargateCluster` scaling up to a maximum of 10 workers: ```python from prefect_dask import DaskTaskRunner DaskTaskRunner( cluster_class="dask_cloudprovider.aws.FargateCluster", adapt_kwargs={"maximum": 10} ) ``` ### Use Dask annotations Use Dask annotations to further control the behavior of tasks. For example, set the [priority](http://distributed.dask.org/en/stable/priority.html) of tasks in the Dask scheduler: ```python import dask from prefect import flow, task from prefect_dask.task_runners import DaskTaskRunner @task def show(x): print(x) @flow(task_runner=DaskTaskRunner()) def my_flow(): with dask.annotate(priority=-10): future = show.submit(1) # low priority task with dask.annotate(priority=10): future = show.submit(2) # high priority task ``` Another common use case is [resource](http://distributed.dask.org/en/stable/resources.html) annotations: ```python import dask from prefect import flow, task from prefect_dask.task_runners import DaskTaskRunner @task def show(x): print(x) # Create a `LocalCluster` with some resource annotations # Annotations are abstract in dask and not inferred from your system. 
# Here, we claim that our system has 1 GPU and 1 process available per worker @flow( task_runner=DaskTaskRunner( cluster_kwargs={"n_workers": 1, "resources": {"GPU": 1, "process": 1}} ) ) def my_flow(): with dask.annotate(resources={'GPU': 1}): future = show(0) # this task requires 1 GPU resource on a worker with dask.annotate(resources={'process': 1}): # These tasks each require 1 process on a worker; because we've # specified that our cluster has 1 process per worker and 1 worker, # these tasks will run sequentially future = show(1) future = show(2) future = show(3) if __name__ == "__main__": my_flow() ``` ## Additional Resources Refer to the `prefect-dask` [SDK documentation](https://reference.prefect.io/prefect_dask/) to explore all the capabilities of the `prefect-dask` library. For assistance using Dask, consult the [Dask documentation](https://docs.dask.org/en/stable/) **Resolving futures in sync client** Note, by default, `dask_collection.compute()` returns concrete values while `client.compute(dask_collection)` returns Dask Futures. Therefore, if you call `client.compute`, you must resolve all futures before exiting out of the context manager by either: 1. setting `sync=True` ```python with get_dask_client() as client: df = dask.datasets.timeseries("2000", "2001", partition_freq="4w") summary_df = client.compute(df.describe(), sync=True) ``` 2. calling `result()` ```python with get_dask_client() as client: df = dask.datasets.timeseries("2000", "2001", partition_freq="4w") summary_df = client.compute(df.describe()).result() ``` For more information, visit the docs on [Waiting on Futures](https://docs.dask.org/en/stable/futures.html#waiting-on-futures). There is also an equivalent context manager for asynchronous tasks and flows: `get_async_dask_client`. When using the async client, you must `await client.compute(dask_collection)` before exiting the context manager. Note that task submission (`.submit()`) and future resolution (`.result()`) are always synchronous operations in Prefect, even when working with async tasks and flows. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-dask/sdk # prefect-databricks Source: https://docs-3.prefect.io/integrations/prefect-databricks/index Prefect integrations for interacting with Databricks. ## Getting started ### Prerequisites * A [Databricks account](https://databricks.com/) and the necessary permissions to access desired services. ### Install `prefect-databricks` The following command will install a version of `prefect-databricks` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. 
```bash pip install "prefect[databricks]" ``` Upgrade to the latest versions of `prefect` and `prefect-databricks`: ```bash pip install -U "prefect[databricks]" ``` ### List jobs on the Databricks instance ```python from prefect import flow from prefect_databricks import DatabricksCredentials from prefect_databricks.jobs import jobs_list @flow def example_execute_endpoint_flow(): databricks_credentials = DatabricksCredentials.load("my-block") jobs = jobs_list( databricks_credentials, limit=5 ) return jobs if __name__ == "__main__": example_execute_endpoint_flow() ``` ### Use `with_options` to customize options on any existing task or flow ```python custom_example_execute_endpoint_flow = example_execute_endpoint_flow.with_options( name="My custom flow name", retries=2, retry_delay_seconds=10, ) ``` ### Launch a new cluster and run a Databricks notebook Notebook named `example.ipynb` on Databricks which accepts a name parameter: ```python name = dbutils.widgets.get("name") message = f"Don't worry {name}, I got your request! Welcome to prefect-databricks!" print(message) ``` Prefect flow that launches a new cluster to run `example.ipynb`: ```python from prefect import flow from prefect_databricks import DatabricksCredentials from prefect_databricks.jobs import jobs_runs_submit from prefect_databricks.models.jobs import ( AutoScale, AwsAttributes, JobTaskSettings, NotebookTask, NewCluster, ) @flow def jobs_runs_submit_flow(notebook_path, **base_parameters): databricks_credentials = DatabricksCredentials.load("my-block") # specify new cluster settings aws_attributes = AwsAttributes( availability="SPOT", zone_id="us-west-2a", ebs_volume_type="GENERAL_PURPOSE_SSD", ebs_volume_count=3, ebs_volume_size=100, ) auto_scale = AutoScale(min_workers=1, max_workers=2) new_cluster = NewCluster( aws_attributes=aws_attributes, autoscale=auto_scale, node_type_id="m4.large", spark_version="10.4.x-scala2.12", spark_conf={"spark.speculation": True}, ) # specify notebook to use and parameters to pass notebook_task = NotebookTask( notebook_path=notebook_path, base_parameters=base_parameters, ) # compile job task settings job_task_settings = JobTaskSettings( new_cluster=new_cluster, notebook_task=notebook_task, task_key="prefect-task" ) run = jobs_runs_submit( databricks_credentials=databricks_credentials, run_name="prefect-job", tasks=[job_task_settings] ) return run if __name__ == "__main__": jobs_runs_submit_flow("/Users/username@gmail.com/example.ipynb", name="Marvin") ``` Note, instead of using the built-in models, you may also input valid JSON. For example, `AutoScale(min_workers=1, max_workers=2)` is equivalent to `{"min_workers": 1, "max_workers": 2}`. ## Resources For assistance using Databricks, consult the [Databricks documentation](https://www.databricks.com/databricks-documentation). Refer to the `prefect-databricks` [SDK documentation](https://reference.prefect.io/prefect_databricks/) to explore all the capabilities of the `prefect-databricks` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-databricks/sdk # prefect-dbt Source: https://docs-3.prefect.io/integrations/prefect-dbt/index With `prefect-dbt`, you can trigger and observe dbt Cloud jobs, execute dbt Core CLI commands, and incorporate other tools, such as [Snowflake](/integrations/prefect-snowflake/index), into your dbt runs. Prefect provides a global view of the state of your workflows and allows you to take action based on state changes. 
Prefect integrations may provide pre-built [blocks](/v3/develop/blocks), [flows](/v3/develop/write-flows), or [tasks](/v3/develop/write-tasks) for interacting with external systems. Block types in this library allow you to do things such as run a dbt Cloud job or execute a dbt Core command. ## Getting started ### Prerequisites * A [dbt Cloud account](https://cloud.getdbt.com/) if using dbt Cloud. ### Install `prefect-dbt` The following command will install a version of `prefect-dbt` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[dbt]" ``` Upgrade to the latest versions of `prefect` and `prefect-dbt`: ```bash pip install -U "prefect[dbt]" ``` If necessary, see [additional installation options for dbt Core with BigQuery, Snowflake, and Postgres](#additional-installation-options). ### Register newly installed blocks types Register the block types in the `prefect-dbt` module to make them available for use. ```bash prefect block register -m prefect_dbt ``` ## dbt Cloud If you have an existing dbt Cloud job, use the pre-built flow `run_dbt_cloud_job` to trigger a job run and wait until the job run is finished. If some nodes fail, `run_dbt_cloud_job` can efficiently retry the unsuccessful nodes. Prior to running this flow, save your dbt Cloud credentials to a DbtCloudCredentials block and create a dbt Cloud Job block: ### Save dbt Cloud credentials to a block Blocks can be [created through code](/v3/develop/blocks) or through the UI. To create a dbt Cloud Credentials block: 1. Log into your [dbt Cloud account](https://cloud.getdbt.com/settings/profile). 2. Click **API Tokens** on the sidebar. 3. Copy a Service Token. 4. Copy the account ID from the URL: `https://cloud.getdbt.com/settings/accounts/`. 5. Create and run the following script, replacing the placeholders: ```python from prefect_dbt.cloud import DbtCloudCredentials DbtCloudCredentials( api_key="API-KEY-PLACEHOLDER", account_id="ACCOUNT-ID-PLACEHOLDER" ).save("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") ``` ### Create a dbt Cloud job block 1. In dbt Cloud, click on **Deploy** -> **Jobs**. 2. Select a job. 3. Copy the job ID from the URL: `https://cloud.getdbt.com/deploy//projects//jobs/` 4. Create and run the following script, replacing the placeholders. ```python from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob dbt_cloud_credentials = DbtCloudCredentials.load("CREDENTIALS-BLOCK-PLACEHOLDER") dbt_cloud_job = DbtCloudJob( dbt_cloud_credentials=dbt_cloud_credentials, job_id="JOB-ID-PLACEHOLDER" ).save("JOB-BLOCK-NAME-PLACEHOLDER") ``` ### Run a dbt Cloud job and wait for completion ```python from prefect import flow from prefect_dbt.cloud import DbtCloudJob from prefect_dbt.cloud.jobs import run_dbt_cloud_job import asyncio @flow async def run_dbt_job_flow(): result = await run_dbt_cloud_job( dbt_cloud_job = await DbtCloudJob.load("JOB-BLOCK-NAME-PLACEHOLDER"), targeted_retries = 0, ) return await result if __name__ == "__main__": asyncio.run(run_dbt_job_flow()) ``` ## dbt Core ### prefect-dbt 0.7.0 and later Versions 0.7.0 and later of `prefect-dbt` include the `PrefectDbtRunner` class, which provides an improved interface for running dbt Core commands with better logging, failure handling, and automatic asset lineage. The `PrefectDbtRunner` is inspired by the `DbtRunner` from dbt Core, and its `invoke` method accepts the same arguments. 
Refer to the [`DbtRunner` documentation](https://docs.getdbt.com/reference/programmatic-invocations) for more information on how to call `invoke`. Basic usage: ```python from prefect import flow from prefect_dbt import PrefectDbtRunner @flow def run_dbt(): PrefectDbtRunner().invoke(["build"]) if __name__ == "__main__": run_dbt() ``` When calling `.invoke()` in a flow or task, each node in dbt's execution graph is reflected as a task in Prefect's execution graph. Logs from each node will belong to the corresponding task, and each task's state is determined by the state of that node's execution. ```bash 15:54:59.119 | INFO | Flow run 'imposing-partridge' - Found 8 models, 3 seeds, 18 data tests, 543 macros 15:54:59.134 | INFO | Flow run 'imposing-partridge' - 15:54:59.148 | INFO | Flow run 'imposing-partridge' - Concurrency: 1 threads (target='dev') 15:54:59.164 | INFO | Flow run 'imposing-partridge' - 15:54:59.665 | INFO | Task run 'model my_first_dbt_model' - 1 of 29 OK created sql table model main.my_first_dbt_model ..................... [OK in 0.18s] 15:54:59.671 | INFO | Task run 'model my_first_dbt_model' - Finished in state Completed() ... 15:55:02.373 | ERROR | Task run 'model product_metrics' - Runtime Error in model product_metrics (models/marts/product/product_metrics.sql) Binder Error: Values list "o" does not have a column named "product_id" LINE 47: on p.product_id = o.product_id 15:55:02.857 | ERROR | Task run 'model product_metrics' - Finished in state Failed('Task run encountered an exception Exception: Node model.demo.product_metrics finished with status error') ``` The task runs created by calling `.invoke()` run separately from dbt Core, and do not affect dbt's execution behavior. These tasks do not persist results and cannot be cached. Use [dbt's native retry functionality](https://docs.getdbt.com/reference/commands/retry) in combination with [runtime data from `prefect`](/v3/how-to-guides/workflows/access-runtime-info) to retry failed nodes. ```python from prefect import flow from prefect.runtime.flow_run import get_run_count from prefect_dbt import PrefectDbtRunner @flow(retries=2) def run_dbt(): runner = PrefectDbtRunner() if get_run_count() == 1: runner.invoke(["build"]) else: runner.invoke(["retry"]) if __name__ == "__main__": run_dbt() ``` #### Assets Prefect Cloud maintains a graph of [assets](/v3/concepts/assets), objects produced by your workflows. Any dbt seed, source or model will appear on your asset graph in Prefect Cloud once it has been executed using the `PrefectDbtRunner`. The upstream dependencies of an asset materialized by `prefect-dbt` are derived from the `depends_on` field in dbt's `manifest.json`. The asset's `key` will be its corresponding dbt resource's `relation_name`. The `name` and `description` asset properties are populated by a dbt resource's name description. The `owners` asset property is populated if there is data assigned to the `owner` key under a resoure's `meta` config. ```yaml models: - name: product_metrics description: "Product metrics and categorization" config: meta: owner: "kevin-g" ``` Asset metadata is collected from the result of the node's execution. 
```json { "node_path": "marts/product/product_metrics.sql", "node_name": "product_metrics", "unique_id": "model.demo.product_metrics", "resource_type": "model", "materialized": "table", "node_status": "error", "node_started_at": "2025-06-26T20:55:05.661126", "node_finished_at": "2025-06-26T20:55:05.733257", "meta": { "owner": "kevin-g" }, "node_relation": { "database": "dev", "schema": "main_marts", "alias": "product_metrics", "relation_name": "\"dev\".\"main_marts\".\"product_metrics\"" } } ``` Optionally, the compiled code of a dbt model can be appended to the asset description. ```python from prefect import flow from prefect_dbt import PrefectDbtRunner @flow def run_dbt(): PrefectDbtRunner(include_compiled_code=True).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### dbt settings The `PrefectDbtSettings` class, based on Pydantic's `BaseSettings` class, automatically detects `DBT_`-prefixed environment variables that have a direct effect on the `PrefectDbtRunner` class. If no environment variables are set, dbt's defaults are used. Provide a `PrefectDbtSettings` instance to `PrefectDbtRunner` to customize dbt settings or override environment variables. ```python from prefect import flow from prefect_dbt import PrefectDbtRunner, PrefectDbtSettings @flow def run_dbt(): PrefectDbtRunner( settings=PrefectDbtSettings( project_dir="test", profiles_dir="examples/run_dbt" ) ).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### Logging The `PrefectDbtRunner` class maps all dbt log levels to standard Python logging levels, so filtering for log levels like `WARNING` or `ERROR` in the Prefect UI applies to dbt's logs. By default, the logging level used by dbt is Prefect's logging level, which can be configured using the `PREFECT_LOGGING_LEVEL` Prefect setting. The dbt logging level can be set independently from Prefect's by using the `DBT_LOG_LEVEL` environment variable, setting `log_level` in `PrefectDbtSettings`, or passing the `--log-level` flag or `log_level` kwarg to `.invoke()`. Only logging levels of higher severity (more restrictive) than Prefect's logging level will have an effect. ```python from dbt_common.events.base_types import EventLevel from prefect import flow from prefect_dbt import PrefectDbtRunner, PrefectDbtSettings @flow def run_dbt(): PrefectDbtRunner( settings=PrefectDbtSettings( project_dir="test", profiles_dir="examples/run_dbt", log_level=EventLevel.ERROR, # explicitly choose a higher log level for dbt ) ).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### `profiles.yml` templating The `PrefectDbtRunner` class supports templating in your `profiles.yml` file, allowing you to reference Prefect blocks and variables that will be resolved at runtime. This enables you to store sensitive credentials securely using Prefect blocks, and configure different targets based on the Prefect workspace. For example, a Prefect variable called `target` can have a different value in development (`dev`) and production (`prod`) workspaces. This allows you to use the same `profiles.yml` file to automatically reference a local DuckDB instance in development and a Snowflake instance in production. 
```yaml example: outputs: dev: type: duckdb path: dev.duckdb threads: 1 prod: type: snowflake account: "{{ prefect.blocks.snowflake-credentials.warehouse-access.account }}" user: "{{ prefect.blocks.snowflake-credentials.warehouse-access.user }}" password: "{{ prefect.blocks.snowflake-credentials.warehouse-access.password }}" database: "{{ prefect.blocks.snowflake-connector.prod-connector.database }}" schema: "{{ prefect.blocks.snowflake-connector.prod-connector.schema }}" warehouse: "{{ prefect.blocks.snowflake-connector.prod-connector.warehouse }}" threads: 4 target: "{{ prefect.variables.target }}" ``` #### Failure handling By default, any dbt node execution failures cause the entire dbt run to raise an exception with a message containing detailed information about the failure. ``` Failures detected during invocation of dbt command 'build': Test not_null_my_first_dbt_model_id failed with message: "Got 1 result, configured to fail if != 0" ``` The `PrefectDbtRunner`'s `raise_on_failure` option can be set to `False` to prevent failures in dbt from causing the failure of the flow or task in which `.invoke()` is called. ```python from prefect import flow from prefect_dbt import PrefectDbtRunner @flow def run_dbt(): PrefectDbtRunner( raise_on_failure=False # Failed tests will not fail the flow run ).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### Native dbt configuration You can disable automatic asset lineage detection for all resources in your dbt project config, or for specific resources in their own config: ```yaml prefect: enable_assets: False ``` ### prefect-dbt 0.6.6 and earlier `prefect-dbt` supports a couple of ways to run dbt Core commands. A `DbtCoreOperation` block will run the commands as shell commands, while other tasks use dbt's [Programmatic Invocation](#programmatic-invocation). Optionally, specify the `project_dir`. If `profiles_dir` is not set, the `DBT_PROFILES_DIR` environment variable will be used. If `DBT_PROFILES_DIR` is not set, the default directory will be used `$HOME/.dbt/`. #### Use an existing profile If you have an existing dbt `profiles.yml` file, specify the `profiles_dir` where the file is located: ```python from prefect import flow from prefect_dbt.cli.commands import DbtCoreOperation @flow def trigger_dbt_flow() -> str: result = DbtCoreOperation( commands=["pwd", "dbt debug", "dbt run"], project_dir="PROJECT-DIRECTORY-PLACEHOLDER", profiles_dir="PROFILES-DIRECTORY-PLACEHOLDER" ).run() return result if __name__ == "__main__": trigger_dbt_flow() ``` If you are already using Prefect blocks such as the [Snowflake Connector block](integrations/prefect-snowflake), you can use those blocks to [create a new `profiles.yml` with a `DbtCliProfile` block](#create-a-new-profile-with-blocks). ##### Use environment variables with Prefect secret blocks If you use environment variables in `profiles.yml`, set a Prefect Secret block as an environment variable: ```python import os from prefect.blocks.system import Secret secret_block = Secret.load("DBT_PASSWORD_PLACEHOLDER") # Access the stored secret DBT_PASSWORD = secret_block.get() os.environ["DBT_PASSWORD"] = DBT_PASSWORD ``` This example `profiles.yml` file could then access that variable. 
```yaml profile: target: prod outputs: prod: type: postgres host: 127.0.0.1 # IMPORTANT: Make sure to quote the entire Jinja string here user: dbt_user password: "{{ env_var('DBT_PASSWORD') }}" ``` #### Create a new `profiles.yml` file with blocks If you don't have a `profiles.yml` file, you can use a DbtCliProfile block to create `profiles.yml`. Then, specify `profiles_dir` where `profiles.yml` will be written. Here's example code with placeholders: ```python from prefect import flow from prefect_dbt.cli import DbtCliProfile, DbtCoreOperation @flow def trigger_dbt_flow(): dbt_cli_profile = DbtCliProfile.load("DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER") with DbtCoreOperation( commands=["dbt debug", "dbt run"], project_dir="PROJECT-DIRECTORY-PLACEHOLDER", profiles_dir="PROFILES-DIRECTORY-PLACEHOLDER", dbt_cli_profile=dbt_cli_profile, ) as dbt_operation: dbt_process = dbt_operation.trigger() # do other things before waiting for completion dbt_process.wait_for_completion() result = dbt_process.fetch_result() return result if __name__ == "__main__": trigger_dbt_flow() ``` **Supplying the `dbt_cli_profile` argument will overwrite existing `profiles.yml` files** If you already have a `profiles.yml` file in the specified `profiles_dir`, the file will be overwritten. If you do not specify a profiles directory, `profiles.yml` at `~/.dbt/` would be overwritten. Visit the SDK reference in the side navigation to see other built-in `TargetConfigs` blocks. If the desired service profile is not available, you can build one from the generic `TargetConfigs` class. #### Programmatic Invocation `prefect-dbt` has some pre-built tasks that use dbt's [programmatic invocation](https://docs.getdbt.com/reference/programmatic-invocations). For example: ```python from prefect import flow from prefect_dbt.cli.commands import trigger_dbt_cli_command, dbt_build_task @flow def dbt_build_flow(): trigger_dbt_cli_command( command="dbt deps", project_dir="/Users/test/my_dbt_project_dir", ) dbt_build_task( project_dir = "/Users/test/my_dbt_project_dir", create_summary_artifact = True, summary_artifact_key = "dbt-build-task-summary", extra_command_args=["--select", "foo_model"] ) if __name__ == "__main__": dbt_build_flow() ``` See the [SDK docs](https://reference.prefect.io/prefect_dbt/) for other pre-built tasks. ##### Create a summary artifact These pre-built tasks can also create artifacts. These artifacts have extra information about dbt Core runs, such as messages and compiled code for nodes that fail or have errors. #### BigQuery CLI profile block example To create dbt Core target config and profile blocks for BigQuery: 1. Save and load a `GcpCredentials` block. 2. Determine the schema / dataset you want to use in BigQuery. 3. Create a short script, replacing the placeholders. ```python from prefect_gcp.credentials import GcpCredentials from prefect_dbt.cli import BigQueryTargetConfigs, DbtCliProfile credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") target_configs = BigQueryTargetConfigs( schema="SCHEMA-NAME-PLACEHOLDER", # also known as dataset credentials=credentials, ) target_configs.save("TARGET-CONFIGS-BLOCK-NAME-PLACEHOLDER") dbt_cli_profile = DbtCliProfile( name="PROFILE-NAME-PLACEHOLDER", target="TARGET-NAME-PLACEHOLDER", target_configs=target_configs, ) dbt_cli_profile.save("DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER") ``` To create a dbt Core operation block: 1. Determine the dbt commands you want to run. 2.
Create a short script, replacing the placeholders. ```python from prefect_dbt.cli import DbtCliProfile, DbtCoreOperation dbt_cli_profile = DbtCliProfile.load("DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER") dbt_core_operation = DbtCoreOperation( commands=["DBT-CLI-COMMANDS-PLACEHOLDER"], dbt_cli_profile=dbt_cli_profile, overwrite_profiles=True, ) dbt_core_operation.save("DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER") ``` Load the saved block that holds your credentials: ```python from prefect_dbt.cli import DbtCoreOperation DbtCoreOperation.load("DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER") ``` ## Resources For assistance using dbt, consult the [dbt documentation](https://docs.getdbt.com/docs/building-a-dbt-project/documentation). Refer to the `prefect-dbt` [SDK documentation](https://reference.prefect.io/prefect_dbt/) to explore all the capabilities of the `prefect-dbt` library. ### Additional installation options Additional installation options for dbt Core with BigQuery, Snowflake, and Postgres are shown below. #### Additional capabilities for dbt Core and Snowflake profiles First install the main library compatible with your Prefect version: ```bash pip install "prefect[dbt]" ``` Then install the additional capabilities you need. ```bash pip install "prefect-dbt[snowflake]" ``` #### Additional capabilities for dbt Core and BigQuery profiles ```bash pip install "prefect-dbt[bigquery]" ``` #### Additional capabilities for dbt Core and Postgres profiles ```bash pip install "prefect-dbt[postgres]" ``` Or, install all of the extras. ```bash pip install -U "prefect-dbt[all_extras]" ``` # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-dbt/sdk # prefect-docker Source: https://docs-3.prefect.io/integrations/prefect-docker/index The `prefect-docker` library is required to create deployments that will submit runs to most Prefect work pool infrastructure types. ## Getting started ### Prerequisites * [Docker installed](https://www.docker.com/) and running. ### Install `prefect-docker` The following command will install a version of `prefect-docker` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[docker]" ``` Upgrade to the latest versions of `prefect` and `prefect-docker`: ```bash pip install -U "prefect[docker]" ``` ### Examples See the Prefect [Workers docs](/v3/how-to-guides/deployment_infra/docker) to learn how to create and run deployments that use Docker. ## Resources For assistance using Docker, consult the [Docker documentation](https://docs.docker.com/). Refer to the `prefect-docker` [SDK documentation](https://reference.prefect.io/prefect_docker/) to explore all the capabilities of the `prefect-docker` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-docker/sdk # prefect-email Source: https://docs-3.prefect.io/integrations/prefect-email/index The `prefect-email` library helps you send emails from your Prefect flows. ## Getting started ### Prerequisites * Many email services, such as Gmail, require an [App Password](https://support.google.com/accounts/answer/185833) to successfully send emails. If you encounter an error similar to `smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted...`, it's likely you are not using an App Password. ### Install `prefect-email` The following command will install a version of prefect-email compatible with your installed version of Prefect.
If you don't already have Prefect installed, it will install the newest version of Prefect as well. ```bash pip install "prefect[email]" ``` Upgrade to the latest versions of Prefect and prefect-email: ```bash pip install -U "prefect[email]" ``` ### Register newly installed block types Register the block types in the prefect-email module to make them available for use. ```bash prefect block register -m prefect_email ``` ## Save credentials to an EmailServerCredentials block Save your email credentials to a block. Replace the placeholders with your email address and password. ```python from prefect_email import EmailServerCredentials credentials = EmailServerCredentials( username="EMAIL-ADDRESS-PLACEHOLDER", password="PASSWORD-PLACEHOLDER", # must be an application password ) credentials.save("BLOCK-NAME-PLACEHOLDER") ``` In the examples below you load a credentials block to authenticate with the email server. ## Send emails The code below shows how to send an email using the pre-built `email_send_message` [task](https://docs.prefect.io/latest/develop/write-tasks/). ```python from prefect import flow from prefect_email import EmailServerCredentials, email_send_message @flow def example_email_send_message_flow(email_addresses): email_server_credentials = EmailServerCredentials.load("BLOCK-NAME-PLACEHOLDER") for email_address in email_addresses: subject = email_send_message.with_options(name=f"email {email_address}").submit( email_server_credentials=email_server_credentials, subject="Example Flow Notification using Gmail", msg="This proves email_send_message works!", email_to=email_address, ) if __name__ == "__main__": example_email_send_message_flow(["EMAIL-ADDRESS-PLACEHOLDER"]) ``` ## Capture exceptions and send an email This example demonstrates how to send an email notification with the details of the exception when a flow run fails. `prefect-email` can be wrapped in an `except` statement to do just that! ```python from prefect import flow from prefect.context import get_run_context from prefect_email import EmailServerCredentials, email_send_message def notify_exc_by_email(exc): context = get_run_context() flow_run_name = context.flow_run.name email_server_credentials = EmailServerCredentials.load("email-server-credentials") email_send_message( email_server_credentials=email_server_credentials, subject=f"Flow run {flow_run_name!r} failed", msg=f"Flow run {flow_run_name!r} failed due to {exc}.", email_to=email_server_credentials.username, ) @flow def example_flow(): try: 1 / 0 except Exception as exc: notify_exc_by_email(exc) raise if __name__ == "__main__": example_flow() ``` ## Resources Refer to the `prefect-email` [SDK documentation](https://reference.prefect.io/prefect_email/) to explore all the capabilities of the `prefect-email` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-email/sdk # Google Cloud Run Worker Guide Source: https://docs-3.prefect.io/integrations/prefect-gcp/gcp-worker-guide ## Why use Google Cloud Run for flow run execution? Google Cloud Run is a fully managed compute platform that automatically scales your containerized applications. 1. Serverless architecture: Cloud Run follows a serverless architecture, which means you don't need to manage any underlying infrastructure. Google Cloud Run automatically handles the scaling and availability of your flow run infrastructure, allowing you to focus on developing and deploying your code. 2. Scalability: Cloud Run can automatically scale your pipeline to handle varying workloads and traffic. 
It can quickly respond to increased demand and scale back down during low activity periods, ensuring efficient resource utilization. 3. Integration with Google Cloud services: Google Cloud Run easily integrates with other Google Cloud services, such as Google Cloud Storage, Google Cloud Pub/Sub, and Google Cloud Build. This interoperability enables you to build end-to-end data pipelines that use a variety of services. 4. Portability: Since Cloud Run uses container images, you can develop your pipelines locally using Docker and then deploy them on Google Cloud Run without significant modifications. This portability allows you to run the same pipeline in different environments. ## Google Cloud Run guide After completing this guide, you will have: 1. Created a Google Cloud Service Account 2. Created a Prefect Work Pool 3. Deployed a Prefect Worker as a Cloud Run Service 4. Deployed a Flow 5. Executed the Flow as a Google Cloud Run Job ### Prerequisites Before starting this guide, make sure you have: * A [Google Cloud Platform (GCP) account](https://cloud.google.com/gcp). * A project on your GCP account where you have the necessary permissions to create Cloud Run Services and Service Accounts. * The `gcloud` CLI installed on your local machine. You can follow Google Cloud's [installation guide](https://cloud.google.com/sdk/docs/install). If you're using Apple (or a Linux system) you can also use [Homebrew](https://formulae.brew.sh/cask/google-cloud-sdk) for installation. * [Docker](https://www.docker.com/get-started/) installed on your local machine. * A Prefect server instance. You can sign up for a forever free [Prefect Cloud Account](https://app.prefect.cloud/) or, alternatively, self-host a Prefect server. ### Step 1. Create a Google Cloud service account First, open a terminal or command prompt on your local machine where `gcloud` is installed. If you haven't already authenticated with `gcloud`, run the following command and follow the instructions to log in to your GCP account. ```bash gcloud auth login ``` Next, you'll set your project where you'd like to create the service account. Use the following command and replace `` with your GCP project's ID. ```bash gcloud config set project ``` For example, if your project's ID is `prefect-project` the command will look like this: ```bash gcloud config set project prefect-project ``` Now you're ready to make the service account. To do so, you'll need to run this command: ```bash gcloud iam service-accounts create --display-name="" ``` Here's an example of the command above which you can use which already has the service account name and display name provided. An additional option to describe the service account has also been added: ```bash gcloud iam service-accounts create prefect-service-account \ --description="service account to use for the prefect worker" \ --display-name="prefect-service-account" ``` The last step of this process is to make sure the service account has the proper permissions to execute flow runs as Cloud Run jobs. Run the following commands to grant the necessary permissions: ```bash gcloud projects add-iam-policy-binding \ --member="serviceAccount:@.iam.gserviceaccount.com" \ --role="roles/iam.serviceAccountUser" ``` ```bash gcloud projects add-iam-policy-binding \ --member="serviceAccount:@.iam.gserviceaccount.com" \ --role="roles/run.admin" ``` ### Step 2. Create a Cloud Run work pool Let's walk through the process of creating a Cloud Run work pool. 
#### Fill out the work pool base job template You can create a new work pool using the Prefect UI or CLI. The following command creates a work pool of type `cloud-run` via the CLI (you'll want to replace the `` with the name of your work pool): ```bash prefect work-pool create --type cloud-run ``` Once the work pool is created, find the work pool in the UI and edit it. There are many ways to customize the base job template for the work pool. Modifying the template influences the infrastructure configuration that the worker provisions for flow runs submitted to the work pool. For this guide we are going to modify just a few of the available fields. Specify the region for the cloud run job. Save the name of the service account created in the first step of this guide. Your work pool is now ready to receive scheduled flow runs! ### Step 3. Deploy a Cloud Run worker Now you can launch a Cloud Run service to host the Cloud Run worker. This worker will poll the work pool that you created in the previous step. Navigate back to your terminal and run the following commands to set your Prefect API key and URL as environment variables. Be sure to replace `` and `` with your Prefect account and workspace IDs (both will be available in the URL of the UI when previewing the workspace dashboard). You'll want to replace `` with an active API key as well. ```bash export PREFECT_API_URL='https://api.prefect.cloud/api/accounts//workspaces/' export PREFECT_API_KEY='' ``` Once those variables are set, run the following shell command to deploy your worker as a service. Don't forget to replace `` with the name of the service account you created in the first step of this guide, and replace `` with the name of the work pool you created in the second step. ```bash gcloud run deploy prefect-worker --image=prefecthq/prefect:3-latest \ --set-env-vars PREFECT_API_URL=$PREFECT_API_URL,PREFECT_API_KEY=$PREFECT_API_KEY \ --service-account \ --no-cpu-throttling \ --min-instances 1 \ --startup-probe httpGet.port=8080,httpGet.path=/health,initialDelaySeconds=100,periodSeconds=20,timeoutSeconds=20 \ --args "prefect","worker","start","--install-policy","always","--with-healthcheck","-p","","-t","cloud-run" ``` After running this command, you'll be prompted to specify a region. Choose the same region that you selected when creating the Cloud Run work pool in the second step of this guide. The next prompt will ask if you'd like to allow unauthenticated invocations to your worker. For this guide, you can select "No". After a few seconds, you'll be able to see your new `prefect-worker` service by navigating to the Cloud Run page of your Google Cloud console. Additionally, you should be able to see a record of this worker in the Prefect UI on the work pool's page by navigating to the `Worker` tab. Let's not leave our worker hanging, it's time to give it a job. ### Step 4. Deploy a flow Let's prepare a flow to run as a Cloud Run job. In this section of the guide, we'll "bake" our code into a Docker image, and push that image to Google Artifact Registry. ### Create a registry Let's create a docker repository in your Google Artifact Registry to host your custom image. If you already have a registry, and are authenticated to it, skip ahead to the *Write a flow* section. The following command creates a repository using the gcloud CLI. You'll want to replace the `` with your own value.
: ```bash gcloud artifacts repositories create \ --repository-format=docker --location=us ``` Now you can authenticate to artifact registry: ```bash gcloud auth configure-docker us-docker.pkg.dev ``` ### Write a flow First, create a new directory. This will serve as the root of your project's repository. Within the directory, create a sub-directory called `flows`. Navigate to the `flows` subdirectory and create a new file for your flow. Feel free to write your own flow, but here's a ready-made one for your convenience: ```python import httpx from prefect import flow, task from prefect.artifacts import create_markdown_artifact @task def mark_it_down(temp): markdown_report = f"""# Weather Report ## Recent weather | Time | Temperature | | :-------- | ----------: | | Now | {temp} | | In 1 hour | {temp + 2} | """ create_markdown_artifact( key="weather-report", markdown=markdown_report, description="Very scientific weather report", ) @flow def fetch_weather(lat: float, lon: float): base_url = "https://api.open-meteo.com/v1/forecast/" weather = httpx.get( base_url, params=dict(latitude=lat, longitude=lon, hourly="temperature_2m"), ) most_recent_temp = float(weather.json()["hourly"]["temperature_2m"][0]) mark_it_down(most_recent_temp) if __name__ == "__main__": fetch_weather(38.9, -77.0) ``` In the remainder of this guide, this script will be referred to as `weather_flow.py`, but you can name yours whatever you'd like. #### Creating a `prefect.yaml` file Now we're ready to make a `prefect.yaml` file, which will be responsible for managing the deployments of this repository. **Navigate back to the root of your directory**, and run the following command to create a `prefect.yaml` file using Prefect's docker deployment recipe. ```bash prefect init --recipe docker ``` You'll receive a prompt to put in values for the image name and tag. Since we will be pushing the image to Google Artifact Registry, the name of your image should be prefixed with the path to the docker repository you created within the registry. For example: `us-docker.pkg.dev///`. You'll want to replace `` with the ID of your project in GCP. This should match the ID of the project you used in first step of this guide. Here is an example of what this could look like: ```bash image_name: us-docker.pkg.dev/prefect-project/my-artifact-registry/gcp-weather-image tag: latest ``` At this point, there will be a new `prefect.yaml` file available at the root of your project. The contents will look similar to the example below, however, we've added in a combination of YAML templating options and Prefect deployment actions to build out a simple CI/CD process. Feel free to copy the contents and paste them in your prefect.yaml: ```yaml # Welcome to your prefect.yaml file! You can you this file for storing and managing # configuration for deploying your flows. We recommend committing this file to source # control along with your flow code. 
# Generic metadata about this project name: prefect-version: 3.0.0 # build section allows you to manage and build docker image build: - prefect_docker.deployments.steps.build_docker_image: id: build_image requires: prefect-docker>=0.3.1 image_name: /gcp-weather-image tag: latest dockerfile: auto platform: linux/amd64 # push section allows you to manage if and how this project is uploaded to remote locations push: - prefect_docker.deployments.steps.push_docker_image: requires: prefect-docker>=0.3.1 image_name: '{{ build_image.image_name }}' tag: '{{ build_image.tag }}' # pull section allows you to provide instructions for cloning this project in remote locations pull: - prefect.deployments.steps.set_working_directory: directory: /opt/prefect/ # the deployments section allows you to provide configuration for deploying flows deployments: - name: gcp-weather-deploy version: null tags: [] description: null schedule: {} flow_name: null entrypoint: flows/weather_flow.py:fetch_weather parameters: lat: 14.5994 lon: 28.6731 work_pool: name: my-cloud-run-pool work_queue_name: default job_variables: image: '{{ build_image.image }}' ``` After copying the example above, don't forget to replace `` with the name of the directory where your flow folder and `prefect.yaml` live. You'll also need to replace `` with the path to the Docker repository in your Google Artifact Registry. To get a better understanding of the different components of the `prefect.yaml` file above and what they do, feel free to read this next section. Otherwise, you can skip ahead to *Flow Deployment*. In the `build` section of the `prefect.yaml` the following step is executed at deployment build time: 1. `prefect_docker.deployments.steps.build_docker_image` : builds a Docker image automatically which uses the name and tag chosen previously. If you are using an ARM-based chip (such as an M1 or M2 Mac), you'll want to ensure that you add `platform: linux/amd64` to your `build_docker_image` step to ensure that your docker image uses an AMD architecture. For example: ```yaml - prefect_docker.deployments.steps.build_docker_image: id: build_image requires: prefect-docker>=0.3.1 image_name: us-docker.pkg.dev/prefect-project/my-docker-repository/gcp-weather-image tag: latest dockerfile: auto platform: linux/amd64 ``` The `push` section sends the Docker image to the Docker repository in your Google Artifact Registry, so that it can be easily accessed by the worker for flow run execution. The `pull` section sets the working directory for the process prior to importing your flow. In the `deployments` section of the `prefect.yaml` file above, you'll see that there is a deployment declaration named `gcp-weather-deploy`. Within the declaration, the entrypoint for the flow is specified along with some default parameters which will be passed to the flow at runtime. Last but not least, the name of the work pool that we created in step 2 of this guide is specified. #### Flow deployment Once you're happy with the specifications in the `prefect.yaml` file, run the following command in the terminal to deploy your flow: ```bash prefect deploy --name gcp-weather-deploy ``` Once the flow is deployed to Prefect Cloud or your local Prefect Server, it's time to queue up a flow run! ### Step 5. Flow execution Find your deployment in the UI, and hit the *Quick Run* button. You have now successfully submitted a flow run to your Cloud Run worker! If you used the flow script provided in this guide, check the *Artifacts* tab for the flow run once it completes. 
You'll have a nice little weather report waiting for you there. Hope your day is a sunny one! ### Recap and next steps Congratulations on completing this guide! Looking back on our journey, you have: 1. Created a Google Cloud service account 2. Created a Cloud Run work pool 3. Deployed a Cloud Run worker 4. Deployed a flow 5. Executed a flow For next steps, take a look at some of the other [work pools](/v3/how-to-guides/deployment_infra/serverless) Prefect has to offer. The world is your oyster πŸ¦ͺ✨. # prefect-gcp Source: https://docs-3.prefect.io/integrations/prefect-gcp/index `prefect-gcp` helps you leverage the capabilities of Google Cloud Platform (GCP) in your workflows. For example, you can run flows on Vertex AI or Cloud Run, read and write data to BigQuery and Cloud Storage, and retrieve secrets with Secret Manager. ## Getting started ### Prerequisites * A [GCP account](https://cloud.google.com/) and the necessary permissions to access desired services. ### Install `prefect-gcp` Install `prefect-gcp` as an extra of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip pip install -U "prefect[gcp]" ``` ```bash uv uv pip install -U "prefect[gcp]" ``` If using BigQuery, Cloud Storage, Secret Manager, or Vertex AI, see [additional installation options](#install-extras). #### Install extras To install `prefect-gcp` with all additional capabilities, run the install command above and then run the following command: ```bash pip pip install -U "prefect-gcp[all_extras]" ``` ```bash uv uv pip install -U "prefect-gcp[all_extras]" ``` Or, install extras individually: ```bash pip # Use Cloud Storage pip install -U "prefect-gcp[cloud_storage]" # Use BigQuery pip install -U "prefect-gcp[bigquery]" # Use Secret Manager pip install -U "prefect-gcp[secret_manager]" # Use Vertex AI pip install -U "prefect-gcp[aiplatform]" ``` ```bash uv # Use Cloud Storage uv pip install -U "prefect-gcp[cloud_storage]" # Use BigQuery uv pip install -U "prefect-gcp[bigquery]" # Use Secret Manager uv pip install -U "prefect-gcp[secret_manager]" # Use Vertex AI uv pip install -U "prefect-gcp[aiplatform]" ``` ### Register newly installed block types Register the block types in the module to make them available for use. ```bash prefect block register -m prefect_gcp ``` ## Blocks setup ### Credentials Authenticate with a service account to use `prefect-gcp` services. 1. Refer to the [GCP service account documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) to create and download a service account key file. 2. Copy the JSON contents. 3. Use the Python code below, replace the placeholders with your information. 
```python from prefect_gcp import GcpCredentials # replace this PLACEHOLDER dict with your own service account info service_account_info = { "type": "service_account", "project_id": "PROJECT_ID", "private_key_id": "KEY_ID", "private_key": "-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n", "client_email": "SERVICE_ACCOUNT_EMAIL", "client_id": "CLIENT_ID", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL" } GcpCredentials( service_account_info=service_account_info ).save("CREDENTIALS-BLOCK-NAME") ``` This credential block can be used to create other `prefect_gcp` blocks. **`service_account_info` vs `service_account_file`** The advantage of using `service_account_info`, instead of `service_account_file`, is that it is accessible across containers. If `service_account_file` is used, the provided path *must be available* in the container executing the flow. ### BigQuery Read data from and write to Google BigQuery within your Prefect flows. Be sure to [install](#install-extras) `prefect-gcp` with the BigQuery extra. ```python from prefect_gcp.bigquery import GcpCredentials, BigQueryWarehouse gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME") bigquery_block = BigQueryWarehouse( gcp_credentials = gcp_credentials, fetch_size = 1 # Optional: specify a default number of rows to fetch when calling fetch_many ) bigquery_block.save("BIGQUERY-BLOCK-NAME") ``` ### Secret Manager Manage secrets in Google Cloud Platform's Secret Manager. ```python from prefect_gcp import GcpCredentials, GcpSecret gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME") gcp_secret = GcpSecret( secret_name = "your-secret-name", secret_version = "latest", gcp_credentials = gcp_credentials ) gcp_secret.save("SECRET-BLOCK-NAME") ``` ### Cloud Storage Create a block to interact with a GCS bucket. ```python from prefect_gcp import GcpCredentials, GcsBucket gcs_bucket = GcsBucket( bucket="BUCKET-NAME", gcp_credentials=GcpCredentials.load("CREDENTIALS-BLOCK-NAME") ) gcs_bucket.save("GCS-BLOCK-NAME") ``` ## Run flows on Google Cloud Run or Vertex AI Run flows on [Google Cloud Run](https://cloud.google.com/run) or [Vertex AI](https://cloud.google.com/vertex-ai) to dynamically scale your infrastructure. Prefect Cloud offers [Google Cloud Run push work pools](/v3/how-to-guides/deployment_infra/serverless). Push work pools submit runs directly to Google Cloud Run, instead of requiring a worker to actively poll for flow runs to execute. See the [Google Cloud Run Worker Guide](/integrations/prefect-gcp/gcp-worker-guide) for a walkthrough of using Google Cloud Run in a hybrid work pool.
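As a minimal, hedged sketch of deploying a flow to such a work pool from Python (the flow, work pool name, and Artifact Registry image path below are illustrative placeholders, not values defined elsewhere in these docs):

```python
from prefect import flow


@flow(log_prints=True)
def hello_cloud_run():
    print("Hello from Google Cloud Run!")


if __name__ == "__main__":
    # Deploy to a hypothetical Cloud Run work pool. By default, passing an
    # image builds and pushes it, so replace the work pool name and image
    # reference with your own values before running.
    hello_cloud_run.deploy(
        name="cloud-run-example",
        work_pool_name="my-cloud-run-pool",
        image="us-docker.pkg.dev/PROJECT-ID/REPOSITORY-NAME/hello-cloud-run:latest",
    )
```

A `prefect.yaml` file with `prefect deploy` achieves the same result; the worker guide linked above walks through that approach.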
## Examples ### Interact with BigQuery This code creates a new dataset in BigQuery, defines a table, inserts rows, and fetches data from the table: ```python from prefect import flow from prefect_gcp.bigquery import GcpCredentials, BigQueryWarehouse @flow def bigquery_flow(): all_rows = [] gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME") client = gcp_credentials.get_bigquery_client() client.create_dataset("test_example", exists_ok=True) with BigQueryWarehouse(gcp_credentials=gcp_credentials) as warehouse: warehouse.execute( "CREATE TABLE IF NOT EXISTS test_example.customers (name STRING, address STRING);" ) warehouse.execute_many( "INSERT INTO test_example.customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Marvin", "address": "Highway 42"}, {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, ], ) while True: # Repeated fetch* calls using the same operation will # skip re-executing and instead return the next set of results new_rows = warehouse.fetch_many("SELECT * FROM test_example.customers", size=2) if len(new_rows) == 0: break all_rows.extend(new_rows) return all_rows if __name__ == "__main__": bigquery_flow() ``` ### Use Prefect with Google Cloud Storage Interact with Google Cloud Storage. The code below uses `prefect_gcp` to upload a file to a Google Cloud Storage bucket and download the same file under a different filename. ```python from pathlib import Path from prefect import flow from prefect_gcp import GcpCredentials, GcsBucket @flow def cloud_storage_flow(): # create a dummy file to upload file_path = Path("test-example.txt") file_path.write_text("Hello, Prefect!") gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME") gcs_bucket = GcsBucket( bucket="BUCKET-NAME", gcp_credentials=gcp_credentials ) gcs_bucket_path = gcs_bucket.upload_from_path(file_path) downloaded_file_path = gcs_bucket.download_object_to_path( gcs_bucket_path, "downloaded-test-example.txt" ) return downloaded_file_path.read_text() if __name__ == "__main__": cloud_storage_flow() ``` **Upload and download directories** `GcsBucket` supports uploading and downloading entire directories. ### Save secrets with Google Secret Manager Read and write secrets with Google Secret Manager. Be sure to [install](#install-extras) `prefect-gcp` with the Secret Manager extra. The code below writes a secret to the Secret Manager, reads the secret data, and deletes the secret. ```python from prefect import flow from prefect_gcp import GcpCredentials, GcpSecret @flow def secret_manager_flow(): gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME") gcp_secret = GcpSecret(secret_name="test-example", gcp_credentials=gcp_credentials) gcp_secret.write_secret(secret_data=b"Hello, Prefect!") secret_data = gcp_secret.read_secret() gcp_secret.delete_secret() return secret_data if __name__ == "__main__": secret_manager_flow() ``` ## Resources For assistance using GCP, consult the [Google Cloud documentation](https://cloud.google.com/docs). You can also authenticate to GCP without storing credentials in a block. See [Access third-party secrets](/v3/develop/secrets) for an example that uses AWS Secrets Manager and Snowflake. Refer to the `prefect-gcp` [SDK documentation](https://reference.prefect.io/prefect_gcp/) to explore all of the capabilities of the `prefect-gcp` library.
# SDK docs Source: https://docs-3.prefect.io/integrations/prefect-gcp/sdk # prefect-github Source: https://docs-3.prefect.io/integrations/prefect-github/index Prefect-github makes it easy to interact with GitHub repositories and use GitHub credentials. ## Getting started ### Prerequisites * A [GitHub account](https://github.com/). ### Install `prefect-github` The following command will install a version of `prefect-github` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[github]" ``` Upgrade to the latest versions of `prefect` and `prefect-github`: ```bash pip install -U "prefect[github]" ``` ### Register newly installed block types Register the block types in the `prefect-github` module to make them available for use. ```bash prefect block register -m prefect_github ``` ## Examples In the examples below, you create blocks with Python code. Alternatively, blocks can be created through the Prefect UI. To create a deployment and run a deployment where the flow code is stored in a private GitHub repository, you can use the `GitHubCredentials` block. A deployment can use flow code stored in a GitHub repository without using this library in either of the following cases: * The repository is public * The deployment uses a [Secret block](https://docs.prefect.io/latest/develop/blocks/) to store the token Code to create a GitHub Credentials block: ```python from prefect_github import GitHubCredentials github_credentials_block = GitHubCredentials(token="my_token") github_credentials_block.save(name="my-github-credentials-block") ``` ### Access flow code stored in a private GitHub repository in a deployment Use the credentials block you created above to pass the GitHub access token during deployment creation. The code below assumes there's flow code stored in a private GitHub repository. ```python from prefect import flow from prefect.runner.storage import GitRepository from prefect_github import GitHubCredentials if __name__ == "__main__": source = GitRepository( url="https://github.com/org/private-repo.git", credentials=GitHubCredentials.load("my-github-credentials-block") ) flow.from_source(source=source, entrypoint="my_file.py:my_flow").deploy( name="private-github-deploy", work_pool_name="my_pool", ) ``` Alternatively, if you use a `prefect.yaml` file to create the deployment, reference the GitHub Credentials block in the `pull` step: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/org/repo.git credentials: "{{ prefect.blocks.github-credentials.my-github-credentials-block }}" ``` ### Interact with a GitHub repository You can use prefect-github to create and retrieve issues and PRs from a repository. Here's an example of adding a star to a GitHub repository: ```python from prefect import flow from prefect_github import GitHubCredentials from prefect_github.repository import query_repository from prefect_github.mutations import add_star_starrable @flow() def github_add_star_flow(): github_credentials = GitHubCredentials.load("github-token") repository_id = query_repository( "PrefectHQ", "Prefect", github_credentials=github_credentials, return_fields="id" )["id"] starrable = add_star_starrable( repository_id, github_credentials ) return starrable if __name__ == "__main__": github_add_star_flow() ``` ## Resources For assistance using GitHub, consult the [GitHub documentation](https://docs.github.com). 
Refer to the `prefect-github` [SDK documentation](https://reference.prefect.io/prefect_github/) to explore all the capabilities of the `prefect-github` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-github/sdk # prefect-gitlab Source: https://docs-3.prefect.io/integrations/prefect-gitlab/index The prefect-gitlab library makes it easy to interact with GitLab repositories and credentials. ## Getting started ### Prerequisites * A [GitLab account](https://gitlab.com/). ### Install `prefect-gitlab` The following command will install a version of `prefect-gitlab` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[gitlab]" ``` Upgrade to the latest versions of `prefect` and `prefect-gitlab`: ```bash pip install -U "prefect[gitlab]" ``` ### Register newly installed block types Register the block types in the `prefect-gitlab` module to make them available for use. ```bash prefect block register -m prefect_gitlab ``` ## Examples In the examples below, you create blocks with Python code. Alternatively, blocks can be created through the Prefect UI. ## Store deployment flow code in a private GitLab repository To create a deployment where the flow code is stored in a private GitLab repository, you can use the `GitLabCredentials` block. A deployment can use flow code stored in a GitLab repository without using this library in either of the following cases: * The repository is public * The deployment uses a [Secret block](https://docs.prefect.io/latest/develop/blocks/) to store the token Code to create a GitLab Credentials block: ```python from prefect_gitlab import GitLabCredentials gitlab_credentials_block = GitLabCredentials(token="my_token") gitlab_credentials_block.save(name="my-gitlab-credentials-block") ``` ### Access flow code stored in a private GitLab repository in a deployment Use the credentials block you created above to pass the GitLab access token during deployment creation. The code below assumes there's flow code in your private GitLab repository. ```python from prefect import flow from prefect.runner.storage import GitRepository from prefect_gitlab import GitLabCredentials if __name__ == "__main__": source = GitRepository( url="https://gitlab.com/org/private-repo.git", credentials=GitLabCredentials.load("my-gitlab-credentials-block") ) flow.from_source( source=source, entrypoint="my_file.py:my_flow", ).deploy( name="private-gitlab-deploy", work_pool_name="my_pool", ) ``` Alternatively, if you use a `prefect.yaml` file to create the deployment, reference the GitLab Credentials block in the `pull` step: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://gitlab.com/org/repo.git credentials: "{{ prefect.blocks.gitlab-credentials.my-gitlab-credentials-block }}" ``` ### Interact with a GitLab repository The code below shows how to reference a particular branch or tag of a GitLab repository. ```python from prefect_gitlab import GitLabRepository def save_private_gitlab_block(): private_gitlab_block = GitLabRepository( repository="https://gitlab.com/testing/my-repository.git", access_token="YOUR_GITLAB_PERSONAL_ACCESS_TOKEN", reference="branch-or-tag-name", ) private_gitlab_block.save("my-private-gitlab-block") if __name__ == "__main__": save_private_gitlab_block() ``` Exclude the `access_token` field if the repository is public and exclude the `reference` field to use the default branch.
Use the newly created block to interact with the GitLab repository. For example, download the repository contents with the `.get_directory()` method like this: ```python from prefect_gitlab.repositories import GitLabRepository def fetch_repo(): private_gitlab_block = GitLabRepository.load("my-gitlab-block") private_gitlab_block.get_directory() if __name__ == "__main__": fetch_repo() ``` ## Resources For assistance using GitLab, consult the [GitLab documentation](https://gitlab.com). Refer to the `prefect-gitlab` [SDK documentation](https://reference.prefect.io/prefect_gitlab/) to explore all the capabilities of the `prefect-gitlab` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-gitlab/sdk # prefect-kubernetes Source: https://docs-3.prefect.io/integrations/prefect-kubernetes/index `prefect-kubernetes` contains Prefect tasks, flows, and blocks enabling orchestration, observation and management of Kubernetes resources. This library is most commonly used for installation with a Kubernetes worker. See the [Prefect docs on deploying with Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes) to learn how to create and run deployments in Kubernetes. Prefect provides a Helm chart for deploying a worker, a self-hosted Prefect server instance, and other resources to a Kubernetes cluster. See the [Prefect Helm chart](https://github.com/PrefectHQ/prefect-helm) for more information. ## Getting started ### Prerequisites * [Kubernetes installed](https://kubernetes.io/). ### Install `prefect-kubernetes` The following command will install a version of `prefect-kubernetes` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[kubernetes]" ``` Upgrade to the latest versions of `prefect` and `prefect-kubernetes`: ```bash pip install -U "prefect[kubernetes]" ``` ### Register newly installed block types Register the block types in the `prefect-kubernetes` module to make them available for use. 
```bash prefect block register -m prefect_kubernetes ``` ## Examples ### Use `with_options` to customize options on an existing task or flow ```python from prefect_kubernetes.flows import run_namespaced_job customized_run_namespaced_job = run_namespaced_job.with_options( name="My flow running a Kubernetes Job", retries=2, retry_delay_seconds=10, ) # this is now a new flow object that can be called ``` ### Specify and run a Kubernetes Job from a YAML file ```python from prefect import flow, get_run_logger from prefect_kubernetes.credentials import KubernetesCredentials from prefect_kubernetes.flows import run_namespaced_job # this is a flow from prefect_kubernetes.jobs import KubernetesJob k8s_creds = KubernetesCredentials.load("k8s-creds") job = KubernetesJob.from_yaml_file( # or create in the UI with a dict manifest credentials=k8s_creds, manifest_path="path/to/job.yaml", ) job.save("my-k8s-job", overwrite=True) @flow def kubernetes_orchestrator(): # run the flow and send logs to the parent flow run's logger logger = get_run_logger() run_namespaced_job(job, print_func=logger.info) if __name__ == "__main__": kubernetes_orchestrator() ``` As with all Prefect flows and tasks, you can call the underlying function directly if you don't need Prefect features: ```python run_namespaced_job.fn(job, print_func=print) ``` ### Generate a resource-specific client from `KubernetesClusterConfig` ```python # with minikube / docker desktop & a valid ~/.kube/config this should ~just work~ from prefect_kubernetes.credentials import KubernetesCredentials, KubernetesClusterConfig k8s_config = KubernetesClusterConfig.from_file('~/.kube/config') k8s_credentials = KubernetesCredentials(cluster_config=k8s_config) with k8s_credentials.get_client("core") as v1_core_client: for namespace in v1_core_client.list_namespace().items: print(namespace.metadata.name) ``` ### List jobs in a namespace ```python from prefect import flow from prefect_kubernetes.credentials import KubernetesCredentials from prefect_kubernetes.jobs import list_namespaced_job @flow def kubernetes_orchestrator(): v1_job_list = list_namespaced_job( kubernetes_credentials=KubernetesCredentials.load("k8s-creds"), namespace="my-namespace", ) ``` For assistance using Kubernetes, consult the [Kubernetes documentation](https://kubernetes.io/). Refer to the `prefect-kubernetes` [SDK documentation](https://reference.prefect.io/prefect_kubernetes/) to explore all the capabilities of the `prefect-kubernetes` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-kubernetes/sdk # prefect-ray Source: https://docs-3.prefect.io/integrations/prefect-ray/index Accelerate your workflows by running tasks in parallel with Ray [Ray](https://docs.ray.io/en/latest/index.html) can run your tasks in parallel by distributing them over multiple machines. The `prefect-ray` integration makes it easy to accelerate your flow runs with Ray. ## Install `prefect-ray` The following command will install a version of `prefect-ray` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. 
```bash pip install "prefect[ray]" ``` Upgrade to the latest versions of `prefect` and `prefect-ray`: ```bash pip install -U "prefect[ray]" ``` **Ray limitations** There are a few limitations with Ray: * Ray has [experimental](https://docs.ray.io/en/latest/ray-overview/installation.html#install-nightlies) support for Python 3.13, but Prefect [does *not* currently support](https://github.com/PrefectHQ/prefect/issues/16910) Python 3.13. * Ray support for non-x86/64 architectures such as ARM/M1 processors with installation from `pip` alone and will be skipped during installation of Prefect. It is possible to manually install the blocking component with `conda`. See the [Ray documentation](https://docs.ray.io/en/latest/ray-overview/installation.html#m1-mac-apple-silicon-support) for instructions. * Ray support for Windows is currently in beta. See the [Ray installation documentation](https://docs.ray.io/en/latest/ray-overview/installation.html) for further compatibility information. ## Run tasks on Ray The `RayTaskRunner` is a [Prefect task runner](https://docs.prefect.io/develop/task-runners/) that submits tasks to [Ray](https://www.ray.io/) for parallel execution. By default, a temporary Ray instance is created for the duration of the flow run. For example, this flow counts to three in parallel: ```python import time from prefect import flow, task from prefect_ray import RayTaskRunner @task def shout(number): time.sleep(0.5) print(f"#{number}") @flow(task_runner=RayTaskRunner) def count_to(highest_number): shout.map(range(highest_number)).wait() if __name__ == "__main__": count_to(10) # outputs #3 #7 #2 #6 #4 #0 #1 #5 #8 #9 ``` If you already have a Ray instance running, you can provide the connection URL via an `address` argument. To configure your flow to use the `RayTaskRunner`: 1. Make sure the `prefect-ray` collection is installed as described earlier: `pip install prefect-ray`. 2. In your flow code, import `RayTaskRunner` from `prefect_ray.task_runners`. 3. Assign it as the task runner when the flow is defined using the `task_runner=RayTaskRunner` argument. For example, this flow uses the `RayTaskRunner` with a local, temporary Ray instance created by Prefect at flow run time. ```python from prefect import flow from prefect_ray.task_runners import RayTaskRunner @flow(task_runner=RayTaskRunner()) def my_flow(): ... ``` This flow uses the `RayTaskRunner` configured to access an existing Ray instance at `ray://:10001`. ```python from prefect import flow from prefect_ray.task_runners import RayTaskRunner @flow( task_runner=RayTaskRunner( address="ray://:10001", init_kwargs={"runtime_env": {"pip": ["prefect-ray"]}}, ) ) def my_flow(): ... ``` `RayTaskRunner` accepts the following optional parameters: | Parameter | Description | | ------------ | ----------------------------------------------------------------------------------------------------------------------------------- | | address | Address of a currently running Ray instance, starting with the [ray://](https://docs.ray.io/en/master/cluster/ray-client.html) URI. | | init\_kwargs | Additional kwargs to use when calling `ray.init`. | The Ray client uses the [ray://](https://docs.ray.io/en/master/cluster/ray-client.html) URI to indicate the address of a Ray instance. If you don't provide the `address` of a Ray instance, Prefect creates a temporary instance automatically. ## Run tasks on a remote Ray cluster When using the `RayTaskRunner` with a remote Ray cluster, you may run into issues that are not seen when using a local Ray instance. 
To resolve these issues, we recommend taking the following steps when working with a remote Ray cluster: 1. By default, Prefect will not persist any data to the filesystem of the remote ray worker. However, if you want to take advantage of Prefect's caching ability, you will need to configure a remote result storage to persist results across task runs. We recommend using the [Prefect UI to configure a storage block](https://docs.prefect.io/develop/blocks/) to use for remote results storage. Here's an example of a flow that uses caching and remote result storage: ```python from typing import List from prefect import flow, task from prefect.logging import get_run_logger from prefect.tasks import task_input_hash from prefect_aws import S3Bucket from prefect_ray.task_runners import RayTaskRunner # The result of this task will be cached in the configured result storage @task(cache_key_fn=task_input_hash) def say_hello(name: str) -> None: logger = get_run_logger() # This log statement will print only on the first run. Subsequent runs will be cached. logger.info(f"hello {name}!") return name @flow( task_runner=RayTaskRunner( address="ray://:10001", ), # Using an S3 block that has already been created via the Prefect UI result_storage="s3/my-result-storage", ) def greetings(names: List[str]) -> None: say_hello.map(names).wait() if __name__ == "__main__": greetings(["arthur", "trillian", "ford", "marvin"]) ``` 2. If you get an error stating that the module 'prefect' cannot be found, ensure `prefect` is installed on the remote cluster, with: ```bash pip install prefect ``` 3. If you get an error with a message similar to "File system created with scheme 's3' could not be created", ensure the required Python modules are installed on **both local and remote machines**. For example, if using S3 for storage: ```bash pip install s3fs ``` 4. If you are seeing timeout or other connection errors, double check the address provided to the `RayTaskRunner`. The address should look similar to: `address='ray://:10001'`: ```bash RayTaskRunner(address="ray://1.23.199.255:10001") ``` ## Specify remote options The `remote_options` context can be used to control the task's remote options. For example, we can set the number of CPUs and GPUs to use for the `process` task: ```python from prefect import flow, task from prefect_ray.task_runners import RayTaskRunner from prefect_ray.context import remote_options @task def process(x): return x + 1 @flow(task_runner=RayTaskRunner()) def my_flow(): # equivalent to setting @ray.remote(num_cpus=4, num_gpus=2) with remote_options(num_cpus=4, num_gpus=2): process.submit(42).wait() ``` ## Resources Refer to the `prefect-ray` [SDK documentation](https://reference.prefect.io/prefect_ray/) to explore all the capabilities of the `prefect-ray` library. For further assistance using Ray, consult the [Ray documentation](https://docs.ray.io/en/latest/index.html). # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-ray/sdk # prefect-redis Source: https://docs-3.prefect.io/integrations/prefect-redis/index Integrations to extend Prefect's functionality with Redis. ## Getting started ### Install `prefect-redis` The following command will install a version of `prefect-redis` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. 
```bash pip install "prefect[redis]" ``` Upgrade to the latest versions of `prefect` and `prefect-redis`: ```bash pip install -U "prefect[redis]" ``` ### Register newly installed block types Register the block types in the `prefect-redis` module to make them available for use. ```bash prefect block register -m prefect_redis ``` ## Resources Refer to the [SDK documentation](https://reference.prefect.io/prefect_redis/) to explore all the capabilities of `prefect-redis`. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-redis/sdk # prefect-shell Source: https://docs-3.prefect.io/integrations/prefect-shell/index Execute shell commands from within Prefect flows. ## Getting started ### Install `prefect-shell` The following command will install a version of `prefect-shell` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[shell]" ``` Upgrade to the latest versions of `prefect` and `prefect-shell`: ```bash pip install -U "prefect[shell]" ``` ### Register newly installed block types Register the block types in the `prefect-shell` module to make them available for use. ```bash prefect block register -m prefect_shell ``` ## Examples ### Integrate shell commands with Prefect flows With `prefect-shell`, you can use shell commands (and/or scripts) in Prefect flows to provide observability and resiliency. `prefect-shell` can be a useful tool if you're transitioning your orchestration from shell scripts to Prefect. Let's get the shell-abration started! The Python code below has shell commands embedded in a Prefect flow: ```python from prefect import flow from datetime import datetime from prefect_shell import ShellOperation @flow def download_data(): today = datetime.today().strftime("%Y%m%d") # for short running operations, you can use the `run` method # which automatically manages the context ShellOperation( commands=[ "mkdir -p data", "mkdir -p data/${today}" ], env={"today": today} ).run() # for long running operations, you can use a context manager with ShellOperation( commands=[ "curl -O https://masie_web.apps.nsidc.org/pub/DATASETS/NOAA/G02135/north/daily/data/N_seaice_extent_daily_v3.0.csv", ], working_dir=f"data/{today}", ) as download_csv_operation: # trigger runs the process in the background download_csv_process = download_csv_operation.trigger() # then do other things here in the meantime, like download another file ... # when you're ready, wait for the process to finish download_csv_process.wait_for_completion() # if you'd like to get the output lines, you can use the `fetch_result` method output_lines = download_csv_process.fetch_result() if __name__ == "__main__": download_data() ``` Running this script results in output like this: ```bash 14:48:16.550 | INFO | prefect.engine - Created flow run 'tentacled-chachalaca' for flow 'download-data' 14:48:17.977 | INFO | Flow run 'tentacled-chachalaca' - PID 19360 triggered with 2 commands running inside the '.' directory. 14:48:17.987 | INFO | Flow run 'tentacled-chachalaca' - PID 19360 completed with return code 0. 14:48:17.994 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 triggered with 1 commands running inside the PosixPath('data/20230201') directory. 
14:48:18.009 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: % Total % Received % Xferd Average Speed Time Time Time Current Dl 14:48:18.010 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: oad Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 14:48:18.840 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: 11 1630k 11 192k 0 0 229k 0 0:00:07 --:--:-- 0:00:07 231k 14:48:19.839 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: 83 1630k 83 1368k 0 0 745k 0 0:00:02 0:00:01 0:00:01 747k 14:48:19.993 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: 100 1630k 100 1630k 0 0 819k 0 0 14:48:19.994 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: :00:01 0:00:01 --:--:-- 821k 14:48:19.996 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 completed with return code 0. 14:48:19.998 | INFO | Flow run 'tentacled-chachalaca' - Successfully closed all open processes. 14:48:20.203 | INFO | Flow run 'tentacled-chachalaca' - Finished in state Completed() ``` ### Save shell commands in Prefect blocks You can save commands within a `ShellOperation` block, then reuse them across multiple flows. Save the block with desired commands: ```python from prefect_shell import ShellOperation ping_op = ShellOperation(commands=["ping -t 1 prefect.io"]) ping_op.save("block-name") # Load the saved block: ping_op = ShellOperation.load("block-name") ``` ## Resources Refer to the `prefect-shell` [SDK documentation](https://reference.prefect.io/prefect_shell/) to explore all the capabilities of the `prefect-shell` library. # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-shell/sdk # prefect-slack Source: https://docs-3.prefect.io/integrations/prefect-slack/index ## Welcome! `prefect-slack` is a collection of prebuilt Prefect tasks that can be used to quickly construct Prefect flows. ## Getting started ### Prerequisites A Slack account with permissions to create a Slack app and install it in your workspace. ### Installation The following command will install a version of `prefect-slack` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[slack]" ``` Upgrade to the latest versions of `prefect` and `prefect-slack`: ```bash pip install -U "prefect[slack]" ``` ### Slack setup To use tasks in the package, create a Slack app and install it in your Slack workspace. You can create a Slack app by navigating to the [apps page](https://api.slack.com/apps) for your Slack account and selecting 'Create New App'. For tasks that require a Bot user OAuth token, you can get a token for your app by navigating to your app's **OAuth & Permissions** page. For tasks that require a Webhook URL, you can generate a new Webhook URL by navigating to your app's **Incoming Webhooks** page. Slack's [Basic app setup](https://api.slack.com/authentication/basics) guide provides additional details on setting up a Slack app.
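Rather than hard-coding the bot token in your flows, you may prefer to store it in a block once and load it wherever it's needed. Here's a minimal sketch, assuming you have already copied a bot token from your app's **OAuth & Permissions** page (the block name `slack-bot-credentials` is illustrative):

```python
from prefect_slack import SlackCredentials

# Save the bot token as a reusable block (run once)
SlackCredentials(token="xoxb-your-bot-token-here").save("slack-bot-credentials")

# Later, in any flow or task, load the block instead of embedding the token
credentials = SlackCredentials.load("slack-bot-credentials")
```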
### Write and run a flow ```python sync import asyncio from prefect import flow from prefect.context import get_run_context from prefect_slack import SlackCredentials from prefect_slack.messages import send_chat_message @flow def example_send_message_flow(): context = get_run_context() # Run other tasks or flows here token = "xoxb-your-bot-token-here" asyncio.run( send_chat_message( slack_credentials=SlackCredentials(token), channel="#prefect", text=f"Flow run {context.flow_run.name} completed :tada:" ) ) if __name__ == "__main__": example_send_message_flow() ``` ```python async from prefect import flow from prefect.context import get_run_context from prefect_slack import SlackCredentials from prefect_slack.messages import send_chat_message @flow async def example_send_message_flow(): context = get_run_context() # Run other tasks or flows here token = "xoxb-your-bot-token-here" await send_chat_message( slack_credentials=SlackCredentials(token), channel="#prefect", text=f"Flow run {context.flow_run.name} completed :tada:" ) if __name__ == "__main__": asyncio.run(example_send_message_flow()) ``` ## Resources Refer to the `prefect-slack` [SDK documentation](https://reference.prefect.io/prefect_slack/) to explore all the capabilities of the `prefect-slack` library. For further assistance developing with Slack, consult the [Slack documentation](https://api.slack.com/). # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-slack/sdk # prefect-snowflake Source: https://docs-3.prefect.io/integrations/prefect-snowflake/index The `prefect-snowflake` integration makes it easy to connect to Snowflake in your Prefect flows. You can run queries both synchronously and asynchronously as Prefect flows and tasks. ## Getting started ### Prerequisites * [A Snowflake account](https://www.snowflake.com/en/) and the necessary connection information. ### Installation Install `prefect-snowflake` as a dependency of Prefect. If you don't already have Prefect installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[snowflake]" ``` Upgrade to the latest versions of `prefect` and `prefect-snowflake`: ```bash pip install -U "prefect[snowflake]" ``` ### Blocks setup The `prefect-snowflake` integration has two blocks: one for storing credentials and one for storing connection information. Register blocks in this module to view and edit them on Prefect Cloud: ```bash prefect block register -m prefect_snowflake ``` #### Create the credentials block Below is a walkthrough on saving a `SnowflakeCredentials` block through code. Log into your Snowflake account to find your credentials. The example below uses a user and password combination, but refer to the [SDK documentation](https://reference.prefect.io/prefect_snowflake/) for a full list of authentication and connection options. ```python from prefect_snowflake import SnowflakeCredentials credentials = SnowflakeCredentials( account="ACCOUNT-PLACEHOLDER", # resembles nh12345.us-east-2.snowflake user="USER-PLACEHOLDER", password="PASSWORD-PLACEHOLDER" ) credentials.save("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") ``` #### Create the connection block Then, to create a `SnowflakeConnector` block: 1. After logging in, click on any worksheet. 2. On the left side, select a database and schema. 3. On the top right, select a warehouse. 4. Create a short script, replacing the placeholders below. 
```python from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector credentials = SnowflakeCredentials.load("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") connector = SnowflakeConnector( credentials=credentials, database="DATABASE-PLACEHOLDER", schema="SCHEMA-PLACEHOLDER", warehouse="COMPUTE_WH", ) connector.save("CONNECTOR-BLOCK-NAME-PLACEHOLDER") ``` You can now easily load the saved block, which holds your credentials and connection info: ```python from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector SnowflakeConnector.load("CONNECTOR-BLOCK-NAME-PLACEHOLDER") ``` ## Examples To set up a table, use the `execute` and `execute_many` methods. Then, use the `fetch_all` method. If the results are too large to fit into memory, use the `fetch_many` method to retrieve data in chunks. By using the `SnowflakeConnector` as a context manager, you can make sure that the Snowflake connection and cursors are closed properly after you're done with them. ```python from prefect import flow, task from prefect_snowflake import SnowflakeConnector @task def setup_table(block_name: str) -> None: with SnowflakeConnector.load(block_name) as connector: connector.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) connector.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) @task def fetch_data(block_name: str) -> list: all_rows = [] with SnowflakeConnector.load(block_name) as connector: while True: # Repeated fetch* calls using the same operation will # skip re-executing and instead return the next set of results new_rows = connector.fetch_many("SELECT * FROM customers", size=2) if len(new_rows) == 0: break all_rows.append(new_rows) return all_rows @flow def snowflake_flow(block_name: str) -> list: setup_table(block_name) all_rows = fetch_data(block_name) return all_rows if __name__ == "__main__": snowflake_flow("CONNECTOR-BLOCK-NAME-PLACEHOLDER") ``` If the native methods of the block don't meet your requirements, don't worry. You have the option to access the underlying Snowflake connection and utilize its built-in methods as well. ```python import pandas as pd from prefect import flow from prefect_snowflake.database import SnowflakeConnector from snowflake.connector.pandas_tools import write_pandas @flow def snowflake_write_pandas_flow(): connector = SnowflakeConnector.load("my-block") with connector.get_connection() as connection: table_name = "TABLE_NAME" ddl = "NAME STRING, NUMBER INT" statement = f'CREATE TABLE IF NOT EXISTS {table_name} ({ddl})' with connection.cursor() as cursor: cursor.execute(statement) # case sensitivity matters here! df = pd.DataFrame([('Marvin', 42), ('Ford', 88)], columns=['NAME', 'NUMBER']) success, num_chunks, num_rows, _ = write_pandas( conn=connection, df=df, table_name=table_name, database=connector.database, schema=connector.schema_ # note the "_" suffix ) ``` ## Resources Refer to the `prefect-snowflake` [SDK documentation](https://reference.prefect.io/prefect_snowflake/database/) to explore other capabilities of the `prefect-snowflake` library, such as async methods. For further assistance using Snowflake, consult the [Snowflake documentation](https://docs.snowflake.com/) or the [Snowflake Python Connector documentation](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-example).
# SDK docs Source: https://docs-3.prefect.io/integrations/prefect-snowflake/sdk # prefect-sqlalchemy Source: https://docs-3.prefect.io/integrations/prefect-sqlalchemy/index # Welcome! `prefect-sqlalchemy` helps you connect to a database in your Prefect flows. ## Getting started ### Install `prefect-sqlalchemy` The following command will install a version of `prefect-sqlalchemy` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip install "prefect[sqlalchemy]" ``` Upgrade to the latest versions of `prefect` and `prefect-sqlalchemy`: ```bash pip install -U "prefect[sqlalchemy]" ``` ### Register newly installed block types Register the block types in the `prefect-sqlalchemy` module to make them available for use. ```bash prefect block register -m prefect_sqlalchemy ``` ## Examples ### Save credentials to a block To use the `load` method on Blocks, you must have a block saved through code or saved through the UI. ```python from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver connector = SqlAlchemyConnector( connection_info=ConnectionComponents( driver=SyncDriver.POSTGRESQL_PSYCOPG2, username="USERNAME-PLACEHOLDER", password="PASSWORD-PLACEHOLDER", host="localhost", port=5432, database="DATABASE-PLACEHOLDER", ) ) connector.save("BLOCK_NAME-PLACEHOLDER") ``` Load the saved block that holds your credentials: ```python from prefect_sqlalchemy import SqlAlchemyConnector SqlAlchemyConnector.load("BLOCK_NAME-PLACEHOLDER") ``` The required arguments depend upon the desired driver. For example, SQLite requires only the `driver` and `database` arguments: ```python from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver connector = SqlAlchemyConnector( connection_info=ConnectionComponents( driver=SyncDriver.SQLITE_PYSQLITE, database="DATABASE-PLACEHOLDER.db" ) ) connector.save("BLOCK_NAME-PLACEHOLDER") ``` ### Work with databases in a flow To set up a table, use the `execute` and `execute_many` methods. Use the `fetch_many` method to retrieve data in a stream until there's no more data. Use the `SqlAlchemyConnector` as a context manager, to ensure that the SQLAlchemy engine and any connected resources are closed properly after you're done with them. **Async support** `SqlAlchemyConnector` supports async workflows. Just be sure to save, load, and use an async driver, as in the example below. 
```python from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, AsyncDriver connector = SqlAlchemyConnector( connection_info=ConnectionComponents( driver=AsyncDriver.SQLITE_AIOSQLITE, database="DATABASE-PLACEHOLDER.db" ) ) if __name__ == "__main__": connector.save("BLOCK_NAME-PLACEHOLDER") ``` ```python from prefect import flow, task from prefect_sqlalchemy import SqlAlchemyConnector @task def setup_table(block_name: str) -> None: with SqlAlchemyConnector.load(block_name) as connector: connector.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) connector.execute( "INSERT INTO customers (name, address) VALUES (:name, :address);", parameters={"name": "Marvin", "address": "Highway 42"}, ) connector.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, ], ) @task def fetch_data(block_name: str) -> list: all_rows = [] with SqlAlchemyConnector.load(block_name) as connector: while True: # Repeated fetch* calls using the same operation will # skip re-executing and instead return the next set of results new_rows = connector.fetch_many("SELECT * FROM customers", size=2) if len(new_rows) == 0: break all_rows.append(new_rows) return all_rows @flow def sqlalchemy_flow(block_name: str) -> list: setup_table(block_name) all_rows = fetch_data(block_name) return all_rows if __name__ == "__main__": sqlalchemy_flow("BLOCK-NAME-PLACEHOLDER") ``` ```python from prefect import flow, task from prefect_sqlalchemy import SqlAlchemyConnector import asyncio @task async def setup_table(block_name: str) -> None: async with await SqlAlchemyConnector.load(block_name) as connector: await connector.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await connector.execute( "INSERT INTO customers (name, address) VALUES (:name, :address);", parameters={"name": "Marvin", "address": "Highway 42"}, ) await connector.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, ], ) @task async def fetch_data(block_name: str) -> list: all_rows = [] async with await SqlAlchemyConnector.load(block_name) as connector: while True: # Repeated fetch* calls using the same operation will # skip re-executing and instead return the next set of results new_rows = await connector.fetch_many("SELECT * FROM customers", size=2) if len(new_rows) == 0: break all_rows.append(new_rows) return all_rows @flow async def sqlalchemy_flow(block_name: str) -> list: await setup_table(block_name) all_rows = await fetch_data(block_name) return all_rows if __name__ == "__main__": asyncio.run(sqlalchemy_flow("BLOCK-NAME-PLACEHOLDER")) ``` ## Resources Refer to the `prefect-sqlalchemy` [SDK documentation](https://reference.prefect.io/prefect_sqlalchemy/) to explore all the capabilities of the `prefect-sqlalchemy` library. For assistance using SQLAlchemy, consult the [SQLAlchemy documentation](https://www.sqlalchemy.org/). # SDK docs Source: https://docs-3.prefect.io/integrations/prefect-sqlalchemy/sdk # null Source: https://docs-3.prefect.io/integrations/use-integrations Prefect integrations are PyPI packages you can install to help you build and integrate your workflows with third parties. ## Install an integration package Install an integration package with `pip`. 
For example, to install `prefect-aws` you can: * install the package directly: ```bash pip install prefect-aws ``` * install the corresponding extra: ```bash pip install 'prefect[aws]' ``` See [the `project.optional-dependencies` section of `pyproject.toml`](https://github.com/PrefectHQ/prefect/blob/main/pyproject.toml) for the full list of extras and the versions they specify. ## Register blocks from an integration Once the package is installed, [register the blocks](/v3/develop/blocks/#registering-blocks-for-use-in-the-prefect-ui) within the integration to view them in the Prefect Cloud UI: For example, to register the blocks available in `prefect-aws`: ```bash prefect block register -m prefect_aws ``` To use a block's `load` method, you must have a block [saved](/v3/develop/blocks/#saving-blocks). [Learn more about blocks](/v3/develop/blocks). ## Use tasks and flows from an Integration Integrations may contain pre-built tasks and flows that can be imported and called within your code. For example, read a secret from AWS Secrets Manager with the `read_secret` task with the following code: ```python from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.secrets_manager import read_secret @flow def connect_to_database(): aws_credentials = AwsCredentials.load("MY_BLOCK_NAME") secret_value = read_secret( secret_name="db_password", aws_credentials=aws_credentials ) # Then, use secret_value to connect to a database ``` ## Customize tasks and flows from an integration To customize pre-configured tasks or flows, use `with_options`. For example, configure retries for dbt Cloud jobs: ```python from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion custom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options( name="Run My DBT Cloud Job", retries=2, retry_delay_seconds=10 ) @flow def run_dbt_job_flow(): run_result = custom_run_dbt_cloud_job( dbt_cloud_credentials=DbtCloudCredentials.load("my-dbt-cloud-credentials"), job_id=1 ) if __name__ == "__main__": run_dbt_job_flow() ``` # How to use and configure the API client Source: https://docs-3.prefect.io/v3/advanced/api-client ## Overview The [`PrefectClient`](https://reference.prefect.io/prefect/client/) offers methods to simplify common operations against Prefect's REST API that may not be abstracted away by the SDK. For example, to [reschedule flow runs](/v3/develop/interact-with-api/#reschedule-late-flow-runs), one might use methods like: * `read_flow_runs` with a `FlowRunFilter` to read certain flow runs * `create_flow_run_from_deployment` to schedule new flow runs * `delete_flow_run` to delete a very `Late` flow run ### Getting a client By default, `get_client()` returns an asynchronous client to be used as a context manager, but you may also use a synchronous client. {/* pmd-metadata: notest */} ```python async from prefect import get_client async with get_client() as client: response = await client.hello() print(response.json()) # πŸ‘‹ ``` You can also use a synchronous client: ```python sync from prefect import get_client with get_client(sync_client=True) as client: response = client.hello() print(response.json()) # πŸ‘‹ ``` ## Configure custom headers You can configure custom HTTP headers to be sent with every API request by setting the `PREFECT_CLIENT_CUSTOM_HEADERS` setting. 
This is useful for adding authentication headers, API keys, or other custom headers required by proxies, CDNs, or security systems. ### Setting custom headers Custom headers can be configured via environment variables or settings. The headers are specified as key-value pairs in JSON format. ```bash Environment variable export PREFECT_CLIENT_CUSTOM_HEADERS='{"CF-Access-Client-Id": "your-client-id", "CF-Access-Client-Secret": "your-secret"}' ``` ```bash CLI prefect config set PREFECT_CLIENT_CUSTOM_HEADERS='{"CF-Access-Client-Id": "your-client-id", "CF-Access-Client-Secret": "your-secret"}' ``` ```toml prefect.toml [client] custom_headers = '''{ "CF-Access-Client-Id": "your-client-id", "CF-Access-Client-Secret": "your-secret", "X-API-Key": "your-api-key" }''' ``` **Protected headers** Certain headers are protected and cannot be overridden by custom headers for security reasons: * `User-Agent` - Managed by Prefect to identify client version * `Prefect-Csrf-Token` - Used for CSRF protection * `Prefect-Csrf-Client` - Used for CSRF protection If you attempt to override these headers, Prefect will log a warning and ignore the custom header value. ## Examples These examples are meant to illustrate how one might develop their own utilities for interacting with the API. If you believe a client method is missing, or you'd like to see a specific pattern better represented in the SDK generally, please [open an issue](https://github.com/PrefectHQ/prefect/issues/new/choose). ### Reschedule late flow runs To bulk reschedule flow runs that are late, delete the late flow runs and create new ones in a `Scheduled` state with a delay. This is useful if you accidentally scheduled many flow runs of a deployment to an inactive work pool, for example. The following example reschedules the last three late flow runs of a deployment named `healthcheck-storage-test` to run six hours later than their original expected start time. It also deletes any remaining late flow runs of that deployment. 
First, define the rescheduling function: {/* pmd-metadata: notest */} ```python async def reschedule_late_flow_runs( deployment_name: str, delay: timedelta, most_recent_n: int, delete_remaining: bool = True, states: list[str] | None = None ) -> list[FlowRun]: states = states or ["Late"] async with get_client() as client: flow_runs = await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=dict(name=dict(any_=states)), expected_start_time=dict(before_=datetime.now(timezone.utc)), ), deployment_filter=DeploymentFilter(name={'like_': deployment_name}), sort=FlowRunSort.START_TIME_DESC, limit=most_recent_n if not delete_remaining else None ) rescheduled_flow_runs: list[FlowRun] = [] for i, run in enumerate(flow_runs): await client.delete_flow_run(flow_run_id=run.id) if i < most_recent_n: new_run = await client.create_flow_run_from_deployment( deployment_id=run.deployment_id, state=Scheduled(scheduled_time=run.expected_start_time + delay), ) rescheduled_flow_runs.append(new_run) return rescheduled_flow_runs ``` Then use it to reschedule flows: {/* pmd-metadata: notest */} ```python rescheduled_flow_runs = asyncio.run( reschedule_late_flow_runs( deployment_name="healthcheck-storage-test", delay=timedelta(hours=6), most_recent_n=3, ) ) ``` ```python reschedule_late_flows.py from __future__ import annotations import asyncio from datetime import datetime, timedelta, timezone from prefect import get_client from prefect.client.schemas.filters import DeploymentFilter, FlowRunFilter from prefect.client.schemas.objects import FlowRun from prefect.client.schemas.sorting import FlowRunSort from prefect.states import Scheduled async def reschedule_late_flow_runs( deployment_name: str, delay: timedelta, most_recent_n: int, delete_remaining: bool = True, states: list[str] | None = None ) -> list[FlowRun]: states = states or ["Late"] async with get_client() as client: flow_runs = await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=dict(name=dict(any_=states)), expected_start_time=dict(before_=datetime.now(timezone.utc)), ), deployment_filter=DeploymentFilter(name={'like_': deployment_name}), sort=FlowRunSort.START_TIME_DESC, limit=most_recent_n if not delete_remaining else None ) if not flow_runs: print(f"No flow runs found in states: {states!r}") return [] rescheduled_flow_runs: list[FlowRun] = [] for i, run in enumerate(flow_runs): await client.delete_flow_run(flow_run_id=run.id) if i < most_recent_n: new_run = await client.create_flow_run_from_deployment( deployment_id=run.deployment_id, state=Scheduled(scheduled_time=run.expected_start_time + delay), ) rescheduled_flow_runs.append(new_run) return rescheduled_flow_runs if __name__ == "__main__": rescheduled_flow_runs = asyncio.run( reschedule_late_flow_runs( deployment_name="healthcheck-storage-test", delay=timedelta(hours=6), most_recent_n=3, ) ) print(f"Rescheduled {len(rescheduled_flow_runs)} flow runs") assert all(run.state.is_scheduled() for run in rescheduled_flow_runs) assert all( run.expected_start_time > datetime.now(timezone.utc) for run in rescheduled_flow_runs ) ``` ### Get the last `N` completed flow runs from your workspace To get the last `N` completed flow runs from your workspace, use `read_flow_runs` and `prefect.client.schemas`. 
This example gets the last three completed flow runs from your workspace: {/* pmd-metadata: notest */} ```python async def get_most_recent_flow_runs( n: int, states: list[str] | None = None ) -> list[FlowRun]: async with get_client() as client: return await client.read_flow_runs( flow_run_filter=FlowRunFilter( state={'type': {'any_': states or ["COMPLETED"]}} ), sort=FlowRunSort.END_TIME_DESC, limit=n, ) ``` Use it to get the last 3 completed runs: {/* pmd-metadata: notest */} ```python flow_runs: list[FlowRun] = asyncio.run( get_most_recent_flow_runs(n=3) ) ``` ```python get_recent_flows.py from __future__ import annotations import asyncio from prefect import get_client from prefect.client.schemas.filters import FlowRunFilter from prefect.client.schemas.objects import FlowRun from prefect.client.schemas.sorting import FlowRunSort async def get_most_recent_flow_runs( n: int, states: list[str] | None = None ) -> list[FlowRun]: async with get_client() as client: return await client.read_flow_runs( flow_run_filter=FlowRunFilter( state={'type': {'any_': states or ["COMPLETED"]}} ), sort=FlowRunSort.END_TIME_DESC, limit=n, ) if __name__ == "__main__": flow_runs: list[FlowRun] = asyncio.run( get_most_recent_flow_runs(n=3) ) assert len(flow_runs) == 3 assert all( run.state.is_completed() for run in flow_runs ) assert ( end_times := [run.end_time for run in flow_runs] ) == sorted(end_times, reverse=True) ``` Instead of the last three from the whole workspace, you can also use the `DeploymentFilter` to get the last three completed flow runs of a specific deployment. ### Transition all running flows to cancelled through the Client Use `get_client` to set multiple runs to a `Cancelled` state. This example cancels all flow runs that are in `Pending`, `Running`, `Scheduled`, or `Late` states when the script is run.
{/* pmd-metadata: notest */} ```python async def list_flow_runs_with_states(states: list[str]) -> list[FlowRun]: async with get_client() as client: return await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=FlowRunFilterState( name=FlowRunFilterStateName(any_=states) ) ) ) async def cancel_flow_runs(flow_runs: list[FlowRun]): async with get_client() as client: for flow_run in flow_runs: state = flow_run.state.copy( update={"name": "Cancelled", "type": StateType.CANCELLED} ) await client.set_flow_run_state(flow_run.id, state, force=True) ``` Cancel all pending, running, scheduled or late flows: {/* pmd-metadata: notest */} ```python async def bulk_cancel_flow_runs(): states = ["Pending", "Running", "Scheduled", "Late"] flow_runs = await list_flow_runs_with_states(states) while flow_runs: print(f"Cancelling {len(flow_runs)} flow runs") await cancel_flow_runs(flow_runs) flow_runs = await list_flow_runs_with_states(states) asyncio.run(bulk_cancel_flow_runs()) ``` ```python cancel_flows.py import asyncio from prefect import get_client from prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterState, FlowRunFilterStateName from prefect.client.schemas.objects import FlowRun, StateType async def list_flow_runs_with_states(states: list[str]) -> list[FlowRun]: async with get_client() as client: return await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=FlowRunFilterState( name=FlowRunFilterStateName(any_=states) ) ) ) async def cancel_flow_runs(flow_runs: list[FlowRun]): async with get_client() as client: for idx, flow_run in enumerate(flow_runs): print(f"[{idx + 1}] Cancelling flow run '{flow_run.name}' with ID '{flow_run.id}'") state_updates: dict[str, str] = {} state_updates.setdefault("name", "Cancelled") state_updates.setdefault("type", StateType.CANCELLED) state = flow_run.state.copy(update=state_updates) await client.set_flow_run_state(flow_run.id, state, force=True) async def bulk_cancel_flow_runs(): states = ["Pending", "Running", "Scheduled", "Late"] flow_runs = await list_flow_runs_with_states(states) while len(flow_runs) > 0: print(f"Cancelling {len(flow_runs)} flow runs\n") await cancel_flow_runs(flow_runs) flow_runs = await list_flow_runs_with_states(states) print("Done!") if __name__ == "__main__": asyncio.run(bulk_cancel_flow_runs()) ``` ### Create, read, or delete artifacts Create, read, or delete artifacts programmatically through the [Prefect REST API](/v3/api-ref/rest-api/). With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow. For example, to read the five most recently created Markdown, table, and link artifacts, you can run the following: ```python fixture:mock_post_200 import requests PREFECT_API_URL="https://api.prefect.cloud/api/accounts/abc/workspaces/xyz" PREFECT_API_KEY="pnu_ghijk" data = { "sort": "CREATED_DESC", "limit": 5, "artifacts": { "key": { "exists_": True } } } headers = {"Authorization": f"Bearer {PREFECT_API_KEY}"} endpoint = f"{PREFECT_API_URL}/artifacts/filter" response = requests.post(endpoint, headers=headers, json=data) assert response.status_code == 200 for artifact in response.json(): print(artifact) ``` If you don't specify a key or that a key must exist, you will also return results, which are a type of key-less artifact. See the [Prefect REST API documentation](/v3/api-ref/rest-api/) on artifacts for more information. 
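Artifacts can also be created from inside your flows with the Python SDK, which pairs well with the REST read pattern above. Here's a minimal sketch; the keys and link are illustrative:

```python
from prefect import flow
from prefect.artifacts import create_link_artifact, create_markdown_artifact


@flow
def report_results():
    # Keyed artifacts appear in the Artifacts page and are returned by
    # the /artifacts/filter endpoint shown above
    create_markdown_artifact(
        key="nightly-report",
        markdown="# Nightly report\n\nAll checks passed.",
    )
    create_link_artifact(
        key="nightly-report-dashboard",
        link="https://example.com/dashboards/nightly",
    )


if __name__ == "__main__":
    report_results()
```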
# How to customize asset metadata Source: https://docs-3.prefect.io/v3/advanced/assets This guide covers how to enhance your assets with rich metadata, custom properties, and explicit dependency management beyond what's automatically inferred from your task graph. Both `@materialize` and `asset_deps` accept either a string referencing an asset key or a full `Asset` class instance. Using the `Asset` class is the way to provide additional metadata like names, descriptions, and ownership information beyond just the key. ## The Asset class While you can use simple string keys with the `@materialize` decorator, the `Asset` class provides more control over asset properties and metadata. Create `Asset` instances to add organizational context and improve discoverability. ### Asset initialization fields The `Asset` class accepts the following parameters: * **`key`** (required): A valid URI that uniquely identifies your asset. This is the only required field. * **`properties`** (optional): An `AssetProperties` instance containing metadata about the asset. ```python from prefect.assets import Asset, AssetProperties # Simple asset with just a key basic_asset = Asset(key="s3://my-bucket/data.csv") # Asset with full properties detailed_asset = Asset( key="s3://my-bucket/processed-data.csv", properties=AssetProperties( name="Processed Customer Data", description="Clean customer data with PII removed", owners=["data-team@company.com", "alice@company.com"], url="https://dashboard.company.com/datasets/customer-data" ) ) ``` Each of these fields will be displayed alongside the asset in the UI. **Interactive fields** The owners field can optionally reference user emails, as well as user and team handles within your Prefect Cloud workspace. The URL field becomes a clickable link when this asset is displayed. **Updates occur at runtime** Updates to asset metadata are always performed at workflow runtime whenever a materializing task is executed that references the asset. ## Using assets in materializations Once you've defined your `Asset` instances, use them directly with the `@materialize` decorator: ```python from prefect import flow from prefect.assets import Asset, AssetProperties, materialize detailed_asset = Asset( key="s3://my-bucket/processed-data.csv", properties=AssetProperties( name="Processed Customer Data", description="Clean customer data with PII removed", owners=["data-team@company.com", "alice@company.com"], url="https://dashboard.company.com/datasets/customer-data" ) ) properties = AssetProperties( name="Sales Analytics Dataset", description="This dataset contains daily sales figures by region along with customer segmentation data", owners=["analytics-team", "john.doe@company.com"], url="https://analytics.company.com/sales-dashboard" ) sales_asset = Asset( key="snowflake://warehouse/analytics/sales_summary", properties=properties ) @materialize(detailed_asset) def process_customer_data(): # Your processing logic here pass @materialize(sales_asset) def generate_sales_report(): # Your reporting logic here pass @flow def analytics_pipeline(): process_customer_data() generate_sales_report() ``` ## Adding runtime metadata Beyond static properties, you can add dynamic metadata during task execution. This is useful for tracking runtime information like row counts, processing times, or data quality metrics. ### Using `Asset.add_metadata()` The preferred approach is to use the `add_metadata()` method on your `Asset` instances. 
This prevents typos in asset keys and provides better IDE support: ```python from prefect.assets import Asset, AssetProperties, materialize customer_data = Asset( key="s3://my-bucket/customer-data.csv", properties=AssetProperties( name="Customer Data", owners=["data-team@company.com"] ) ) @materialize(customer_data) def process_customers(): # Your processing logic result = perform_customer_processing() # Add runtime metadata customer_data.add_metadata({ "record_count": len(result), "processing_duration_seconds": 45.2, "data_quality_score": 0.95, "last_updated": "2024-01-15T10:30:00Z" }) return result ``` ### Using `add_asset_metadata()` utility Alternatively, you can use the `add_asset_metadata()` function, which requires specifying the asset key: ```python from prefect.assets import materialize, add_asset_metadata @materialize("s3://my-bucket/processed-data.csv") def process_data(): result = perform_processing() add_asset_metadata( "s3://my-bucket/processed-data.csv", {"rows_processed": len(result), "processing_time": "2.5s"} ) return result ``` ### Accumulating metadata You can call metadata methods multiple times to accumulate information: {/* pmd-metadata: notest */} ```python @materialize(customer_data) def comprehensive_processing(): # First processing step raw_data = extract_data() customer_data.add_metadata({"raw_records": len(raw_data)}) # Second processing step cleaned_data = clean_data(raw_data) customer_data.add_metadata({ "cleaned_records": len(cleaned_data), "records_removed": len(raw_data) - len(cleaned_data) }) # Final step final_data = enrich_data(cleaned_data) customer_data.add_metadata({ "final_records": len(final_data), "enrichment_success_rate": 0.92 }) return final_data ``` ## Explicit asset dependencies While Prefect automatically infers dependencies from your task graph, you can explicitly declare asset relationships using the `asset_deps` parameter. This is useful when: * The task graph doesn't fully capture your data dependencies due to dynamic execution rules * You need to reference assets that aren't directly passed between tasks * You want to be explicit about critical dependencies for documentation purposes ### Hard-coding dependencies Use `asset_deps` to explicitly declare which assets your materialization depends on. 
You can reference assets by key string or by full `Asset` instance: ```python from prefect import flow from prefect.assets import Asset, AssetProperties, materialize # Define your assets raw_data_asset = Asset(key="s3://my-bucket/raw-data.csv") config_asset = Asset(key="s3://my-bucket/processing-config.json") processed_asset = Asset(key="s3://my-bucket/processed-data.csv") @materialize(raw_data_asset) def extract_raw_data(): pass @materialize( processed_asset, asset_deps=[raw_data_asset, config_asset] # Explicit dependencies ) def process_data(): # This function depends on both raw data and config # even if they're not directly passed as parameters pass @flow def explicit_dependencies_flow(): extract_raw_data() process_data() # Explicitly depends on raw_data_asset and config_asset ``` ### Mixing inferred and explicit dependencies You can combine task graph inference with explicit dependencies: ```python from prefect import flow from prefect.assets import Asset, materialize upstream_asset = Asset(key="s3://my-bucket/upstream.csv") config_asset = Asset(key="s3://my-bucket/config.json") downstream_asset = Asset(key="s3://my-bucket/downstream.csv") @materialize(upstream_asset) def create_upstream(): return "upstream_data" @materialize( downstream_asset, asset_deps=[config_asset] # Explicit dependency on config ) def create_downstream(upstream_data): # Inferred dependency on upstream_asset # This asset depends on: # 1. upstream_asset (inferred from task graph) # 2. config_asset (explicit via asset_deps) pass @flow def mixed_dependencies(): data = create_upstream() create_downstream(data) ``` ### Best practices for `asset_deps` references Use *string keys* when referencing assets that are materialized by other Prefect workflows. This avoids duplicate metadata definitions and lets the materializing workflow be the source of truth: ```python from prefect.assets import materialize # Good: Reference by key when another workflow materializes the asset @materialize( "s3://my-bucket/final-report.csv", asset_deps=["s3://my-bucket/data-from-other-workflow.csv"] # String key ) def create_report(): pass ``` Use *full `Asset` instances* when referencing assets that are completely external to Prefect. This provides metadata about external systems that Prefect wouldn't otherwise know about: ```python from prefect.assets import Asset, AssetProperties, materialize # Good: Full Asset for external systems external_database = Asset( key="postgres://prod-db/public/users", properties=AssetProperties( name="Production Users Table", description="Main user database maintained by the platform team", owners=["platform-team@company.com"], url="https://internal-db-dashboard.com/users" ) ) @materialize( "s3://my-bucket/user-analytics.csv", asset_deps=[external_database] # Full Asset for external system ) def analyze_users(): pass ``` ## Updating asset properties Asset properties should have one source of truth to avoid conflicts. When you materialize an asset with properties, those properties perform a complete overwrite of all metadata fields for that asset. **Important** Any `Asset` instance with properties will completely replace all existing metadata. Partial updates are not supported - you must provide all the metadata you want to preserve. 
```python from prefect.assets import Asset, AssetProperties, materialize # Initial materialization with full properties initial_asset = Asset( key="s3://my-bucket/evolving-data.csv", properties=AssetProperties( name="Evolving Dataset", description="Initial description", owners=["team-a@company.com"], url="https://dashboard.company.com/evolving-data" ) ) @materialize(initial_asset) def initial_creation(): pass # Later materialization - OVERWRITES all properties updated_asset = Asset( key="s3://my-bucket/evolving-data.csv", properties=AssetProperties( name="Evolving Dataset", # Must include to preserve description="Updated description with new insights", # Updated owners=["team-a@company.com"], # Must include to preserve # url is now None because it wasn't included ) ) @materialize(updated_asset) def update_dataset(): pass # The final asset will have: # - name: "Evolving Dataset" (preserved) # - description: "Updated description with new insights" (updated) # - owners: ["team-a@company.com"] (preserved) # - url: None (lost because not included in update) ``` **Best practice** Designate one workflow as the authoritative source for each asset's metadata. Other workflows that reference the asset should use string keys only to avoid conflicting metadata definitions. ## Further Reading * [Learn about asset health and asset events](/v3/concepts/assets) # How to deploy a web application powered by background tasks Source: https://docs-3.prefect.io/v3/advanced/background-tasks Learn how to background heavy tasks from a web application to dedicated infrastructure. This example demonstrates how to use [background tasks](/v3/concepts/tasks#background-tasks) in the context of a web application using Prefect for task submission, execution, monitoring, and result storage. We'll build out an application using FastAPI to offer API endpoints to our clients, and task workers to execute the background tasks these endpoints defer. Refer to the [examples repository](https://github.com/PrefectHQ/examples/tree/main/apps/background-tasks) for the complete example's source code. This pattern is useful when you need to perform operations that take too long for a standard web request-response cycle, such as data processing, sending emails, or interacting with external APIs that might be slow. ## Overview This example will build out: * `@prefect.task` definitions representing the work you want to run in the background * A `fastapi` application providing API endpoints to: * Receive task parameters via `POST` request and submit the task to Prefect with `.delay()` * Allow polling for the task's status via a `GET` request using its `task_run_id` * A `Dockerfile` to build a multi-stage image for the web app, Prefect server and task worker(s) * A `compose.yaml` to manage lifecycles of the web app, Prefect server and task worker(s) ```bash ├── Dockerfile ├── README.md ├── compose.yaml ├── pyproject.toml ├── src │ └── foo │ ├── __init__.py │ ├── _internal/*.py │ ├── api.py │ └── task.py ``` You can follow along by cloning the [examples repository](https://github.com/PrefectHQ/examples) or instead use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to bootstrap your own new project: ```bash uv init --lib foo uv add prefect marvin ``` This example application is structured as a library with a `src/foo` directory for portability and organization.
This example does ***not*** require: * Prefect Cloud * creating a Prefect Deployment * creating a work pool ## Useful things to remember * You can call any Python code from your task definitions (including other flows and tasks!) * Prefect [Results](/v3/concepts/caching) allow you to save/serialize the `return` value of your task definitions to your result storage (e.g. a local directory, S3, GCS, etc), enabling [caching](/v3/concepts/caching) and [idempotency](/v3/advanced/transactions). ## Defining the background task The core of the background processing is a Python function decorated with `@prefect.task`. This marks the function as a unit of work that Prefect can manage (e.g. observe, cache, retry, etc.) {/* pmd-metadata: notest */} ```python src/foo/task.py from typing import Any, TypeVar import marvin from prefect import task, Task from prefect.cache_policies import INPUTS, TASK_SOURCE from prefect.states import State from prefect.task_worker import serve from prefect.client.schemas.objects import TaskRun T = TypeVar("T") def _print_output(task: Task, task_run: TaskRun, state: State[T]): result = state.result() print(f"result type: {type(result)}") print(f"result: {result!r}") @task(cache_policy=INPUTS + TASK_SOURCE, on_completion=[_print_output]) async def create_structured_output(data: Any, target: type[T], instructions: str) -> T: return await marvin.cast_async( data, target=target, instructions=instructions, ) def main(): serve(create_structured_output) if __name__ == "__main__": main() ``` Key details: * `@task`: Decorator to define our task we want to run in the background. * `cache_policy`: Caching based on `INPUTS` and `TASK_SOURCE`. * `serve(create_structured_output)`: This function starts a task worker subscribed to newly `delay()`ed task runs. ## Building the FastAPI application The FastAPI application provides API endpoints to trigger the background task and check its status. 
{/* pmd-metadata: notest */} ```python src/foo/api.py import logging from uuid import UUID from fastapi import Depends, FastAPI, Response from fastapi.responses import JSONResponse from foo._internal import get_form_data, get_task_result, StructuredOutputRequest from foo.task import create_structured_output logger = logging.getLogger(__name__) app = FastAPI() @app.post("/tasks", status_code=202) async def submit_task( form_data: StructuredOutputRequest = Depends(get_form_data), ) -> JSONResponse: """Submit a task to Prefect for background execution.""" future = create_structured_output.delay( form_data.payload, target=form_data.target_type, instructions=form_data.instructions, ) logger.info(f"Submitted task run: {future.task_run_id}") return {"task_run_id": str(future.task_run_id)} @app.get("/tasks/{task_run_id}/status") async def get_task_status_api(task_run_id: UUID) -> Response: """Checks the status of a submitted task run.""" status, data = await get_task_result(task_run_id) response_data = {"task_run_id": str(task_run_id), "status": status} http_status_code = 200 if status == "completed": response_data["result"] = data elif status == "error": response_data["message"] = data # Optionally set a different HTTP status for errors return JSONResponse(response_data, status_code=http_status_code) ``` The `get_task_result` helper function (in `src/foo/_internal/_prefect.py`) uses the Prefect Python client to interact with the Prefect API: ```python src/foo/_internal/_prefect.py from typing import Any, Literal, cast from uuid import UUID from prefect.client.orchestration import get_client from prefect.client.schemas.objects import TaskRun from prefect.logging import get_logger logger = get_logger(__name__) Status = Literal["completed", "pending", "error"] def _any_task_run_result(task_run: TaskRun) -> Any: try: return cast(Any, task_run.state.result(_sync=True)) # type: ignore except Exception as e: logger.warning(f"Could not retrieve result for task run {task_run.id}: {e}") return None async def get_task_result(task_run_id: UUID) -> tuple[Status, Any]: """Get task result or status. Returns: tuple: (status, data) status: "completed", "pending", or "error" data: the result if completed, error message if error, None if pending """ try: async with get_client() as client: task_run = await client.read_task_run(task_run_id) if not task_run.state: return "pending", None if task_run.state.is_completed(): try: result = _any_task_run_result(task_run) return "completed", result except Exception as e: logger.warning( f"Could not retrieve result for completed task run {task_run_id}: {e}" ) return "completed", "" elif task_run.state.is_failed(): try: error_result = _any_task_run_result(task_run) error_message = ( str(error_result) if error_result else "Task failed without specific error message." ) return "error", error_message except Exception as e: logger.warning( f"Could not retrieve error result for failed task run {task_run_id}: {e}" ) return "error", "" else: return "pending", None except Exception as e: logger.error(f"Error checking task status for {task_run_id}: {e}") return "error", f"Failed to check task status: {str(e)}" ``` This function fetches the `TaskRun` object from the API and checks its `state` to determine if it's `Completed`, `Failed`, or still `Pending`/`Running`. If completed, it attempts to retrieve the result using `task_run.state.result()`. If failed, it tries to get the error message. 
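With both endpoints in place, any HTTP client can submit work and poll for the result. Below is a rough sketch using `requests`; the form field names are assumptions based on the `StructuredOutputRequest` attributes above (the real names are defined by the `get_form_data` helper in the example repository), so adjust them to match your implementation:

```python
import time

import requests

BASE_URL = "http://localhost:8000"  # where the FastAPI app is served

# Submit a task run; field names assumed to mirror StructuredOutputRequest
response = requests.post(
    f"{BASE_URL}/tasks",
    data={
        "payload": "The answer is 42",
        "target_type": "int",
        "instructions": "Extract the integer answer",
    },
)
task_run_id = response.json()["task_run_id"]

# Poll the status endpoint until the task run is no longer pending
while True:
    status = requests.get(f"{BASE_URL}/tasks/{task_run_id}/status").json()
    if status["status"] != "pending":
        break
    time.sleep(1)

print(status)
```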
## Building the Docker Image A multi-stage `Dockerfile` is used to create optimized images for each service (Prefect server, task worker, and web API). This approach helps keep image sizes small and separates build dependencies from runtime dependencies. ```dockerfile Dockerfile # Stage 1: Base image with Python and uv FROM --platform=linux/amd64 ghcr.io/astral-sh/uv:python3.12-bookworm-slim as base WORKDIR /app ENV UV_SYSTEM_PYTHON=1 ENV PATH="/root/.local/bin:$PATH" COPY pyproject.toml uv.lock* ./ # Note: We install all dependencies needed for all stages here. # A more optimized approach might separate dependencies per stage. RUN --mount=type=cache,target=/root/.cache/uv \ uv pip install --system -r pyproject.toml COPY src/ /app/src FROM base as server CMD ["prefect", "server", "start"] # --- Task Worker Stage --- # FROM base as task # Command to start the task worker by running the task script # This script should call `prefect.task_worker.serve(...)` CMD ["python", "src/foo/task.py"] # --- API Stage --- # FROM base as api # Command to start the FastAPI server using uvicorn CMD ["uvicorn", "src.foo.api:app", "--host", "0.0.0.0", "--port", "8000"] ``` * **Base Stage (`base`)**: Sets up Python, `uv`, installs all dependencies from `pyproject.toml` into a base layer to make use of Docker caching, and copies the source code. * **Server Stage (`server`)**: Builds upon the `base` stage. Sets the default command (`CMD`) to start the Prefect server. * **Task Worker Stage (`task`)**: Builds upon the `base` stage. Sets the `CMD` to run the `src/foo/task.py` script, which is expected to contain the `serve()` call for the task(s). * **API Stage (`api`)**: Builds upon the `base` stage. Sets the `CMD` to start the FastAPI application using `uvicorn`. The `compose.yaml` file then uses the `target` build argument to specify which of these final stages (`server`, `task`, `api`) to use for each service container. ## Declaring the application services We use `compose.yaml` to define and run the multi-container application, managing the lifecycles of the FastAPI web server, the Prefect API server, database and task worker(s). ```yaml compose.yaml services: prefect-server: build: context: . target: server ports: - "4200:4200" volumes: - prefect-data:/root/.prefect # Persist Prefect DB environment: # Allow connections from other containers PREFECT_SERVER_API_HOST: 0.0.0.0 # Task Worker task: build: context: . target: task deploy: replicas: 1 # task workers are safely horizontally scalable (think redis stream consumer groups) volumes: # Mount storage for results - ./task-storage:/task-storage depends_on: prefect-server: condition: service_started environment: PREFECT_API_URL: http://prefect-server:4200/api PREFECT_LOCAL_STORAGE_PATH: /task-storage PREFECT_LOGGING_LOG_PRINTS: "true" PREFECT_RESULTS_PERSIST_BY_DEFAULT: "true" MARVIN_ENABLE_DEFAULT_PRINT_HANDLER: "false" OPENAI_API_KEY: ${OPENAI_API_KEY} develop: # Optionally watch for code changes for development watch: - action: sync path: .
target: /app ignore: - .venv/ - task-storage/ - action: rebuild path: uv.lock volumes: # Named volumes for data persistence prefect-data: {} task-storage: {} ``` In a production use-case, you'd likely want to: * write a `Dockerfile` for each service * add a `postgres` service and [configure it as the Prefect database](/v3/manage/server/index#quickstart%3A-configure-a-postgresql-database-with-docker). * remove the hot-reloading configuration in the `develop` section - **`prefect-server`**: Runs the Prefect API server and UI. * `build`: Uses a multi-stage `Dockerfile` (not shown here, but present in the example repo) targeting the `server` stage. * `ports`: Exposes the Prefect API/UI on port `4200`. * `volumes`: Uses a named volume `prefect-data` to persist the Prefect SQLite database (`/root/.prefect/prefect.db`) across container restarts. * `PREFECT_SERVER_API_HOST=0.0.0.0`: Makes the API server listen on all interfaces within the Docker network, allowing the `task` and `api` services to connect. - **`task`**: Runs the Prefect task worker process (executing `python src/foo/task.py` which calls `serve`). * `build`: Uses the `task` stage from the `Dockerfile`. * `depends_on`: Ensures the `prefect-server` service is started before this service attempts to connect. * `PREFECT_API_URL`: Crucial setting that tells the worker where to find the Prefect API to poll for submitted task runs. * `PREFECT_LOCAL_STORAGE_PATH=/task-storage`: Configures the worker to store task run results in the `/task-storage` directory inside the container. This path is mounted to the host using the `task-storage` named volume via `volumes: - ./task-storage:/task-storage` (or just `task-storage:` if using a named volume without a host path binding). * `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true`: Tells Prefect tasks to automatically save their results using the configured storage (defined by `PREFECT_LOCAL_STORAGE_PATH` in this case). * `PREFECT_LOGGING_LOG_PRINTS=true`: Configures the Prefect logger to capture output from `print()` statements within tasks. * `OPENAI_API_KEY=${OPENAI_API_KEY}`: Passes secrets needed by the task code from the host environment (via a `.env` file loaded by Docker Compose) into the container's environment. - **`api`**: Runs the FastAPI web application. * `build`: Uses the `api` stage from the `Dockerfile`. * `depends_on`: Waits for the `prefect-server` (required for submitting tasks and checking status) and optionally the `task` worker. * `PREFECT_API_URL`: Tells the FastAPI application where to send `.delay()` calls and status check requests. * `PREFECT_LOCAL_STORAGE_PATH`: May be needed if the API itself needs to directly read result files (though typically fetching results via `task_run.state.result()` is preferred). - **`volumes`**: Defines named volumes (`prefect-data`, `task-storage`) to persist data generated by the containers. ## Running this example Assuming you have obtained the code (either by cloning the repository or using `uv init` as described previously) and are in the project directory: 1. **Prerequisites:** Ensure Docker Desktop (or equivalent) with `docker compose` support is running. 2. **Build and Run Services:** This example's task uses [marvin](https://github.com/PrefectHQ/marvin), which (by default) requires an OpenAI API key. 
   Provide it as an environment variable when starting the services:

   ```bash
   OPENAI_API_KEY=<your-openai-api-key> docker compose up --build --watch
   ```

   This command will:

   * `--build`: Build the container images if they don't exist or if the Dockerfile/context has changed.
   * `--watch`: Watch for changes in the project source code and automatically sync/rebuild services (useful for development).
   * Add `--detach` or `-d` to run the containers in the background.

3. **Access Services:**
   * If you cloned the existing example, check out the basic [htmx](https://htmx.org/) UI at [http://localhost:8000](http://localhost:8000)
   * FastAPI docs: [http://localhost:8000/docs](http://localhost:8000/docs)
   * Prefect UI (for observing task runs): [http://localhost:4200](http://localhost:4200)

### Cleaning up

```bash
docker compose down

# also remove the named volumes
docker compose down -v
```

## Next Steps

This example provides a repeatable pattern for integrating Prefect-managed background tasks with any Python web application. You can:

* Explore the [background tasks examples repository](https://github.com/PrefectHQ/prefect-background-task-examples) for more examples.
* Adapt `src/**/*.py` to define and submit your specific web app and background tasks.
* Configure Prefect settings (environment variables in `compose.yaml`) further, for example, using different result storage or logging levels.
* Deploy these services to cloud infrastructure using managed container services.

# How to customize caching behavior

Source: https://docs-3.prefect.io/v3/advanced/caching

### Separate cache key storage from result storage

To store cache records separately from the cached value, you can configure a cache policy to use a custom storage location.

Here's an example of a cache policy configured to store cache records in a local directory:

```python
from prefect import task
from prefect.cache_policies import TASK_SOURCE, INPUTS


cache_policy = (TASK_SOURCE + INPUTS).configure(key_storage="/path/to/cache/storage")


@task(cache_policy=cache_policy)
def my_cached_task(x: int):
    return x + 42
```

Cache records will be stored in the specified directory while the persisted results will continue to be stored in `~/.prefect/storage`.

To store cache records in a remote object store such as S3, pass a storage block instead:

```python
from prefect import task
from prefect.cache_policies import TASK_SOURCE, INPUTS

from prefect_aws import S3Bucket, AwsCredentials


s3_bucket = S3Bucket(
    credentials=AwsCredentials(
        aws_access_key_id="my-access-key-id",
        aws_secret_access_key="my-secret-access-key",
    ),
    bucket_name="my-bucket",
)

# save the block to ensure it is available across machines
s3_bucket.save("my-cache-records-bucket")

cache_policy = (TASK_SOURCE + INPUTS).configure(key_storage=s3_bucket)


@task(cache_policy=cache_policy)
def my_cached_task(x: int):
    return x + 42
```

Storing cache records in a remote object store allows you to share cache records across multiple machines.

### Isolate cache access

You can control concurrent access to cache records by setting the `isolation_level` parameter on the cache policy. Prefect supports two isolation levels: `READ_COMMITTED` and `SERIALIZABLE`.

By default, cache records operate with a `READ_COMMITTED` isolation level. This guarantees that reading a cache record will see the latest committed cache value, but allows multiple executions of the same task to occur simultaneously.
Consider the following example: ```python from prefect import task from prefect.cache_policies import INPUTS import threading cache_policy = INPUTS @task(cache_policy=cache_policy) def my_task_version_1(x: int): print("my_task_version_1 running") return x + 42 @task(cache_policy=cache_policy) def my_task_version_2(x: int): print("my_task_version_2 running") return x + 43 if __name__ == "__main__": thread_1 = threading.Thread(target=my_task_version_1, args=(1,)) thread_2 = threading.Thread(target=my_task_version_2, args=(1,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` When running this script, both tasks will execute in parallel and perform work despite both tasks using the same cache key. For stricter isolation, you can use the `SERIALIZABLE` isolation level. This ensures that only one execution of a task occurs at a time for a given cache record via a locking mechanism. When setting `isolation_level` to `SERIALIZABLE`, you must also provide a `lock_manager` that implements locking logic for your system. Here's an updated version of the previous example that uses `SERIALIZABLE` isolation: ```python import threading from prefect import task from prefect.cache_policies import INPUTS from prefect.locking.memory import MemoryLockManager from prefect.transactions import IsolationLevel cache_policy = INPUTS.configure( isolation_level=IsolationLevel.SERIALIZABLE, lock_manager=MemoryLockManager(), ) @task(cache_policy=cache_policy) def my_task_version_1(x: int): print("my_task_version_1 running") return x + 42 @task(cache_policy=cache_policy) def my_task_version_2(x: int): print("my_task_version_2 running") return x + 43 if __name__ == "__main__": thread_1 = threading.Thread(target=my_task_version_1, args=(2,)) thread_2 = threading.Thread(target=my_task_version_2, args=(2,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` In this example, only one of the tasks will run and the other will use the cached value. **Locking in a distributed setting** To manage locks in a distributed setting, you will need to use a storage system for locks that is accessible by all of your execution infrastructure. We recommend using the `RedisLockManager` provided by `prefect-redis` in conjunction with a shared Redis instance: ```python from prefect import task from prefect.cache_policies import TASK_SOURCE, INPUTS from prefect.transactions import IsolationLevel from prefect_redis import RedisLockManager cache_policy = (INPUTS + TASK_SOURCE).configure( isolation_level=IsolationLevel.SERIALIZABLE, lock_manager=RedisLockManager(host="my-redis-host"), ) @task(cache_policy=cache_policy) def my_cached_task(x: int): return x + 42 ``` ### Coordinate caching across multiple tasks To coordinate cache writes across tasks, you can run multiple tasks within a single [*transaction*](/v3/develop/transactions). ```python from prefect import task, flow from prefect.transactions import transaction @task(cache_key_fn=lambda *args, **kwargs: "static-key-1") def load_data(): return "some-data" @task(cache_key_fn=lambda *args, **kwargs: "static-key-2") def process_data(data, fail): if fail: raise RuntimeError("Error! Abort!") return len(data) @flow def multi_task_cache(fail: bool = True): with transaction(): data = load_data() process_data(data=data, fail=fail) ``` When this flow is run with the default parameter values it will fail on the `process_data` task after the `load_data` task has succeeded. 
However, because caches are only written to when a transaction is *committed*, the `load_data` task will *not* write a result to its cache key location until the `process_data` task succeeds as well.

On a subsequent run with `fail=False`, both tasks will be re-executed and the results will be cached.

### Handle non-serializable objects

You may have task inputs that can't (or shouldn't) be serialized as part of the cache key. There are two direct approaches to handle this, both of which are based on the same idea.

You can **adjust the serialization logic** to only serialize certain properties of an input:

1. Using a custom cache key function:

```python
from prefect import flow, task
from prefect.cache_policies import CacheKeyFnPolicy, RUN_ID
from prefect.context import TaskRunContext
from pydantic import BaseModel, ConfigDict


class NotSerializable:
    def __getstate__(self):
        raise TypeError("NooOoOOo! I will not be serialized!")


class ContainsNonSerializableObject(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    name: str
    bad_object: NotSerializable


def custom_cache_key_fn(context: TaskRunContext, parameters: dict) -> str:
    return parameters["some_object"].name


@task(cache_policy=CacheKeyFnPolicy(cache_key_fn=custom_cache_key_fn) + RUN_ID)
def use_object(some_object: ContainsNonSerializableObject) -> str:
    return f"Used {some_object.name}"


@flow
def demo_flow():
    obj = ContainsNonSerializableObject(name="test", bad_object=NotSerializable())
    state = use_object(obj, return_state=True)  # Not cached!
    assert state.name == "Completed"
    other_state = use_object(obj, return_state=True)  # Cached!
    assert other_state.name == "Cached"
    assert state.result() == other_state.result()
```

2. Using Pydantic's [custom serialization](https://docs.pydantic.dev/latest/concepts/serialization/#custom-serializers) on your input types:

```python
from pydantic import BaseModel, ConfigDict, model_serializer

from prefect import flow, task
from prefect.cache_policies import INPUTS, RUN_ID


class NotSerializable:
    def __getstate__(self):
        raise TypeError("NooOoOOo! I will not be serialized!")


class ContainsNonSerializableObject(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    name: str
    bad_object: NotSerializable

    @model_serializer
    def ser_model(self) -> dict:
        """Only serialize the name, not the problematic object"""
        return {"name": self.name}


@task(cache_policy=INPUTS + RUN_ID)
def use_object(some_object: ContainsNonSerializableObject) -> str:
    return f"Used {some_object.name}"


@flow
def demo_flow():
    some_object = ContainsNonSerializableObject(
        name="test",
        bad_object=NotSerializable(),
    )
    state = use_object(some_object, return_state=True)  # Not cached!
    assert state.name == "Completed"
    other_state = use_object(some_object, return_state=True)  # Cached!
    assert other_state.name == "Cached"
    assert state.result() == other_state.result()
```

Choose the approach that best fits your needs:

* Use Pydantic models when you want consistent serialization across your application
* Use custom cache key functions when you need different caching logic for different tasks

# How to cancel running workflows

Source: https://docs-3.prefect.io/v3/advanced/cancel-workflows

You can cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client.

When requesting cancellation, the flow run moves to a "Cancelling" state. If the deployment is associated with a work pool, then the worker monitors the state of flow runs and detects that cancellation is requested.
The worker then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure is killed, ensuring the flow run exits.

**A deployment is required**

Flow run cancellation requires that the flow run is associated with a deployment. A monitoring process must be running to enforce the cancellation.

Inline nested flow runs (those created without `run_deployment`) cannot be cancelled without cancelling the parent flow run. To cancel a nested flow run independent of its parent flow run, we recommend deploying it separately and starting it using the [run\_deployment](/v3/deploy/index) function.

Cancellation is resilient to restarts of Prefect workers. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the `infrastructure_pid` or infrastructure identifier. Generally, this is composed of two parts:

* Scope: identifying where the infrastructure is running.
* ID: a unique identifier for the infrastructure within the scope.

The scope ensures that Prefect does not kill the wrong infrastructure. For example, workers running on multiple machines may have overlapping process IDs but should not have a matching scope.

The identifiers for infrastructure types are:

* Processes: The machine hostname and the PID.
* Docker Containers: The Docker API URL and container ID.
* Kubernetes Jobs: The Kubernetes cluster name and the job name.

While the cancellation process is robust, there are a few issues that can occur:

* If the infrastructure for the flow run does not support cancellation, cancellation will not work.
* If the identifier scope does not match when attempting to cancel a flow run, the worker cannot cancel the flow run. Another worker may attempt cancellation.
* If the infrastructure associated with the run cannot be found or has already been killed, the worker marks the flow run as cancelled.
* If the `infrastructure_pid` is missing, the flow run is marked as cancelled but cancellation cannot be enforced.
* If the worker runs into an unexpected error during cancellation, the flow run may or may not be cancelled depending on where the error occurred. The worker will try again to cancel the flow run. Another worker may attempt cancellation.

### Cancel through the CLI

From the command line in your execution environment, you can cancel a flow run by using the `prefect flow-run cancel` CLI command, passing the ID of the flow run.

```bash
prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'
```

### Cancel through the UI

Navigate to the flow run's detail page and click `Cancel` in the upper right corner.

Prefect UI

# How to create custom blocks

Source: https://docs-3.prefect.io/v3/advanced/custom-blocks

### Create a new block type

To create a custom block type, define a class that subclasses `Block`. The `Block` base class builds on Pydantic's `BaseModel`, so you can declare custom fields just like a [Pydantic model](https://pydantic-docs.helpmanual.io/usage/models/#basic-model-usage).
We've already seen an example of a `Cube` block that represents a cube and holds information about the length of each edge in inches: {/* pmd-metadata: notest */} ```python from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float Cube.register_type_and_schema() ``` ### Register custom blocks In addition to the `register_type_and_schema` method shown above, you can register blocks from a Python module with a CLI command: ```bash prefect block register --module prefect_aws.credentials ``` This command is useful for registering all blocks found within a module in a [Prefect Integration library](/integrations/). Alternatively, if a custom block was created in a `.py` file, you can register the block with the CLI command: ```bash prefect block register --file my_block.py ``` Block documents can now be created with the registered block schema. ### Secret fields All block values are encrypted before being stored. If you have values that you would not like visible in the UI or in logs, use the `SecretStr` field type provided by Pydantic to automatically obfuscate those values. You can use this capability for fields that store credentials such as passwords and API tokens. Here's an example of an `AwsCredentials` block that uses `SecretStr`: ```python from typing import Optional from prefect.blocks.core import Block from pydantic import SecretStr class AwsCredentials(Block): aws_access_key_id: Optional[str] = None aws_secret_access_key: Optional[SecretStr] = None aws_session_token: Optional[str] = None profile_name: Optional[str] = None region_name: Optional[str] = None ``` Since `aws_secret_access_key` has the `SecretStr` type hint assigned to it, the value of that field is not exposed if the object is logged: {/* pmd-metadata: notest */} ```python aws_credentials_block = AwsCredentials( aws_access_key_id="AKIAJKLJKLJKLJKLJKLJK", aws_secret_access_key="secret_access_key" ) print(aws_credentials_block) # aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None ``` Prefect's `SecretDict` field type allows you to add a dictionary field to your block that automatically obfuscates values at all levels in the UI or in logs. This capability is useful for blocks where typing or structure of secret fields is not known until configuration time. Here's an example of a block that uses `SecretDict`: ```python from prefect.blocks.core import Block from prefect.blocks.fields import SecretDict class SystemConfiguration(Block): system_secrets: SecretDict system_variables: dict system_configuration_block = SystemConfiguration( system_secrets={ "password": "p@ssw0rd", "api_token": "token_123456789", "private_key": "", }, system_variables={ "self_destruct_countdown_seconds": 60, "self_destruct_countdown_stop_time": 7, }, ) ``` `system_secrets` is obfuscated when `system_configuration_block` is displayed, but `system_variables` show up in plain-text: {/* pmd-metadata: notest */} ```python print(system_configuration_block) # SystemConfiguration( # system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'), # system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7} # ) ``` ### Customize a block's display You can set metadata fields on a block type's subclass to control how a block displays. 
Available metadata fields include:

| Property           | Description                                                                                                                                   |
| ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| `_block_type_name` | Display name of the block in the UI. Defaults to the class name.                                                                              |
| `_block_type_slug` | Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name.              |
| `_logo_url`        | URL pointing to an image that should be displayed for the block type in the UI. Defaults to `None`.                                           |
| `_description`     | Short description of block type. Defaults to docstring, if provided.                                                                          |
| `_code_example`    | Short code snippet shown in UI for how to load/use block type. Defaults to first example provided in the docstring of the class, if provided. |

### Update custom `Block` types

Here's an example of how to add a `bucket_folder` field to your custom `S3Bucket` block; it represents the default path to read and write objects from (this field exists on [our implementation](https://github.com/PrefectHQ/prefect/blob/main/src/integrations/prefect-aws/prefect_aws/s3.py)).

Add the new field to the class definition:

{/* pmd-metadata: notest */}

```python
from typing import Optional

from prefect.blocks.core import Block


class S3Bucket(Block):
    bucket_name: str
    credentials: AwsCredentials
    bucket_folder: Optional[str] = None
    ...
```

Then [register the updated block type](#register-custom-blocks) with either your Prefect Cloud account or your self-hosted Prefect server instance.

If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, migrate them to the new version of your block type by adding the missing values:

{/* pmd-metadata: notest */}

```python
# Bypass Pydantic validation to allow your local Block class to load the old block version
my_s3_bucket_block = S3Bucket.load("my-s3-bucket", validate=False)

# Set the new field to an appropriate value
my_s3_bucket_block.bucket_folder = "my-default-bucket-path"

# Overwrite the old block values and update the expected fields on the block
my_s3_bucket_block.save("my-s3-bucket", overwrite=True)
```

# How to daemonize worker processes

Source: https://docs-3.prefect.io/v3/advanced/daemonize-processes

Learn how Prefect flow deployments enable configuring flows for scheduled and remote execution with workers.

When running workflow applications, it's helpful to create long-running processes that run at startup and are resilient to failure. This guide shows you how to set up a systemd service to create long-running Prefect processes that poll for scheduled flow runs, including how to:

* create a Linux user
* install and configure Prefect
* set up a systemd service for the Prefect worker or `.serve` process

## Prerequisites

* An environment with a Linux operating system with [systemd](https://systemd.io/) and Python 3.9 or later.
* A superuser account that can run `sudo` commands.
* A Prefect Cloud account, or an instance of a Prefect server running on your network.

If using an [AWS t2-micro EC2 instance](https://aws.amazon.com/ec2/instance-types/t2/) with an AWS Linux image, you can install Python and pip with `sudo yum install -y python3 python3-pip`.

## Steps

A systemd service is ideal for running a long-lived process on a Linux VM or physical Linux server.
You will use systemd and learn how to automatically start a [Prefect worker](/v3/deploy/infrastructure-concepts/workers/) or long-lived [`serve` process](/v3/how-to-guides/deployment_infra/run-flows-in-local-processes) when Linux starts. This approach provides resilience by automatically restarting the process if it crashes.

### Step 1: Add a user

Create a user account on your Linux system for the Prefect process. You can run a worker or serve process as root, but it's best practice to create a dedicated user.

In a terminal, run:

```bash
sudo useradd -m prefect
sudo passwd prefect
```

When prompted, enter a password for the `prefect` account.

Next, log in to the `prefect` account by running:

```bash
sudo su prefect
```

### Step 2: Install Prefect

Run:

```bash
pip3 install prefect
```

This guide assumes you are installing Prefect globally, rather than in a virtual environment. If you run the systemd service with a virtual environment, change the `ExecStart` path. For example, if using [venv](https://docs.python.org/3/library/venv.html), point `ExecStart` at the `prefect` application in the `bin` subdirectory of your virtual environment.

Next, set up your environment so the Prefect client knows which server to connect to.

If connecting to Prefect Cloud, follow [the instructions](/v3/how-to-guides/cloud/connect-to-cloud) to obtain an API key, and then run the following:

```bash
prefect cloud login -k YOUR_API_KEY
```

When prompted, choose the Prefect workspace to log in to.

If connecting to a self-hosted Prefect server instance instead of a Prefect Cloud account, run the following command, substituting the IP address of your server:

```bash
prefect config set PREFECT_API_URL=http://your-prefect-server-IP:4200/api
```

Run the `exit` command to sign out of the `prefect` Linux account. This command switches you back to your sudo-enabled account where you can run the commands in the next section.

### Step 3: Set up a systemd service

See the section below if you are setting up a Prefect worker. Skip to the [next section](#setting-up-a-systemd-service-for-serve) if you are setting up a Prefect `.serve` process.

#### Setting up a systemd service for a Prefect worker

Move into the `/etc/systemd/system` folder and open a file for editing. We use the Vim text editor below.

```bash
cd /etc/systemd/system
sudo vim my-prefect-service.service
```

```txt my-prefect-service.service
[Unit]
Description=Prefect worker

[Service]
User=prefect
WorkingDirectory=/home
ExecStart=prefect worker start --pool YOUR_WORK_POOL_NAME
Restart=always

[Install]
WantedBy=multi-user.target
```

Make sure you substitute your own work pool name.

#### Setting up a systemd service for `.serve`

Copy your flow entrypoint Python file and any other files needed for your flow to run into the `/home` directory (or the directory of your choice).

Here's a basic example flow:

```python my_file.py
from prefect import flow


@flow(log_prints=True)
def say_hi():
    print("Hello!")


if __name__ == "__main__":
    say_hi.serve(name="served and daemonized deployment")
```

To make changes to your flow code without restarting your process, push your code to git-based cloud storage (GitHub, BitBucket, GitLab) and use `flow.from_source().serve()`, as in the example below.

```python my_remote_flow_code_file.py
from prefect import flow


if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/org/repo.git",
        entrypoint="path/to/my_remote_flow_code_file.py:say_hi",
    ).serve(name="deployment-with-github-storage")
```

Make sure you substitute your own flow code entrypoint path.
If you change the flow entrypoint parameters, you must restart the process. Move into the `/etc/systemd/system` folder and open a file for editing. This example below uses Vim. ```bash cd /etc/systemd/system sudo vim my-prefect-service.service ``` ```txt my-prefect-service.service [Unit] Description=Prefect serve [Service] User=prefect WorkingDirectory=/home ExecStart=python3 my_file.py Restart=always [Install] WantedBy=multi-user.target ``` ### Step 4: Save, enable, and start the service To save the file and exit Vim, hit the escape key, type `:wq!`, then press the return key. Next, make systemd aware of your new service by running: ```bash sudo systemctl daemon-reload ``` Then, enable the service by running: ```bash sudo systemctl enable my-prefect-service ``` This command ensures it runs when your system boots. Next, start the service: ```bash sudo systemctl start my-prefect-service ``` Run your deployment from the UI and check the logs on the **Flow Runs** page. You can see if your daemonized Prefect worker or serve process is running, and the Prefect logs with `systemctl status my-prefect-service`. You now have a systemd service that starts when your system boots, which will restart if it ever crashes. # How to maintain your Prefect database Source: https://docs-3.prefect.io/v3/advanced/database-maintenance Monitor and maintain your PostgreSQL database for self-hosted Prefect deployments Self-hosted Prefect deployments require database maintenance to ensure optimal performance and manage disk usage. This guide provides monitoring queries and maintenance strategies for PostgreSQL databases. This guide is for advanced users managing production deployments. Always test maintenance operations in a non-production environment first, if possible. Exact numbers included in this guide will vary based on your workload and installation. ## Quick reference **Daily tasks:** * Check disk space and table sizes * Monitor bloat levels (> 50% requires action) * Run retention policies for old flow runs **Weekly tasks:** * Review autovacuum performance * Check index usage and bloat * Analyze high-traffic tables **Red flags requiring immediate action:** * Disk usage > 80% * Table bloat > 100% * Connection count approaching limit * Autovacuum hasn't run in 24+ hours ## Database growth monitoring Prefect stores entities like events, flow runs, task runs, and logs that accumulate over time. Monitor your database regularly to understand growth patterns specific to your usage. 
### Check table sizes ```sql -- Total database size SELECT pg_size_pretty(pg_database_size('prefect')) AS database_size; -- Table sizes with row counts SELECT schemaname, relname AS tablename, pg_size_pretty(pg_total_relation_size(schemaname||'.'||relname)) AS total_size, to_char(n_live_tup, 'FM999,999,999') AS row_count FROM pg_stat_user_tables WHERE schemaname = 'public' ORDER BY pg_total_relation_size(schemaname||'.'||relname) DESC LIMIT 20; ``` ### Monitor disk space Track overall disk usage to prevent outages: ```sql -- Check database disk usage SELECT current_setting('data_directory') AS data_directory, pg_size_pretty(pg_database_size('prefect')) AS database_size, pg_size_pretty(pg_total_relation_size('public.events')) AS events_table_size, pg_size_pretty(pg_total_relation_size('public.log')) AS log_table_size; -- Check available disk space (requires pg_stat_disk extension or shell access) -- Run from shell: df -h /path/to/postgresql/data ``` Common large tables in Prefect databases: * `events` - Automatically generated for all state changes (often the largest table) * `log` - Flow and task run logs * `flow_run` and `task_run` - Execution records * `flow_run_state` and `task_run_state` - State history ### Monitor table bloat PostgreSQL tables can accumulate "dead tuples" from updates and deletes. Monitor bloat percentage to identify tables needing maintenance: ```sql SELECT schemaname, relname AS tablename, n_live_tup AS live_tuples, n_dead_tup AS dead_tuples, CASE WHEN n_live_tup > 0 THEN round(100.0 * n_dead_tup / n_live_tup, 2) ELSE 0 END AS bloat_percent, last_vacuum, last_autovacuum FROM pg_stat_user_tables WHERE schemaname = 'public' AND n_dead_tup > 1000 ORDER BY bloat_percent DESC; ``` ### Monitor index bloat Indexes can also bloat and impact performance: ```sql -- Check index sizes and bloat SELECT schemaname, relname AS tablename, indexrelname AS indexname, pg_size_pretty(pg_relation_size(indexrelid)) AS index_size, idx_scan AS index_scans, idx_tup_read AS tuples_read, idx_tup_fetch AS tuples_fetched FROM pg_stat_user_indexes WHERE schemaname = 'public' ORDER BY pg_relation_size(indexrelid) DESC LIMIT 20; ``` ## PostgreSQL VACUUM VACUUM reclaims storage occupied by dead tuples. While PostgreSQL runs autovacuum automatically, you may need manual intervention for heavily updated tables. ### Manual VACUUM For tables with high bloat percentages: ```sql -- Standard VACUUM (doesn't lock table) VACUUM ANALYZE flow_run; VACUUM ANALYZE task_run; VACUUM ANALYZE log; -- VACUUM FULL (rebuilds table, requires exclusive lock) -- WARNING: This COMPLETELY LOCKS the table - no reads or writes! -- Can take HOURS on large tables. Only use as last resort. 
VACUUM FULL flow_run; -- Better alternative: pg_repack (if installed) -- Rebuilds tables online without blocking -- pg_repack -t flow_run -d prefect ``` ### Monitor autovacuum Check if autovacuum is keeping up with your workload: ```sql -- Show autovacuum settings SHOW autovacuum; SHOW autovacuum_vacuum_scale_factor; SHOW autovacuum_vacuum_threshold; -- Check when tables were last vacuumed SELECT schemaname, relname AS tablename, last_vacuum, last_autovacuum, vacuum_count, autovacuum_count FROM pg_stat_user_tables WHERE schemaname = 'public' ORDER BY last_autovacuum NULLS FIRST; ``` ### Tune autovacuum for Prefect workloads Depending on your workload, your write patterns may require more aggressive autovacuum settings than defaults: ```sql -- For high-volume events table (INSERT/DELETE heavy) ALTER TABLE events SET ( autovacuum_vacuum_scale_factor = 0.05, -- Default is 0.2 autovacuum_vacuum_threshold = 1000, autovacuum_analyze_scale_factor = 0.02 -- Keep stats current ); -- For state tables (INSERT-heavy) ALTER TABLE flow_run_state SET ( autovacuum_vacuum_scale_factor = 0.1, autovacuum_analyze_scale_factor = 0.05 ); -- For frequently updated tables ALTER TABLE flow_run SET ( autovacuum_vacuum_scale_factor = 0.1, autovacuum_vacuum_threshold = 500 ); ``` ### When to take action **Bloat thresholds:** * **\< 20% bloat**: Normal, autovacuum should handle * **20-50% bloat**: Monitor closely, consider manual VACUUM * **> 50% bloat**: Manual VACUUM recommended * **> 100% bloat**: Significant performance impact, urgent action needed **Warning signs:** * Autovacuum hasn't run in > 24 hours on active tables * Query performance degrading over time * Disk space usage growing faster than data volume ## Data retention Implement data retention policies to manage database growth. The following example shows a Prefect flow that safely deletes old flow runs using the Prefect API: Using the Prefect API ensures proper cleanup of all related data, including logs and artifacts. The API handles cascade deletions and triggers necessary background tasks. 
{/* pmd-metadata: notest */}

```python
import asyncio
from datetime import datetime, timedelta, timezone

from prefect import flow, task, get_run_logger
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
    FlowRunFilterStartTime,
)
from prefect.client.schemas.objects import StateType


@task
async def delete_old_flow_runs(
    days_to_keep: int = 30,
    batch_size: int = 100
):
    """Delete completed flow runs older than specified days."""
    logger = get_run_logger()

    async with get_client() as client:
        cutoff = datetime.now(timezone.utc) - timedelta(days=days_to_keep)

        # Create filter for old completed flow runs
        # Note: Using start_time because created time filtering is not available
        flow_run_filter = FlowRunFilter(
            start_time=FlowRunFilterStartTime(before_=cutoff),
            state=FlowRunFilterState(
                type=FlowRunFilterStateType(
                    any_=[StateType.COMPLETED, StateType.FAILED, StateType.CANCELLED]
                )
            )
        )

        # Get flow runs to delete
        flow_runs = await client.read_flow_runs(
            flow_run_filter=flow_run_filter,
            limit=batch_size
        )

        deleted_total = 0

        while flow_runs:
            batch_deleted = 0
            failed_deletes = []

            # Delete each flow run through the API
            for flow_run in flow_runs:
                try:
                    await client.delete_flow_run(flow_run.id)
                    deleted_total += 1
                    batch_deleted += 1
                except Exception as e:
                    logger.warning(f"Failed to delete flow run {flow_run.id}: {e}")
                    failed_deletes.append(flow_run.id)

                # Rate limiting - adjust based on your API capacity
                if batch_deleted % 10 == 0:
                    await asyncio.sleep(0.5)

            logger.info(f"Deleted {batch_deleted}/{len(flow_runs)} flow runs (total: {deleted_total})")

            if failed_deletes:
                logger.warning(f"Failed to delete {len(failed_deletes)} flow runs")

            # Get next batch
            flow_runs = await client.read_flow_runs(
                flow_run_filter=flow_run_filter,
                limit=batch_size
            )

            # Delay between batches to avoid overwhelming the API
            await asyncio.sleep(1.0)

        logger.info(f"Retention complete. Total deleted: {deleted_total}")


@flow(name="database-retention")
async def retention_flow():
    """Run database retention tasks."""
    await delete_old_flow_runs(
        days_to_keep=30,
        batch_size=100
    )
```

### Direct SQL approach

In some cases, you may need to use direct SQL for performance reasons or when the API is unavailable. Be aware that direct deletion bypasses application-level cascade logic:

{/* pmd-metadata: notest */}

```python
# Direct SQL only deletes what's defined by database foreign keys
# Logs and artifacts may be orphaned without proper cleanup
import asyncpg

# `connection_url` is your PostgreSQL connection string; `cutoff` is a datetime
# like the one computed in the retention flow above
conn = await asyncpg.connect(connection_url)
try:
    await conn.execute(
        """
        DELETE FROM flow_run
        WHERE created < $1
        AND state_type IN ('COMPLETED', 'FAILED', 'CANCELLED')
        """,
        cutoff,
    )
finally:
    await conn.close()
```

### Important considerations

1. **Filtering limitation**: The current API filters by `start_time` (when the flow run began execution), not `created` time (when the flow run was created in the database). This means flows that were created but never started won't be deleted.
2. **Test first**: Run with `SELECT` instead of `DELETE` to preview what will be removed
3. **Start conservative**: Begin with longer retention periods and adjust based on needs
4. **Monitor performance**: Large deletes can impact database performance
5. **Backup**: Always back up before major cleanup operations

## Event retention

Events are automatically generated for all state changes in Prefect and can quickly become the largest table in your database. Prefect includes built-in event retention that automatically removes old events.
### Configure event retention The default retention period is 7 days. For high-volume deployments, consider reducing this: ```bash # Set retention to 2 days (as environment variable) export PREFECT_EVENTS_RETENTION_PERIOD="2d" # Or in your prefect configuration prefect config set PREFECT_EVENTS_RETENTION_PERIOD="2d" ``` ### Check event table size Monitor your event table growth: ```sql -- Event table size and row count SELECT pg_size_pretty(pg_total_relation_size('public.events')) AS total_size, to_char(count(*), 'FM999,999,999') AS row_count, min(occurred) AS oldest_event, max(occurred) AS newest_event FROM events; ``` Events are used for automations and triggers. Ensure your retention period keeps events long enough for your automation needs. ## Connection monitoring Monitor connection usage to prevent exhaustion: ```sql SELECT count(*) AS total_connections, count(*) FILTER (WHERE state = 'active') AS active, count(*) FILTER (WHERE state = 'idle') AS idle, (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_connections FROM pg_stat_activity; ``` ## Automating database maintenance ### Schedule maintenance tasks Schedule the retention flow to run automatically. See [how to create deployments](/v3/how-to-guides/deployments/create-deployments) for creating scheduled deployments. For example, you could run the retention flow daily at 2 AM to clean up old flow runs. ### Recommended maintenance schedule * **Hourly**: Monitor disk space and connection count * **Daily**: Run retention policies, check bloat levels * **Weekly**: Analyze tables, review autovacuum performance * **Monthly**: REINDEX heavily used indexes, full database backup ## Troubleshooting common issues ### "VACUUM is taking forever" * Check for long-running transactions blocking VACUUM: ```sql SELECT pid, age(clock_timestamp(), query_start), usename, query FROM pg_stat_activity WHERE state <> 'idle' AND query NOT ILIKE '%vacuum%' ORDER BY age DESC; ``` * Consider using `pg_repack` instead of `VACUUM FULL` * Run during low-traffic periods ### "Database is growing despite retention policies" * Verify event retention is configured: `prefect config view | grep EVENTS_RETENTION` * Check if autovacuum is running on the events table * Ensure retention flow is actually executing (check flow run history) ### "Queries are getting slower over time" * Update table statistics: `ANALYZE;` * Check for missing indexes using `pg_stat_user_tables` * Review query plans with `EXPLAIN ANALYZE` ### "Connection limit reached" * Implement connection pooling immediately * Check for connection leaks: connections in 'idle' state for hours * Reduce Prefect worker/agent connection counts ## Further reading * [PostgreSQL documentation on VACUUM](https://www.postgresql.org/docs/current/sql-vacuum.html) * [PostgreSQL routine maintenance](https://www.postgresql.org/docs/current/routine-vacuuming.html) * [Monitoring PostgreSQL](https://www.postgresql.org/docs/current/monitoring-stats.html) * [pg\_repack extension](https://github.com/reorg/pg_repack) # How to build deployments via CI/CD Source: https://docs-3.prefect.io/v3/advanced/deploy-ci-cd CI/CD resources for working with Prefect. export const home = { tf: "https://registry.terraform.io/providers/PrefectHQ/prefect/latest/docs/guides/getting-started", cli: "https://docs.prefect.io/v3/api-ref/cli/index", api: "https://app.prefect.cloud/api/docs", helm: "https://github.com/PrefectHQ/prefect-helm/tree/main/charts" }; export const TF = ({name, href}) =>

  <p>
    You can manage {name} with the <a href={href}>Terraform provider for Prefect</a>.
  </p>
; Many organizations deploy Prefect workflows through their CI/CD process. Each organization has their own unique CI/CD setup, but a common pattern is to use CI/CD to manage Prefect [deployments](/v3/concepts/deployments). Combining Prefect's deployment features with CI/CD tools enables efficient management of flow code updates, scheduling changes, and container builds. This guide uses [GitHub Actions](https://docs.github.com/en/actions) to implement a CI/CD process, but these concepts are generally applicable across many CI/CD tools. Note that Prefect's primary ways for creating deployments, a `.deploy` flow method or a `prefect.yaml` configuration file, are both designed for building and pushing images to a Docker registry. ## Get started with GitHub Actions and Prefect In this example, you'll write a GitHub Actions workflow that runs each time you push to your repository's `main` branch. This workflow builds and pushes a Docker image containing your flow code to Docker Hub, then deploys the flow to Prefect Cloud. ### Repository secrets Your CI/CD process must be able to authenticate with Prefect to deploy flows. Deploy flows securely and non-interactively in your CI/CD process by saving your `PREFECT_API_URL` and `PREFECT_API_KEY` [as secrets in your repository's settings](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions). This allows them to be accessed in your CI/CD runner's environment without exposing them in any scripts or configuration files. In this scenario, deploying flows involves building and pushing Docker images, so add `DOCKER_USERNAME` and `DOCKER_PASSWORD` as secrets to your repository as well. Create secrets for GitHub Actions in your repository under **Settings -> Secrets and variables -> Actions -> New repository secret**: Creating a GitHub Actions secret ### Write a GitHub workflow To deploy your flow through GitHub Actions, you need a workflow YAML file. GitHub looks for workflow YAML files in the `.github/workflows/` directory in the root of your repository. In their simplest form, GitHub workflow files are made up of triggers and jobs. The `on:` trigger is set to run the workflow each time a push occurs on the `main` branch of the repository. The `deploy` job is comprised of four `steps`: * **`Checkout`** clones your repository into the GitHub Actions runner so you can reference files or run scripts from your repository in later steps. * **`Log in to Docker Hub`** authenticates to DockerHub so your image can be pushed to the Docker registry in your DockerHub account. [docker/login-action](https://github.com/docker/login-action) is an existing GitHub action maintained by Docker. `with:` passes values into the Action, similar to passing parameters to a function. * **`Setup Python`** installs your selected version of Python. * **`Prefect Deploy`** installs the dependencies used in your flow, then deploys your flow. `env:` makes the `PREFECT_API_KEY` and `PREFECT_API_URL` secrets from your repository available as environment variables during this step's execution. For reference, the examples below live in their respective branches of [this repository](https://github.com/prefecthq/cicd-example). ``` . 
| -- .github/ | |-- workflows/ | |-- deploy-prefect-flow.yaml |-- flow.py |-- requirements.txt ``` `flow.py` ```python from prefect import flow @flow(log_prints=True) def hello(): print("Hello!") if __name__ == "__main__": hello.deploy( name="my-deployment", work_pool_name="my-work-pool", image="my_registry/my_image:my_image_tag", ) ``` `.github/workflows/deploy-prefect-flow.yaml` ```yaml name: Deploy Prefect flow on: push: branches: - main jobs: deploy: name: Deploy runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Log in to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} - name: Setup Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Prefect Deploy env: PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }} PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }} run: | pip install -r requirements.txt python flow.py ``` ``` . |-- .github/ | |-- workflows/ | |-- deploy-prefect-flow.yaml |-- flow.py |-- prefect.yaml |-- requirements.txt ``` `flow.py` ```python from prefect import flow @flow(log_prints=True) def hello(): print("Hello!") ``` `prefect.yaml` ```yaml name: cicd-example prefect-version: 3.0.0 build: - prefect_docker.deployments.steps.build_docker_image: id: build-image requires: prefect-docker>=0.3.1 image_name: my_registry/my_image tag: my_image_tag dockerfile: auto push: - prefect_docker.deployments.steps.push_docker_image: requires: prefect-docker>=0.3.1 image_name: "{{ build-image.image_name }}" tag: "{{ build-image.tag }}" pull: null deployments: - name: my-deployment entrypoint: flow.py:hello work_pool: name: my-work-pool work_queue_name: default job_variables: image: "{{ build-image.image }}" ``` `.github/workflows/deploy-prefect-flow.yaml` ```yaml name: Deploy Prefect flow on: push: branches: - main jobs: deploy: name: Deploy runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Log in to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} - name: Setup Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Prefect Deploy env: PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }} PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }} run: | pip install -r requirements.txt prefect deploy -n my-deployment ``` ### Run a GitHub workflow After pushing commits to your repository, GitHub automatically triggers a run of your workflow. Monitor the status of running and completed workflows from the **Actions** tab of your repository. A GitHub Action triggered via push View the logs from each workflow step as they run. The `Prefect Deploy` step includes output about your image build and push, and the creation/update of your deployment. ```bash Successfully built image '***/cicd-example:latest' Successfully pushed image '***/cicd-example:latest' Successfully created/updated all deployments! 
Deployments
|-----------------------------------------|
| Name                | Status  | Details |
|---------------------|---------|---------|
| hello/my-deployment | applied |         |
|-----------------------------------------|
```

## Advanced example

In more complex scenarios, CI/CD processes often need to accommodate several additional considerations to enable a smooth development workflow:

* Making code available in different environments as it advances through stages of development
* Handling independent deployment of distinct groupings of work, as in a monorepo
* Efficiently using build time to avoid repeated work

This [example repository](https://github.com/prefecthq/cicd-example-workspaces) addresses each of these considerations with a combination of Prefect's and GitHub's capabilities.

### Deploy to multiple workspaces

The deployment processes to run are automatically selected when changes are pushed, depending on two conditions:

```yaml
on:
  push:
    branches:
      - stg
      - main
    paths:
      - "project_1/**"
```

* **`branches:`** - which branch has changed. This ultimately selects which Prefect workspace a deployment is created or updated in. In this example, changes on the `stg` branch deploy flows to a staging workspace, and changes on the `main` branch deploy flows to a production workspace.
* **`paths:`** - which project folders' files have changed. Since each project folder contains its own flows, dependencies, and `prefect.yaml`, it represents a complete set of logic and configuration that can deploy independently.

Each project in this repository gets its own GitHub Actions workflow YAML file. The `prefect.yaml` file in each project folder depends on environment variables dictated by the selected job in each CI/CD workflow, enabling external code storage for Prefect deployments that is clearly separated across projects and environments.

```
.
|--- cicd-example-workspaces-prod  # production bucket
|    |--- project_1
|    |--- project_2
|--- cicd-example-workspaces-stg   # staging bucket
|    |--- project_1
|    |--- project_2
```

Deployments in this example use S3 for code storage, so it's important that push steps place flow files in separate locations depending upon their respective environment and project so that no deployment overwrites another deployment's files.

### Caching build dependencies

Since building Docker images and installing Python dependencies are essential parts of the deployment process, it's useful to rely on caching to skip repeated build steps.

The `setup-python` action offers [caching options](https://github.com/actions/setup-python#caching-packages-dependencies) so Python packages do not have to be downloaded on repeat workflow runs.

```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: "3.12"
    cache: "pip"
```

The `build-push-action` for building Docker images also offers [caching options for GitHub Actions](https://docs.docker.com/build/cache/backends/gha/). If you are not using GitHub, other remote [cache backends](https://docs.docker.com/build/cache/backends/) are available as well.
```yaml - name: Build and push id: build-docker-image env: GITHUB_SHA: ${{ steps.get-commit-hash.outputs.COMMIT_HASH }} uses: docker/build-push-action@v5 with: context: ${{ env.PROJECT_NAME }}/ push: true tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.PROJECT_NAME }}:${{ env.GITHUB_SHA }}-stg cache-from: type=gha cache-to: type=gha,mode=max ``` ``` importing cache manifest from gha:*** DONE 0.1s [internal] load build context transferring context: 70B done DONE 0.0s [2/3] COPY requirements.txt requirements.txt CACHED [3/3] RUN pip install -r requirements.txt CACHED ``` ## Prefect GitHub Actions Prefect provides its own GitHub Actions for [authentication](https://github.com/PrefectHQ/actions-prefect-auth) and [deployment creation](https://github.com/PrefectHQ/actions-prefect-deploy). These actions simplify deploying with CI/CD when using `prefect.yaml`, especially in cases where a repository contains flows used in multiple deployments across multiple Prefect Cloud workspaces. Here's an example of integrating these actions into the workflow above: ```yaml name: Deploy Prefect flow on: push: branches: - main jobs: deploy: name: Deploy runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Log in to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} - name: Setup Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Prefect Auth uses: PrefectHQ/actions-prefect-auth@v1 with: prefect-api-key: ${{ secrets.PREFECT_API_KEY }} prefect-workspace: ${{ secrets.PREFECT_WORKSPACE }} - name: Run Prefect Deploy uses: PrefectHQ/actions-prefect-deploy@v4 with: deployment-names: my-deployment requirements-file-paths: requirements.txt ``` ## Authenticate to other Docker image registries The `docker/login-action` GitHub Action supports pushing images to a wide variety of image registries. For example, if you are storing Docker images in AWS Elastic Container Registry, you can add your ECR registry URL to the `registry` key in the `with:` part of the action and use an `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as your `username` and `password`. ```yaml - name: Login to ECR uses: docker/login-action@v3 with: registry: .dkr.ecr..amazonaws.com username: ${{ secrets.AWS_ACCESS_KEY_ID }} password: ${{ secrets.AWS_SECRET_ACCESS_KEY }} ``` ## Further reading # How to detect and respond to zombie flows Source: https://docs-3.prefect.io/v3/advanced/detect-zombie-flows Learn how to detect and respond to zombie flows. Sudden infrastructure failures (like machine crashes or container evictions) can cause flow runs to become unresponsive and appear stuck in a `Running` state. To mitigate this, flow runs triggered by deployments can emit heartbeats to drive Automations that detect and respond to these "zombie" flow runs, ensuring they are marked as `Crashed` if they stop reporting heartbeats. ### Enable flow run heartbeat events You will need to ensure you're running Prefect version 3.1.8 or greater and set `PREFECT_RUNNER_HEARTBEAT_FREQUENCY` to an integer greater than 30 to emit flow run heartbeat events. 
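For illustration, assuming you manage settings through the Prefect CLI (an equivalent environment variable on the process running your flows works just as well), the heartbeat frequency might be enabled like this:

```bash
# Example value only: emit a flow run heartbeat event every 60 seconds
prefect config set PREFECT_RUNNER_HEARTBEAT_FREQUENCY=60
```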
### Create the automation To create an automation that marks zombie flow runs as crashed, run this script: ```python from datetime import timedelta from prefect.automations import Automation from prefect.client.schemas.objects import StateType from prefect.events.actions import ChangeFlowRunState from prefect.events.schemas.automations import EventTrigger, Posture from prefect.events.schemas.events import ResourceSpecification my_automation = Automation( name="Crash zombie flows", trigger=EventTrigger( after={"prefect.flow-run.heartbeat"}, expect={ "prefect.flow-run.heartbeat", "prefect.flow-run.Completed", "prefect.flow-run.Failed", "prefect.flow-run.Cancelled", "prefect.flow-run.Crashed", }, match=ResourceSpecification({"prefect.resource.id": ["prefect.flow-run.*"]}), for_each={"prefect.resource.id"}, posture=Posture.Proactive, threshold=1, within=timedelta(seconds=90), ), actions=[ ChangeFlowRunState( state=StateType.CRASHED, message="Flow run marked as crashed due to missing heartbeats.", ) ], ) if __name__ == "__main__": my_automation.create() ``` The trigger definition says that `after` each heartbeat event for a flow run we `expect` to see another heartbeat event or a terminal state event for that same flow run `within` 90 seconds of a heartbeat event. ### Adjusting behavior with settings If `PREFECT_RUNNER_HEARTBEAT_FREQUENCY` is set to `30`, the automation will trigger only after 3 heartbeats have been missed. You can adjust `within` in the trigger definition and `PREFECT_RUNNER_HEARTBEAT_FREQUENCY` to change how quickly the automation will fire after the server stops receiving flow run heartbeats. You can also add additional actions to your automation to send a notification when zombie runs are detected. # How to develop a custom worker Source: https://docs-3.prefect.io/v3/advanced/developing-a-custom-worker Learn how to create a Prefect worker to run your flows. Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure. A list of available workers can be found in the [workers documentation](/v3/concepts/workers#worker-types). What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure. ## Worker Configuration When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run. !!! tip "How are the configuration values populated?" The work pool that a worker polls for flow runs has a [base job template](/v3/how-to-guides/deployment_infra/manage-work-pools#base-job-template) associated with it. The template is the contract for how configuration values populate for each flow run. The keys in the `job_configuration` section of this base job template match the worker's configuration class attributes. The values in the `job_configuration` section of the base job template are used to populate the attributes of the worker's configuration class. The work pool creator gets to decide how they want to populate the values in the `job_configuration` section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. 
Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice. ### Implementing a `BaseJobConfiguration` Subclass A worker developer defines their worker's configuration to function with a class extending [`BaseJobConfiguration`](/v3/api-ref/python/prefect-workers-base#basejobconfiguration). `BaseJobConfiguration` has attributes that are common to all workers: | Attribute | Description | | --------- | ------------------------------------------------------------------------------- | | `name` | The name to assign to the created execution environment. | | `env` | Environment variables to set in the created execution environment. | | `labels` | The labels assigned to the created execution environment for metadata purposes. | | `command` | The command to use when starting a flow run. | Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the [`prepare_for_flow_run`](/v3/api-ref/python/prefect-workers-base#prepare-for-flow-run) method. Here's an example `prepare_for_flow_run` method that adds a label to the execution environment: ```python def prepare_for_flow_run( self, flow_run, deployment = None, flow = None, work_pool = None, worker_name = None ): super().prepare_for_flow_run(flow_run, deployment, flow, work_pool, worker_name) self.labels.append("my-custom-label") ``` A worker configuration class is a [Pydantic model](https://docs.pydantic.dev/usage/models/), so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this: ```python from pydantic import Field from prefect.workers.base import BaseJobConfiguration class MyWorkerConfiguration(BaseJobConfiguration): memory: int = Field( default=1024, description="Memory allocation for the execution environment." ) cpu: int = Field( default=500, description="CPU allocation for the execution environment." ) ``` This configuration class will populate the `job_configuration` section of the resulting base job template. For this example, the base job template would look like this: ```yaml job_configuration: name: "{{ name }}" env: "{{ env }}" labels: "{{ labels }}" command: "{{ command }}" memory: "{{ memory }}" cpu: "{{ cpu }}" variables: type: object properties: name: title: Name description: Name given to infrastructure created by a worker. type: string env: title: Environment Variables description: Environment variables to set when starting a flow run. type: object additionalProperties: type: string labels: title: Labels description: Labels applied to infrastructure created by a worker. type: object additionalProperties: type: string command: title: Command description: The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker. type: string memory: title: Memory description: Memory allocation for the execution environment. type: integer default: 1024 cpu: title: CPU description: CPU allocation for the execution environment. type: integer default: 500 ``` This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment. 
Notice that each attribute for the class was added in the `job_configuration` section with placeholders whose name matches the attribute name. The `variables` section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes. ### Customizing Configuration Attribute Templates You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the `memory` attribute, you can do so like this: ```python from pydantic import Field from prefect.workers.base import BaseJobConfiguration class MyWorkerConfiguration(BaseJobConfiguration): memory: str = Field( default="1024Mi", description="Memory allocation for the execution environment.", json_schema_extra=dict(template="{{ memory_request }}Mi") ) cpu: str = Field( default="500m", description="CPU allocation for the execution environment.", json_schema_extra=dict(template="{{ cpu_request }}m") ) ``` Notice that we changed the type of each attribute to `str` to accommodate the units, and we added a new `json_schema_extra` attribute to each attribute. The `template` key in `json_schema_extra` is used to populate the `job_configuration` section of the resulting base job template. For this example, the `job_configuration` section of the resulting base job template would look like this: ```yaml job_configuration: name: "{{ name }}" env: "{{ env }}" labels: "{{ labels }}" command: "{{ command }}" memory: "{{ memory_request }}Mi" cpu: "{{ cpu_request }}m" ``` Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the [Worker Template Variables](#worker-template-variables) section. ### Rules for Template Variable Interpolation When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules: 1. If a template variable is the only value for a key in the `job_configuration` section, the key will be replaced with the value template variable. 2. If a template variable is part of a string (i.e., there is text before or after the template variable), the value of the template variable will be interpolated into the string. 3. If a template variable is the only value for a key in the `job_configuration` section and no value is provided for the template variable, the key will be removed from the `job_configuration` section. These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset. ### Template Variable Usage Strategies Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. 
This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization. There are two patterns that are represented in current worker implementations: #### Pass-Through In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment. This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge. The [Docker worker](https://prefecthq.github.io/prefect-docker/worker/) is an example of a worker that uses this pattern. #### Infrastructure as Code Templating Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition). In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form. This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators. The [Kubernetes worker](https://prefecthq.github.io/prefect-kubernetes/worker/) is an example of a worker that uses this pattern. ### Configuring Credentials When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user. For example, if you want to allow the user to configure AWS credentials, you can do so like this: ```python from prefect.workers.base import BaseJobConfiguration from prefect_aws import AwsCredentials from pydantic import Field class MyWorkerConfiguration(BaseJobConfiguration): aws_credentials: AwsCredentials | None = Field( default=None, description="AWS credentials to use when creating AWS resources." ) ``` Users can create and assign a block to the `aws_credentials` attribute in the UI and the worker will use these credentials when interacting with AWS resources. ## Worker Template Variables Providing template variables for a base job template defines the fields that deployment creators can override per deployment. The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use. Default template variables for a worker are defined by implementing the `BaseVariables` class. Like the `BaseJobConfiguration` class, the `BaseVariables` class has attributes that are common to all workers: | Attribute | Description | | --------- | ---------------------------------------------------------------------------- | | `name` | The name to assign to the created execution environment. | | `env` | Environment variables to set in the created execution environment. | | `labels` | The labels assigned the created execution environment for metadata purposes. | | `command` | The command to use when starting a flow run. | Additional attributes can be added to the `BaseVariables` class to define additional template variables. 
For example, if you want to allow memory and CPU requests for your worker, you can do so like this: ```python from pydantic import Field from prefect.workers.base import BaseVariables class MyWorkerTemplateVariables(BaseVariables): memory_request: int = Field( default=1024, description="Memory allocation for the execution environment." ) cpu_request: int = Field( default=500, description="CPU allocation for the execution environment." ) ``` When `MyWorkerTemplateVariables` is used in conjunction with `MyWorkerConfiguration` from the [Customizing Configuration Attribute Templates](#customizing-configuration-attribute-templates) section, the resulting base job template will look like this: ```yaml job_configuration: name: "{{ name }}" env: "{{ env }}" labels: "{{ labels }}" command: "{{ command }}" memory: "{{ memory_request }}Mi" cpu: "{{ cpu_request }}m" variables: type: object properties: name: title: Name description: Name given to infrastructure created by a worker. type: string env: title: Environment Variables description: Environment variables to set when starting a flow run. type: object additionalProperties: type: string labels: title: Labels description: Labels applied to infrastructure created by a worker. type: object additionalProperties: type: string command: title: Command description: The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker. type: string memory_request: title: Memory Request description: Memory allocation for the execution environment. type: integer default: 1024 cpu_request: title: CPU Request description: CPU allocation for the execution environment. type: integer default: 500 ``` Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the `variables` section of a base job template and validate the template variables provided by the user. We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation. ## Worker Implementation Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API. ### Attributes To implement a worker, you must implement the `BaseWorker` class and provide it with the following attributes: | Attribute | Description | Required | | ----------------------------- | -------------------------------------------- | -------- | | `type` | The type of the worker. | Yes | | `job_configuration` | The configuration class for the worker. | Yes | | `job_configuration_variables` | The template variables class for the worker. | No | | `_documentation_url` | Link to documentation for the worker. | No | | `_logo_url` | Link to a logo for the worker. | No | | `_description` | A description of the worker. | No | ### Methods #### `run` In addition to the attributes above, you must also implement a `run` method. The `run` method is called for each flow run the worker receives for execution from the work pool. 
The `run` method has the following signature: ```python import anyio import anyio.abc from prefect.client.schemas.objects import FlowRun from prefect.workers.base import BaseWorkerResult, BaseJobConfiguration async def run( self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: anyio.abc.TaskStatus = anyio.TASK_STATUS_IGNORED, ) -> BaseWorkerResult: ... ``` The `run` method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully. The `run` method must also return a `BaseWorkerResult` object. The `BaseWorkerResult` object returned contains information about the flow run execution. For the most part, you can implement the `BaseWorkerResult` with no modifications like so: ```python from prefect.workers.base import BaseWorkerResult class MyWorkerResult(BaseWorkerResult): """Result returned by the MyWorker.""" ``` If you would like to return more information about a flow run, then additional attributes can be added to the `BaseWorkerResult` class. ### Worker Implementation Example Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts. ```python import anyio import anyio.abc from prefect.client.schemas.objects import FlowRun from prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables from pydantic import Field class MyWorkerConfiguration(BaseJobConfiguration): memory: str = Field( default="1024Mi", description="Memory allocation for the execution environment.", json_schema_extra=dict(template="{{ memory_request }}Mi") ) cpu: str = Field( default="500m", description="CPU allocation for the execution environment.", json_schema_extra=dict(template="{{ cpu_request }}m") ) class MyWorkerTemplateVariables(BaseVariables): memory_request: int = Field( default=1024, description="Memory allocation for the execution environment." ) cpu_request: int = Field( default=500, description="CPU allocation for the execution environment." ) class MyWorkerResult(BaseWorkerResult): """Result returned by the MyWorker.""" class MyWorker(BaseWorker): type = "my-worker" job_configuration = MyWorkerConfiguration job_configuration_variables = MyWorkerTemplateVariables _documentation_url = "https://example.com/docs" _logo_url = "https://example.com/logo" _description = "My worker description." async def run( self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: anyio.abc.TaskStatus = anyio.TASK_STATUS_IGNORED, ) -> BaseWorkerResult: # Create the execution environment and start execution job = await self._create_and_start_job(configuration) # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure # if the flow run is cancelled. task_status.started(job.id) # Monitor the execution job_status = await self._watch_job(job, configuration) exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting return MyWorkerResult( status_code=exit_code, identifier=job.id, ) ``` Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the `run` method is: 1. Create the execution environment and start the flow run execution 2. Mark the flow run as started via the passed `task_status` object 3. Monitor the execution 4. 
Get the execution's final status from the infrastructure and return a `BaseWorkerResult` object To see other examples of worker implementations, see the [`ProcessWorker`](/v3/api-ref/python/prefect-workers-process) and [`KubernetesWorker`](https://prefecthq.github.io/prefect-kubernetes/worker/) implementations. ### Integrating with the Prefect CLI Workers can be started via the Prefect CLI by providing the `--type` option to the `prefect worker start` CLI command. To make your worker type available via the CLI, it must be available at import time. If your worker is in a package, you can add an entry point to your setup file in the following format: ```python entry_points={ "prefect.collections": [ "my_package_name = my_worker_module", ] }, ``` Prefect will discover this entry point and load your work module in the specified module. The entry point will allow the worker to be available via the CLI. # Configure UI forms for validating workflow inputs Source: https://docs-3.prefect.io/v3/advanced/form-building Learn how to craft validated and user-friendly input forms for workflows. Parameterizing workflows is a critical part of orchestration. It allows you to create contracts between modular workflows in your organization and empower less-technical users to interact with your workflows intuitively. [Pydantic](https://docs.pydantic.dev/) is a powerful library for data validation using Python type annotations, which is used by Prefect to build a parameter schema for your workflow. This allows you to: * check runtime parameter values against the schema (from the UI or the SDK) * build a user-friendly form in the Prefect UI * easily reuse parameter types in similar workflows In this tutorial, we'll craft a workflow signature that the Prefect UI will render as a self-documenting form. ## Motivation Let's say you have a workflow that triggers a marketing email blast which looks like: {/* pmd-metadata: notest */} ```python @flow def send_marketing_email( mailing_lists: list[str], subject: str, body: str, test_mode: bool = False, attachments: list[str] | None = None ): """ Send a marketing email blast to the given lists. Args: mailing_lists: A list of lists to email. subject: The subject of the email. body: The body of the email. test_mode: Whether to send a test email. attachments: A list of attachments to include in the email. """ ... ``` When you deploy this flow, Prefect will automatically inspect your function signature and generate a form for you: initial form This is good enough for many cases, but consider these additional constraints that could arise from business needs or tech stack restrictions: * there are only a few valid values for `mailing_lists` * the `subject` must not exceed 30 characters * no more than 5 `attachments` are allowed You *can* simply check these constraints in the body of your flow function: {/* pmd-metadata: notest */} ```python @flow def send_marketing_email(...): if len(subject) > 30: raise ValueError("Subject must be less than 30 characters") if mailing_lists not in ["newsletter", "customers", "beta-testers"]: raise ValueError("Invalid list to email") if len(attachments) > 5: raise ValueError("Too many attachments") # etc... 
``` but there are several downsides to this: * you have to spin up the infrastructure associated with your flow in order to check the constraints, which is wasteful if it turns out that bad parameters were provided * this might get duplicative, especially if you have similarly constrained parameters for different workflows To improve on this, we will use `pydantic` to build a convenient, self-documenting, and reusable flow signature that the Prefect UI can build a better form from. ## Building a convenient flow signature Let's address the constraints on `mailing_lists`, `subject`, and `attachments`. ### Using `Literal` to restrict valid values > there are only a few valid values for `mailing_lists` Say our valid mailing lists are: `["newsletter", "customers", "beta-testers"]` We can define a `Literal` to specify the valid values for the `mailing_lists` parameter. ```python from typing import Literal MailingList = Literal["newsletter", "customers", "beta-testers"] ``` You can use an `Enum` to achieve the same effect. ```python from enum import Enum class MailingList(Enum): NEWSLETTER = "newsletter" CUSTOMERS = "customers" BETA_TESTERS = "beta-testers" ``` ### Using a `BaseModel` subclass to group and constrain parameters Both the `subject` and `attachments` parameters have constraints that we want to enforce. > the `subject` must not exceed 30 characters > the `attachments` must not exceed 5 items Additionally, the `subject`, `body`, and `attachments` parameters are all related to the same thing: the content of the email. We can define a `BaseModel` subclass to group these parameters together and apply these constraints. {/* pmd-metadata: notest */} ```python from pydantic import BaseModel, Field class EmailContent(BaseModel): subject: str = Field(max_length=30) body: str = Field(default=...) attachments: list[str] = Field(default_factory=list, max_length=5) ``` `pydantic.Field` accepts a `description` kwarg that is displayed in the form above the field input. {/* pmd-metadata: notest */} ```python subject: str = Field(description="The subject of the email", max_length=30) ``` field description Similarly, you can: * pass `title` to `Field` to override the field name in the form * define a docstring for `EmailContent` to add a description to this group of parameters in the form ### Rewriting the flow signature Now that we have defined the `MailingList` and `EmailContent` types, we can use them in our flow signature: {/* pmd-metadata: notest */} ```python @flow def send_marketing_email( mailing_lists: list[MailingList], content: EmailContent, test_mode: bool = False, ): ... ``` The resulting form looks like this: improved form where the `mailing_lists` parameter renders as a multi-select dropdown that only allows the `Literal` values from our `MailingList` type. multi-select and any constraints you've defined on the `EmailContent` fields will be enforced before the run is submitted. early validation failure toast {/* pmd-metadata: notest */} ```python from typing import Literal from prefect import flow from pydantic import BaseModel, Field MailingList = Literal["newsletter", "customers", "beta-testers"] class EmailContent(BaseModel): subject: str = Field(max_length=30) body: str = Field(default=...) 
attachments: list[str] = Field(default_factory=list, max_length=5) @flow def send_marketing_email( mailing_list: list[MailingList], content: EmailContent, test_mode: bool = False, ): pass if __name__ == "__main__": send_marketing_email.serve() ``` ### Using `json_schema_extra` to order fields in the form By default, your flow parameters are rendered in the order defined by your `@flow` function signature. Within a given `BaseModel` subclass, parameters are rendered in the following order: * parameters with a `default` value are rendered first, alphabetically * parameters without a `default` value are rendered next, alphabetically You can control the order of the parameters within a `BaseModel` subclass by passing `json_schema_extra` to the `Field` constructor with a `position` key. Taking our `EmailContent` model from the previous example, let's enforce that `subject` should be displayed first, then `body`, then `attachments`. {/* pmd-metadata: notest */} ```python class EmailContent(BaseModel): subject: str = Field( max_length=30, description="The subject of the email", json_schema_extra=dict(position=0), ) body: str = Field(default=..., json_schema_extra=dict(position=1)) attachments: list[str] = Field( default_factory=list, max_length=5, json_schema_extra=dict(position=2), ) ``` The resulting form looks like this: custom form layout ## Recap We have now embedded the constraints on our parameters in the types that describe our flow signature, which means: * the UI can enforce these constraints before the run is submitted - **less wasted infra cycles** * workflow inputs are **self-documenting**, both in the UI and in the code defining your workflow * the types used in this signature can be **easily reused** for other similar workflows ## Debugging and related resources As you craft a schema for your flow signature, you may want to inspect the raw OpenAPI schema that `pydantic` generates, as it is what the Prefect UI uses to build the form. Call `model_json_schema()` on your `BaseModel` subclass to inspect the raw schema. {/* pmd-metadata: notest */} ```python from rich import print as pprint from pydantic import BaseModel, Field class EmailContent(BaseModel): subject: str = Field(max_length=30) body: str = Field(default=...) attachments: list[str] = Field(default_factory=list, max_length=5) pprint(EmailContent.model_json_schema()) ``` ``` { 'properties': { 'subject': {'maxLength': 30, 'title': 'Subject', 'type': 'string'}, 'body': {'title': 'Body', 'type': 'string'}, 'attachments': {'items': {'type': 'string'}, 'maxItems': 5, 'title': 'Attachments', 'type': 'array'} }, 'required': ['subject', 'body'], 'title': 'EmailContent', 'type': 'object' } ``` For more on constrained types and validation features available in `pydantic`, see their documentation on [models](https://docs.pydantic.dev/latest/concepts/models/) and [types](https://docs.pydantic.dev/latest/concepts/types/). # Advanced Source: https://docs-3.prefect.io/v3/advanced/index ## Sections Learn advanced workflow patterns and optimization techniques. Learn advanced patterns for working with events, triggers, and automations. Learn advanced infrastructure management and deployment strategies. Learn advanced strategies for managing your data platform. Learn how to scale self-hosted Prefect deployments for high availability. Learn how to extend Prefect with custom blocks and API integrations. 
# How to manage Prefect resources with Terraform and Helm Source: https://docs-3.prefect.io/v3/advanced/infrastructure-as-code Declaratively manage Prefect resources with additional tools
You can manage many Prefect resources with tools like [Terraform](https://www.terraform.io/) and [Helm](https://helm.sh/). These options are a viable alternative to Prefect's CLI and UI.

## Terraform

The [Terraform provider for Prefect](https://registry.terraform.io/providers/PrefectHQ/prefect/latest/docs/guides/getting-started) documents all Prefect resources that it supports. This Terraform provider is maintained by the Prefect team, and is undergoing active development to reach [parity with the Prefect API](https://github.com/PrefectHQ/terraform-provider-prefect/milestone/1). The Prefect team welcomes contributions, feature requests, and bug reports via our [issue tracker](https://github.com/PrefectHQ/terraform-provider-prefect/issues).

## Helm

Each [Prefect Helm chart](https://github.com/PrefectHQ/prefect-helm/tree/main/charts) subdirectory contains usage documentation. There are two main charts:

* The `prefect-server` chart is used to deploy a Prefect server. This is an alternative to using [Prefect Cloud](https://app.prefect.cloud/).
* The `prefect-worker` chart is used to deploy a [Prefect worker](/v3/deploy/infrastructure-concepts/workers).

Finally, there is a `prefect-prometheus-exporter` chart that is used to deploy a Prometheus exporter, exposing Prefect metrics for monitoring and alerting.

# How to write interactive workflows Source: https://docs-3.prefect.io/v3/advanced/interactive Flows can pause or suspend execution and automatically resume when they receive type-checked input in Prefect's UI.

Flows can also send and receive type-checked input at any time while running, without pausing or suspending. This guide explains how to use these features to build *interactive workflows*.

## Pause or suspend a flow until it receives input

You can pause or suspend a flow until it receives input from a user in Prefect's UI. This is useful when you need to ask for additional information or feedback before resuming a flow. These workflows are often called [human-in-the-loop](https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems) (HITL) systems.

**Human-in-the-loop interactivity**

Approval workflows that pause to ask a human to confirm whether a workflow should continue are very common in the business world. Certain types of [machine learning training](https://link.springer.com/article/10.1007/s10462-022-10246-w) and artificial intelligence workflows benefit from incorporating HITL design.

### Wait for input

To receive input while paused or suspended use the `wait_for_input` parameter in the `pause_flow_run` or `suspend_flow_run` functions. This parameter accepts one of the following:

* A built-in type like `int` or `str`, or a built-in collection like `List[int]`
* A `pydantic.BaseModel` subclass
* A subclass of `prefect.input.RunInput`

**When to use a `RunInput` or `BaseModel` instead of a built-in type**

There are a few reasons to use a `RunInput` or `BaseModel`. The first is that when you let Prefect automatically create one of these classes for your input type, the field that users see in Prefect's UI when they click "Resume" on a flow run is named `value` and has no help text to suggest what the field is. If you create a `RunInput` or `BaseModel`, you can change details like the field name, help text, and default value, and users see those reflected in the "Resume" form.
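As a quick sketch (the `ApprovalInput` class and its field below are hypothetical, not part of Prefect), a custom `RunInput` could rename the field and add help text and a default:

```python
from pydantic import Field

from prefect.input import RunInput


class ApprovalInput(RunInput):
    # The field name, description, and default all appear in the "Resume" form.
    approved: bool = Field(
        default=False,
        description="Approve this step to let the flow run continue?",
    )
```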
The simplest way to pause or suspend and wait for input is to pass a built-in type: ```python from prefect import flow from prefect.flow_runs import pause_flow_run from prefect.logging import get_run_logger @flow def greet_user(): logger = get_run_logger() user = pause_flow_run(wait_for_input=str) logger.info(f"Hello, {user}!") ``` In this example, the flow run pauses until a user clicks the Resume button in the Prefect UI, enters a name, and submits the form. **Types can you pass for `wait_for_input`** When you pass a built-in type such as `int` as an argument for the `wait_for_input` parameter to `pause_flow_run` or `suspend_flow_run`, Prefect automatically creates a Pydantic model containing one field annotated with the type you specified. This means you can use [any type annotation that Pydantic accepts for model fields](https://docs.pydantic.dev/1.10/usage/types/) with these functions. Instead of a built-in type, you can pass in a `pydantic.BaseModel` class. This is useful if you already have a `BaseModel` you want to use: ```python from prefect import flow from prefect.flow_runs import pause_flow_run from prefect.logging import get_run_logger from pydantic import BaseModel class User(BaseModel): name: str age: int @flow async def greet_user(): logger = get_run_logger() user = await pause_flow_run(wait_for_input=User) logger.info(f"Hello, {user.name}!") ``` **`BaseModel` classes are upgraded to `RunInput` classes automatically** When you pass a `pydantic.BaseModel` class as the `wait_for_input` argument to `pause_flow_run` or `suspend_flow_run`, Prefect automatically creates a `RunInput` class with the same behavior as your `BaseModel` and uses that instead. `RunInput` classes contain extra logic that allows flows to send and receive them at runtime. You shouldn't notice any difference. For advanced use cases such as overriding how Prefect stores flow run inputs, create a `RunInput` class: ```python from prefect import flow from prefect.logging import get_run_logger from prefect.input import RunInput class UserInput(RunInput): name: str age: int # Imagine overridden methods here. def override_something(self, *args, **kwargs): super().override_something(*args, **kwargs) @flow async def greet_user(): logger = get_run_logger() user = await pause_flow_run(wait_for_input=UserInput) logger.info(f"Hello, {user.name}!") ``` ### Provide initial data Set default values for fields in your model with the `with_initial_data` method. This is useful for providing default values for the fields in your own `RunInput` class. Expanding on the example above, you can make the `name` field default to "anonymous": ```python from prefect import flow from prefect.logging import get_run_logger from prefect.input import RunInput class UserInput(RunInput): name: str age: int @flow async def greet_user(): logger = get_run_logger() user_input = await pause_flow_run( wait_for_input=UserInput.with_initial_data(name="anonymous") ) if user_input.name == "anonymous": logger.info("Hello, stranger!") else: logger.info(f"Hello, {user_input.name}!") ``` When a user sees the form for this input, the name field contains "anonymous" as the default. ### Provide a description with runtime data You can provide a dynamic, Markdown description that appears in the Prefect UI when the flow run pauses. This feature enables context-specific prompts, enhancing clarity and user interaction. 
Building on the example above: ```python from datetime import datetime from prefect import flow from prefect.flow_runs import pause_flow_run from prefect.logging import get_run_logger from prefect.input import RunInput class UserInput(RunInput): name: str age: int @flow async def greet_user(): logger = get_run_logger() current_date = datetime.now().strftime("%B %d, %Y") description_md = f""" **Welcome to the User Greeting Flow!** Today's Date: {current_date} Please enter your details below: - **Name**: What should we call you? - **Age**: Just a number, nothing more. """ user_input = await pause_flow_run( wait_for_input=UserInput.with_initial_data( description=description_md, name="anonymous" ) ) if user_input.name == "anonymous": logger.info("Hello, stranger!") else: logger.info(f"Hello, {user_input.name}!") ``` When a user sees the form for this input, the given Markdown appears above the input fields. ### Handle custom validation Prefect uses the fields and type hints on your `RunInput` or `BaseModel` class to validate the general structure of input your flow receives. If you require more complex validation, use Pydantic [model\_validators](https://docs.pydantic.dev/latest/concepts/validators/#model-validators). **Calling custom validation runs after the flow resumes** Prefect transforms the type annotations in your `RunInput` or `BaseModel` class to a JSON schema and uses that schema in the UI for client-side validation. However, custom validation requires running *Python* logic defined in your `RunInput` class. Because of this, validation happens *after the flow resumes*, so you should handle it explicitly in your flow. Continue reading for an example best practice. The following is an example `RunInput` class that uses a custom `model_validator`: ```python from typing import Literal import pydantic from prefect.input import RunInput class ShirtOrder(RunInput): size: Literal["small", "medium", "large", "xlarge"] color: Literal["red", "green", "black"] @pydantic.model_validator(mode="after") def validate_age(self): if self.color == "green" and self.size == "small": raise ValueError( "Green is only in-stock for medium, large, and XL sizes." ) return self ``` In the example, we use Pydantic's `model_validator` decorator to define custom validation for our `ShirtOrder` class. You can use it in a flow like this: ```python from typing import Literal import pydantic from prefect import flow, pause_flow_run from prefect.input import RunInput class ShirtOrder(RunInput): size: Literal["small", "medium", "large", "xlarge"] color: Literal["red", "green", "black"] @pydantic.model_validator(mode="after") def validate_age(self): if self.color == "green" and self.size == "small": raise ValueError( "Green is only in-stock for medium, large, and XL sizes." ) return self @flow def get_shirt_order(): shirt_order = pause_flow_run(wait_for_input=ShirtOrder) ``` If a user chooses any size and color combination other than `small` and `green`, the flow run resumes successfully. However, if the user chooses size `small` and color `green`, the flow run will resume, and `pause_flow_run` raises a `ValidationError` exception. This causes the flow run to fail and log the error. 
To avoid a flow run failure, use a `while` loop and pause again if the `ValidationError` exception is raised:

```python
from typing import Literal

import pydantic
from prefect import flow
from prefect.flow_runs import pause_flow_run
from prefect.logging import get_run_logger
from prefect.input import RunInput


class ShirtOrder(RunInput):
    size: Literal["small", "medium", "large", "xlarge"]
    color: Literal["red", "green", "black"]

    @pydantic.model_validator(mode="after")
    def validate_age(self):
        if self.color == "green" and self.size == "small":
            raise ValueError(
                "Green is only in-stock for medium, large, and XL sizes."
            )

        return self


@flow
def get_shirt_order():
    logger = get_run_logger()
    shirt_order = None

    while shirt_order is None:
        try:
            shirt_order = pause_flow_run(wait_for_input=ShirtOrder)
        except pydantic.ValidationError as exc:
            logger.error(f"Invalid size and color combination: {exc}")

    logger.info(
        f"Shirt order: {shirt_order.size}, {shirt_order.color}"
    )
```

This code causes the flow run to continually pause until the user enters a valid size and color combination. As an additional step, you can use an [automation](/v3/automate/events/automations-triggers) to alert the user to the error.

## Send and receive input at runtime

Use the `send_input` and `receive_input` functions to send input to a flow or receive input from a flow at runtime. You don't need to pause or suspend the flow to send or receive input.

**Reasons to send or receive input without pausing or suspending**

You might want to send or receive input without pausing or suspending in scenarios where the flow run is designed to handle real-time data. For example, in a live monitoring system, you might need to update certain parameters based on the incoming data without interrupting the flow. Another example is having a long-running flow that continually responds to runtime input with low latency. For example, if you're building a chatbot, you could have a flow that starts a GPT Assistant and manages a conversation thread.

The most important parameter to the `send_input` and `receive_input` functions is `run_input`, which should be one of the following:

* A built-in type such as `int` or `str`
* A `pydantic.BaseModel` class
* A `prefect.input.RunInput` class

**When to use a `BaseModel` or `RunInput` instead of a built-in type**

Most built-in types and collections of built-in types should work with `send_input` and `receive_input`, but there is a caveat with nested collection types, such as lists of tuples. For example, `List[Tuple[str, float]]`. In this case, validation may happen after your flow receives the data, so calling `receive_input` may raise a `ValidationError`. You can plan to catch this exception, and consider placing the field in an explicit `BaseModel` or `RunInput` so your flow only receives exact type matches.

See examples below of `receive_input`, `send_input`, and the two functions working together.

### Receiving input

The following flow uses `receive_input` to continually receive names and print a personalized greeting for each name it receives:

```python
from prefect import flow
from prefect.input.run_input import receive_input


@flow
async def greeter_flow():
    async for name_input in receive_input(str, timeout=None):
        # Prints "Hello, andrew!" if another flow sent "andrew"
        print(f"Hello, {name_input}!")
```

When you pass a type such as `str` into `receive_input`, Prefect creates a `RunInput` class to manage your input automatically. When a flow sends input of this type, Prefect uses the `RunInput` class to validate the input.
If the validation succeeds, your flow receives the input in the type you specified. In this example, if the flow received a valid string as input, the variable `name_input` contains the string value. If, instead, you pass a `BaseModel`, Prefect upgrades your `BaseModel` to a `RunInput` class, and the variable your flow sees (in this case, `name_input`), is a `RunInput` instance that behaves like a `BaseModel`. If you pass in a `RunInput` class, no upgrade is needed and you'll get a `RunInput` instance. A simpler approach is to pass types such as `str` into `receive_input` . If you need access to the generated `RunInput` that contains the received value, pass `with_metadata=True` to `receive_input`: ```python from prefect import flow from prefect.input.run_input import receive_input @flow async def greeter_flow(): async for name_input in receive_input( str, timeout=None, with_metadata=True ): # Input will always be in the field "value" on this object. print(f"Hello, {name_input.value}!") ``` **When to use `with_metadata=True`** The primary uses of accessing the `RunInput` object for a receive input are to respond to the sender with the `RunInput.respond()` function, or to access the unique key for an input. Notice that the printing of `name_input.value`. When Prefect generates a `RunInput` for you from a built-in type, the `RunInput` class has a single field, `value`, that uses a type annotation matching the type you specified. So if you call `receive_input` like this: `receive_input(str, with_metadata=True)`, it's equivalent to manually creating the following `RunInput` class and `receive_input` call: ```python from prefect import flow from prefect.input.run_input import RunInput class GreeterInput(RunInput): value: str @flow async def greeter_flow(): async for name_input in receive_input(GreeterInput, timeout=None): print(f"Hello, {name_input.value}!") ``` **The type used in `receive_input` and `send_input` must match** For a flow to receive input, the sender must use the same type that the receiver is receiving. This means that if the receiver is receiving `GreeterInput`, the sender must send `GreeterInput`. If the receiver is receiving `GreeterInput` and the sender sends the `str` input that Prefect automatically upgrades to a `RunInput` class, the types won't match; which means the receiving flow run won't receive the input. However, the input will wait for if the flow ever calls `receive_input(str)`. ### Keep track of inputs you've already seen By default, each time you call `receive_input`, you get an iterator that iterates over all known inputs to a specific flow run, starting with the first received. The iterator keeps track of your current position as you iterate over it, or you can call `next()` to explicitly get the next input. If you're using the iterator in a loop, you should assign it to a variable: ```python from prefect import flow, get_client from prefect.deployments import run_deployment from prefect.input.run_input import receive_input, send_input EXIT_SIGNAL = "__EXIT__" @flow async def sender(): greeter_flow_run = await run_deployment( "greeter/send-receive", timeout=0, as_subflow=False ) client = get_client() # Assigning the `receive_input` iterator to a variable # outside of the the `while True` loop allows us to continue # iterating over inputs in subsequent passes through the # while loop without losing our position. receiver = receive_input( str, with_metadata=True, timeout=None, poll_interval=0.1 ) while True: name = input("What is your name? 
") if not name: continue if name == "q" or name == "quit": await send_input( EXIT_SIGNAL, flow_run_id=greeter_flow_run.id ) print("Goodbye!") break await send_input(name, flow_run_id=greeter_flow_run.id) # Saving the iterator outside of the while loop and # calling next() on each iteration of the loop ensures # that we're always getting the newest greeting. If we # had instead called `receive_input` here, we would # always get the _first_ greeting this flow received, # print it, and then ask for a new name. greeting = await receiver.next() print(greeting) ``` An iterator helps keep track of the inputs your flow has already received. If you want your flow to suspend and then resume later, save the keys of the inputs you've seen so the flow can read them back out when it resumes. Consider using a [block](/v3/develop/blocks/), such as a `JSONBlock`. The following flow receives input for 30 seconds then suspends itself, which exits the flow and tears down infrastructure: ```python from prefect import flow from prefect.logging import get_run_logger from prefect.flow_runs import suspend_flow_run from prefect.blocks.system import JSON from prefect.context import get_run_context from prefect.input.run_input import receive_input EXIT_SIGNAL = "__EXIT__" @flow async def greeter(): logger = get_run_logger() run_context = get_run_context() assert run_context.flow_run, "Could not see my flow run ID" block_name = f"{run_context.flow_run.id}-seen-ids" try: seen_keys_block = await JSON.load(block_name) except ValueError: seen_keys_block = JSON( value=[], ) try: async for name_input in receive_input( str, with_metadata=True, poll_interval=0.1, timeout=30, exclude_keys=seen_keys_block.value ): if name_input.value == EXIT_SIGNAL: print("Goodbye!") return await name_input.respond(f"Hello, {name_input.value}!") seen_keys_block.value.append(name_input.metadata.key) await seen_keys_block.save( name=block_name, overwrite=True ) except TimeoutError: logger.info("Suspending greeter after 30 seconds of idle time") await suspend_flow_run(timeout=10000) ``` As this flow processes name input, it adds the *key* of the flow run input to the `seen_keys_block`. When the flow later suspends and then resumes, it reads the keys it has already seen out of the JSON block and passes them as the `exlude_keys` parameter to `receive_input`. ### Respond to the input's sender When your flow receives input from another flow, Prefect knows the sending flow run ID, so the receiving flow can respond by calling the `respond` method on the `RunInput` instance the flow received. There are a couple of requirements: * Pass in a `BaseModel` or `RunInput`, or use `with_metadata=True`. * The flow you are responding to must receive the same type of input you send to see it. The `respond` method is equivalent to calling `send_input(..., flow_run_id=sending_flow_run.id)`. But with `respond`, your flow doesn't need to know the sending flow run's ID. Next, make the `greeter_flow` respond to name inputs instead of printing them: ```python from prefect import flow from prefect.input.run_input import receive_input @flow async def greeter(): async for name_input in receive_input( str, with_metadata=True, timeout=None ): await name_input.respond(f"Hello, {name_input.value}!") ``` However, this flow runs forever unless there's a signal that it should exit. 
Here's how to make it look for a special string:

```python
from prefect import flow
from prefect.input.run_input import receive_input


EXIT_SIGNAL = "__EXIT__"


@flow
async def greeter():
    async for name_input in receive_input(
        str,
        with_metadata=True,
        poll_interval=0.1,
        timeout=None
    ):
        if name_input.value == EXIT_SIGNAL:
            print("Goodbye!")
            return
        await name_input.respond(f"Hello, {name_input.value}!")
```

With a `greeter` flow in place, create the flow that sends `greeter` names.

### Send input

Send input to a flow with the `send_input` function. This works similarly to `receive_input` and, like that function, accepts the same `run_input` argument. This can be a built-in type such as `str`, or else a `BaseModel` or `RunInput` subclass.

**When to send input to a flow run**

Send input to a flow run as soon as you have the flow run's ID. The flow does not have to be receiving input for you to send input. If you send a flow input before it is receiving, it will see your input when it calls `receive_input` (as long as the types in the `send_input` and `receive_input` calls match).

Next, create a `sender` flow that starts a `greeter` flow run and then enters a loop, continuously getting input from the terminal and sending it to the greeter flow:

```python
from prefect import flow, get_client
from prefect.deployments import run_deployment
from prefect.input.run_input import receive_input, send_input

EXIT_SIGNAL = "__EXIT__"


@flow
async def sender():
    greeter_flow_run = await run_deployment(
        "greeter/send-receive", timeout=0, as_subflow=False
    )
    receiver = receive_input(str, timeout=None, poll_interval=0.1)
    client = get_client()

    while True:
        flow_run = await client.read_flow_run(greeter_flow_run.id)

        if not flow_run.state or not flow_run.state.is_running():
            continue

        name = input("What is your name? ")

        if not name:
            continue

        if name == "q" or name == "quit":
            await send_input(
                EXIT_SIGNAL,
                flow_run_id=greeter_flow_run.id
            )
            print("Goodbye!")
            break

        await send_input(name, flow_run_id=greeter_flow_run.id)
        greeting = await receiver.next()
        print(greeting)
```

First, `run_deployment` starts a `greeter` flow run. This requires a deployed flow running in a process. That process begins running `greeter` while `sender` continues to execute. Calling `run_deployment(..., timeout=0)` ensures that `sender` won't wait for the `greeter` flow run to complete, because it's running a loop and only exits when sending `EXIT_SIGNAL`.

Next, the iterator returned by `receive_input` is captured as `receiver`. This flow works by entering a loop. On each iteration of the loop, the flow asks for terminal input, sends that to the `greeter` flow, and then runs `receiver.next()` to wait until it receives the response from `greeter`.

Next, the terminal user who ran this flow is allowed to exit by entering the string `q` or `quit`. When that happens, the `greeter` flow is sent an exit signal to shut down, too.

Finally, the new name is sent to `greeter`. `greeter` sends back a greeting as a string. When you receive the greeting, print it and continue the loop that gets terminal input.
### A complete example For a complete example of using `send_input` and `receive_input`, here is what the `greeter` and `sender` flows look like together: ```python import asyncio import sys from prefect import flow, get_client from prefect.blocks.system import JSON from prefect.context import get_run_context from prefect.deployments import run_deployment from prefect.input.run_input import receive_input, send_input EXIT_SIGNAL = "__EXIT__" @flow async def greeter(): run_context = get_run_context() assert run_context.flow_run, "Could not see my flow run ID" block_name = f"{run_context.flow_run.id}-seen-ids" try: seen_keys_block = await JSON.load(block_name) except ValueError: seen_keys_block = JSON( value=[], ) async for name_input in receive_input( str, with_metadata=True, poll_interval=0.1, timeout=None ): if name_input.value == EXIT_SIGNAL: print("Goodbye!") return await name_input.respond(f"Hello, {name_input.value}!") seen_keys_block.value.append(name_input.metadata.key) await seen_keys_block.save( name=block_name, overwrite=True ) @flow async def sender(): greeter_flow_run = await run_deployment( "greeter/send-receive", timeout=0, as_subflow=False ) receiver = receive_input(str, timeout=None, poll_interval=0.1) client = get_client() while True: flow_run = await client.read_flow_run(greeter_flow_run.id) if not flow_run.state or not flow_run.state.is_running(): continue name = input("What is your name? ") if not name: continue if name == "q" or name == "quit": await send_input( EXIT_SIGNAL, flow_run_id=greeter_flow_run.id ) print("Goodbye!") break await send_input(name, flow_run_id=greeter_flow_run.id) greeting = await receiver.next() print(greeting) if __name__ == "__main__": if sys.argv[1] == "greeter": asyncio.run(greeter.serve(name="send-receive")) elif sys.argv[1] == "sender": asyncio.run(sender()) ``` To run the example, you need a Python environment with Prefect installed, pointed at either a Prefect Cloud account or a self-hosted Prefect server instance. With your environment set up, start a flow runner in one terminal with the following command: ```bash python my_file_name greeter ``` For example, with Prefect Cloud, you should see output like this: ```bash ______________________________________________________________________ | Your flow 'greeter' is being served and polling for scheduled runs | | | | To trigger a run for this flow, use the following command: | | | | $ prefect deployment run 'greeter/send-receive' | | | | You can also run your flow via the Prefect UI: | | https://app.prefect.cloud/account/...(a URL for your account) | | | ______________________________________________________________________ ``` Then start the greeter process in another terminal: ```bash python my_file_name sender ``` You should see output like this: ```bash 11:38:41.800 | INFO | prefect.engine - Created flow run 'gregarious-owl' for flow 'sender' 11:38:41.802 | INFO | Flow run 'gregarious-owl' - View at https://app.prefect.cloud/account/... What is your name? ``` Type a name and press the enter key to see a greeting to see sending and receiving in action: ```bash What is your name? andrew Hello, andrew! ``` # How to customize Prefect's logging configuration Source: https://docs-3.prefect.io/v3/advanced/logging-customization Prefect relies on [the standard Python implementation of logging configuration](https://docs.python.org/3/library/logging.config.html). 
The full specification of the default logging configuration for any version of Prefect can always be inspected [here](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/logging/logging.yml). The default logging level is `INFO`. ### Customize logging configuration Prefect provides several settings to configure the logging level and individual loggers. Any value in [Prefect's logging configuration file](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/logging/logging.yml) can be overridden through a Prefect setting of the form `PREFECT_LOGGING_[PATH]_[TO]_[KEY]=value` corresponding to the nested address of the field you are configuring. For example, to change the default logging level for flow runs but not task runs, update your profile with: ```bash prefect config set PREFECT_LOGGING_LOGGERS_PREFECT_FLOW_RUNS_LEVEL="ERROR" ``` or set the corresponding environment variable: ```bash export PREFECT_LOGGING_LOGGERS_PREFECT_FLOW_RUNS_LEVEL="ERROR" ``` You can also configure the "root" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output `WARNING` level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the `PREFECT_LOGGING_ROOT_LEVEL` environment variable. In some situations you may want to completely overhaul the Prefect logging configuration by providing your own `logging.yml` file. You can create your own version of `logging.yml` in one of two ways: 1. Create a `logging.yml` file in your `PREFECT_HOME` directory (default is `~/.prefect`). 2. Specify a custom path to your `logging.yml` file using the `PREFECT_LOGGING_SETTINGS_PATH` setting. If Prefect cannot find the `logging.yml` file at the specified location, it will fall back to using the default logging configuration. See the Python [Logging configuration](https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig) documentation for more information about the configuration options and syntax used by `logging.yml`. As with all Prefect settings, logging settings are loaded at runtime. This means that to customize Prefect logging in a remote environment requires setting the appropriate environment variables and/or profile in that environment. ### Formatters Prefect log formatters specify the format of log messages. The default formatting for task and flow run records is `"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s"` for tasks and similarly `"%(asctime)s.%(msecs)03d | %(levelname)-7s | Flow run %(flow_run_name)r - %(message)s"` for flows. The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables. The flow run logger has the following variables available for formatting: * `flow_run_name` * `flow_run_id` * `flow_name` The task run logger has the following variables available for formatting: * `task_run_id` * `flow_run_id` * `task_run_name` * `task_name` * `flow_run_name` * `flow_name` You can specify custom formatting by setting the relevant environment variable or by modifying the formatter in a custom `logging.yml` file as described earlier. 
For example, the following changes the formatting for the flow runs formatter:

```bash
PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT="%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s"
```

The resulting messages, using the flow run ID instead of name, look like this:

```bash
10:40:01.211 | INFO | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run 'othertask-1c085beb-3' for task 'othertask'
```

### Styles

By default, Prefect highlights specific keywords in the console logs with a variety of colors. You can toggle highlighting on/off with the `PREFECT_LOGGING_COLORS` setting:

```bash
PREFECT_LOGGING_COLORS=False
```

You can also change what gets highlighted and even adjust the colors by updating the styles; see the `styles` section of [the Prefect logging configuration file](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/logging/logging.yml) for available keys. Note that these style settings only impact the display within a terminal, not the Prefect UI.

You can even build your own handler with a [custom highlighter](https://rich.readthedocs.io/en/stable/highlighting.html#custom-highlighters). For example, to additionally highlight emails:

1. Copy and paste the following code into `my_package_or_module.py` (rename as needed) in the same directory as the flow run script; or ideally as part of a Python package so it's available in `site-packages` and accessible anywhere within your environment.

```python
import logging
from typing import Dict, Union

from rich.highlighter import Highlighter

from prefect.logging.handlers import PrefectConsoleHandler
from prefect.logging.highlighters import PrefectConsoleHighlighter

class CustomConsoleHighlighter(PrefectConsoleHighlighter):
    base_style = "log."
    highlights = PrefectConsoleHighlighter.highlights + [
        # ?P<email> names this expression `email`
        r"(?P<email>[\w-]+@([\w-]+\.)+[\w-]+)",
    ]

class CustomConsoleHandler(PrefectConsoleHandler):
    def __init__(
        self,
        highlighter: Highlighter = CustomConsoleHighlighter,
        styles: Dict[str, str] = None,
        level: Union[int, str] = logging.NOTSET,
    ):
        super().__init__(highlighter=highlighter, styles=styles, level=level)
```

2. Update `~/.prefect/logging.yml` to use `my_package_or_module.CustomConsoleHandler` and additionally reference the `base_style` and named expression: `log.email`.

```yaml
    console_flow_runs:
        level: 0
        class: my_package_or_module.CustomConsoleHandler
        formatter: flow_runs
        styles:
            log.email: magenta
            # other styles can be appended here, e.g.
            # log.completed_state: green
```

3. On your next flow run, text that looks like an email is highlighted. For example, `my@email.com` is colored in magenta below:

```python
from prefect import flow
from prefect.logging import get_run_logger

@flow
def log_email_flow():
    logger = get_run_logger()
    logger.info("my@email.com")

log_email_flow()
```

### Apply markup in logs

To use [Rich's markup](https://rich.readthedocs.io/en/stable/markup.html#console-markup) in Prefect logs, first configure `PREFECT_LOGGING_MARKUP`:

```bash
PREFECT_LOGGING_MARKUP=True
```

The following will highlight "fancy" in red:

```python
from prefect import flow
from prefect.logging import get_run_logger

@flow
def my_flow():
    logger = get_run_logger()
    logger.info("This is [bold red]fancy[/]")

my_flow()
```

**Inaccurate logs could result**

If enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output. For example, `DROP TABLE [dbo].[SomeTable];` outputs `DROP TABLE .[SomeTable];`.
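If you want to keep markup enabled but log strings that contain literal square brackets, one option (a sketch, not an official Prefect recommendation; the flow below is hypothetical) is to escape those strings with Rich's `escape` helper before logging:

```python
from rich.markup import escape

from prefect import flow
from prefect.logging import get_run_logger


@flow
def log_sql_flow():
    logger = get_run_logger()
    # escape() backslash-escapes "[" so Rich renders the brackets literally
    # instead of treating them as markup tags.
    logger.info(escape("DROP TABLE [dbo].[SomeTable];"))


log_sql_flow()
```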
## Include logs from other libraries By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the `PREFECT_LOGGING_EXTRA_LOGGERS` setting. To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want Prefect to capture Dask and SciPy logging statements with your flow and task run logs, use: `PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy` Configure this setting as an environment variable or in a profile. See [Settings](/v3/develop/settings-and-profiles/) for more details about how to use settings. # How to persist workflow results Source: https://docs-3.prefect.io/v3/advanced/results Results represent the data returned by a flow or a task and enable features such as caching. Results are the bedrock of many Prefect features - most notably [transactions](/v3/develop/transactions) and [caching](/v3/concepts/caching) - and are foundational to the resilient execution paradigm that Prefect enables. Any return value from a task or a flow is a result. By default these results are not persisted and no reference to them is maintained in the API. Enabling result persistence allows you to fully benefit from Prefect's orchestration features. **Turn on persistence globally by default** The simplest way to turn on result persistence globally is through the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting: ```bash prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true ``` See [settings](/v3/develop/settings-and-profiles) for more information on how settings are managed. ## Configuring result persistence There are four categories of configuration for result persistence: * [whether to persist results at all](#enabling-result-persistence): this is configured through various keyword arguments, the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting, and the `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` setting for tasks specifically. * [what filesystem to persist results to](#result-storage): this is configured through the `result_storage` keyword and the `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK` setting. * [how to serialize and deserialize results](#result-serialization): this is configured through the `result_serializer` keyword and the `PREFECT_RESULTS_DEFAULT_SERIALIZER` setting. * [what filename to use](#result-filenames): this is configured through one of `result_storage_key`, `cache_policy`, or `cache_key_fn`. ### Default persistence configuration Once result persistence is enabled - whether through the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting or through any of the mechanisms [described below](#enabling-result-persistence) - Prefect's default result storage configuration is activated. If you enable result persistence and don't specify a filesystem block, your results will be stored locally. By default, results are persisted to `~/.prefect/storage/`. You can configure the location of these results through the `PREFECT_LOCAL_STORAGE_PATH` setting. ```bash prefect config set PREFECT_LOCAL_STORAGE_PATH='~/.my-results/' ``` ### Enabling result persistence In addition to the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` and `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` settings, result persistence can also be enabled or disabled on both individual flows and individual tasks. 
Specifying a non-null value for any of the following keywords on the task decorator will enable result persistence for that task: * `persist_result`: a boolean that allows you to explicitly enable or disable result persistence. * `result_storage`: accepts either a string reference to a storage block or a storage block class that specifies where results should be stored. * `result_storage_key`: a string that specifies the filename of the result within the task's result storage. * `result_serializer`: a string or serializer that configures how the data should be serialized and deserialized. * `cache_policy`: a [cache policy](/v3/concepts/caching#cache-policies) specifying the behavior of the task's cache. * `cache_key_fn`: [a function](/v3/concepts/caching#cache-key-functions) that configures a custom cache policy. Similarly, setting `persist_result=True`, `result_storage`, or `result_serializer` on a flow will enable persistence for that flow. **Enabling persistence on a flow enables persistence by default for its tasks** Enabling result persistence on a flow through any of the above keywords will also enable it for all tasks called within that flow by default. Any settings *explicitly* set on a task take precedence over the flow settings. Additionally, the `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` environment variable can be used to globally control the default persistence behavior for tasks, overriding the default behavior set by a parent flow or task. ### Result storage You can configure the system of record for your results through the `result_storage` keyword argument. This keyword accepts an instantiated [filesystem block](/v3/develop/blocks/), or a block slug. Find your blocks' slugs with `prefect block ls`. Note that if you want your tasks to share a common cache, your result storage should be accessible by the infrastructure in which those tasks run. [Integrations](/integrations/integrations) have cloud-specific storage blocks. For example, a common distributed filesystem for result storage is AWS S3. Additionally, you can control the default persistence behavior for task results using the `default_persist_result` setting. This setting allows you to specify whether results should be persisted by default for all tasks. You can set this to `True` to enable persistence by default, or `False` to disable it. This setting can be overridden at the individual task or flow level. ```python from prefect import flow, task from prefect_aws.s3 import S3Bucket test_block = S3Bucket(bucket_name='test-bucket') test_block.save('test-block', overwrite=True) # define three tasks # with different result persistence configuration @task def my_task(): return 42 unpersisted_task = my_task.with_options(persist_result=False) other_storage_task = my_task.with_options(result_storage=test_block) @flow(result_storage='s3-bucket/my-dev-block') def my_flow(): # this task will use the flow's result storage my_task() # this task will not persist results at all unpersisted_task() # this task will persist results to its own bucket using a different S3 block other_storage_task() ``` #### Specifying a default filesystem Alternatively, you can specify a different filesystem through the `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK` setting. Specifying a block document slug here will enable result persistence using that filesystem as the default. 
For example: ```bash prefect config set PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='s3-bucket/my-prod-block' ``` Note that any explicit configuration of `result_storage` on either a flow or task will override this default. #### Result filenames By default, the filename of a task's result is computed based on the task's cache policy, which is typically a hash of various pieces of data and metadata. For flows, the filename is a random UUID. You can configure the filename of the result file within result storage using either: * `result_storage_key`: a templated string that can use any of the fields within `prefect.runtime` and the task's individual parameter values. These templated values will be populated at runtime. * `cache_key_fn`: a function that accepts the task run context and its runtime parameters and returns a string. See [task caching documentation](/v3/concepts/caching#cache-key-functions) for more information. If both `result_storage_key` and `cache_key_fn` are provided, only the `result_storage_key` will be used. The following example writes three different result files based on the `name` parameter passed to the task: ```python from prefect import flow, task @task(result_storage_key="hello-{parameters[name]}.pickle") def hello_world(name: str = "world"): return f"hello {name}" @flow def my_flow(): hello_world() hello_world(name="foo") hello_world(name="bar") ``` If a result exists at a given storage key in the storage location, the task will load it without running. To learn more about caching mechanics in Prefect, see the [caching documentation](/v3/concepts/caching). ### Result serialization You can configure how results are serialized to storage using result serializers. These can be set using the `result_serializer` keyword on both tasks and flows. A default value can be set using the `PREFECT_RESULTS_DEFAULT_SERIALIZER` setting, which defaults to `pickle`. Current built-in options include `"pickle"`, `"json"`, `"compressed/pickle"` and `"compressed/json"`. The `result_serializer` accepts both a string identifier or an instance of a `ResultSerializer` class, allowing you to customize serialization behavior. ## Caching results in memory When running workflows, Prefect keeps the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run. Flows and tasks both include an option to drop the result from memory once the result has been committed with `cache_result_in_memory`: ```python from prefect import flow, task @flow(cache_result_in_memory=False) def foo(): return "pretend this is large data" @task(cache_result_in_memory=False) def bar(): return "pretend this is biiiig data" ``` # How to secure a self-hosted Prefect server Source: https://docs-3.prefect.io/v3/advanced/security-settings Learn about the Prefect settings that add security to your self-hosted server. Prefect provides a number of [settings](/v3/concepts/settings-and-profiles) that help secure a self-hosted Prefect server. ## Basic Authentication Self-hosted Prefect servers can be equipped with Basic Authentication through two settings: * **`server.api.auth_string="admin:pass"`**: this setting should be set with an administrator / password combination, separated by a colon, on any process that hosts the Prefect webserver (for example `prefect server start`). 
* **`api.auth_string="admin:pass"`**: this setting should be set with the same administrator / password combination as the server on any client process that needs to communicate with the Prefect API (for example, any process that runs a workflow). With these settings, the UI will prompt for the full authentication string `"admin:pass"` (no quotes) upon first load. It is recommended to store this information in a secure way, such as a Kubernetes Secret or in a private `.env` file. **Note on API keys** API keys are only used for [authenticating with Prefect Cloud](/v3/how-to-guides/cloud/manage-users/api-keys). If both `PREFECT_API_KEY` and `PREFECT_API_AUTH_STRING` are set on the client, `PREFECT_API_KEY` will take precedence. If you plan to use a self-hosted Prefect server, make sure `PREFECT_API_KEY` is not set in your active profile or as an environment variable, otherwise authentication will fail (`HTTP 401 Unauthorized`). Example `.env` file: ```bash .env PREFECT_SERVER_API_AUTH_STRING="admin:pass" PREFECT_API_AUTH_STRING="admin:pass" ``` ## Host the UI behind a reverse proxy When using a reverse proxy (such as [Nginx](https://nginx.org) or [Traefik](https://traefik.io)) to proxy traffic to a hosted Prefect UI instance, you must also configure the self-hosted Prefect server instance to connect to the API. The [`ui.api_url`](/v3/develop/settings-ref/#api_url) setting should be set to the external proxy URL. For example, if your external URL is `https://prefect-server.example.com` then you can configure a `prefect.toml` file for your server like this: ```toml prefect.toml [ui] api_url = "https://prefect-server.example.com/api" ``` If you do not set `ui.api_url`, then `api.url` will be used as a fallback. ## CSRF protection settings If using self-hosted Prefect server, you can configure CSRF protection settings. * [`server.api.csrf_protection_enabled`](/v3/develop/settings-ref/#csrf-protection-enabled): activates CSRF protection on the server, requiring valid CSRF tokens for applicable requests. Recommended for production to prevent CSRF attacks. Defaults to `False`. * [`server.api.csrf_token_expiration`](/v3/develop/settings-ref/#csrf-token-expiration): sets the expiration duration for server-issued CSRF tokens, influencing how often tokens need to be refreshed. The default is 1 hour. * [`client.csrf_support_enabled`](/v3/develop/settings-ref/#csrf-support-enabled): enables or disables CSRF token handling in the Prefect client. When enabled, the client manages CSRF tokens for state-changing API requests. Defaults to `True`. By default clients expect that CSRF protection is enabled on the server. If you are running a server without CSRF protection, you can disable CSRF support in the client. ## CORS settings If using self-hosted Prefect server, you can configure CORS settings to control which origins are allowed to make cross-origin requests to your server. * [`server.api.cors_allowed_origins`](/v3/develop/settings-ref/#cors-allowed-origins): a list of origins that are allowed to make cross-origin requests. * [`server.api.cors_allowed_methods`](/v3/develop/settings-ref/#cors-allowed-methods): a list of HTTP methods that are allowed to be used during cross-origin requests. * [`server.api.cors_allowed_headers`](/v3/develop/settings-ref/#cors-allowed-headers): a list of headers that are allowed to be used during cross-origin requests. 
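For example, assuming the standard mapping from setting paths to environment variables (`PREFECT_` followed by the upper-cased, underscore-separated path), you could restrict cross-origin requests to a single hypothetical origin like this; the exact list syntax accepted for these values may vary by Prefect version:

```bash
# Hypothetical values - replace with the origins, methods, and headers you actually need
prefect config set PREFECT_SERVER_API_CORS_ALLOWED_ORIGINS="https://dashboards.example.com"
prefect config set PREFECT_SERVER_API_CORS_ALLOWED_METHODS="GET,POST"
prefect config set PREFECT_SERVER_API_CORS_ALLOWED_HEADERS="*"
```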
## Custom client headers The [`client.custom_headers`](/v3/develop/settings-ref/#custom-headers) setting allows you to configure custom HTTP headers that are included with every API request. This is particularly useful for authentication with proxies, CDNs, or security services that protect your Prefect server. ```bash Environment variable export PREFECT_CLIENT_CUSTOM_HEADERS='{ "Proxy-Authorization": "Bearer your-proxy-token", "X-Corporate-ID": "your-corp-identifier" }' ``` ```bash CLI prefect config set PREFECT_CLIENT_CUSTOM_HEADERS='{"Proxy-Authorization": "Bearer your-proxy-token", "X-Corporate-ID": "your-corp-ID"}' ``` ```toml prefect.toml [client] custom_headers = '''{ "Proxy-Authorization": "Bearer your-proxy-token", "X-Corporate-ID": "your-corp-identifier" }''' ``` Certain headers are protected and cannot be overridden for security reasons: * **`User-Agent`**: Managed by Prefect to identify client version and capabilities * **`Prefect-Csrf-Token`**: Used for CSRF protection when enabled * **`Prefect-Csrf-Client`**: Used for CSRF client identification If you attempt to override these protected headers, Prefect will log a warning and ignore the custom value to maintain security. **Store credentials securely** When using custom headers for authentication, ensure that sensitive values like API keys and tokens are stored securely using environment variables, secrets management systems, or encrypted configuration files. Avoid hardcoding credentials in your source code. # How to scale self-hosted Prefect Source: https://docs-3.prefect.io/v3/advanced/self-hosted Learn how to run multiple Prefect server instances for high availability and load distribution Running multiple Prefect server instances enables high availability and distributes load across your infrastructure. This guide covers configuration and deployment patterns for scaling self-hosted Prefect. ## Requirements Multi-server deployments require: * PostgreSQL database version 14.9 or higher (SQLite does not support multi-server synchronization) * Redis for event messaging * Load balancer for API traffic distribution ## Architecture A scaled Prefect deployment typically includes: * **Multiple API server instances** - Handle UI and API requests * **Background services** - Runs the scheduler, automation triggers, and other loop services * **[PostgreSQL](https://www.postgresql.org/) database** - Stores all persistent data and synchronizes state across servers * **[Redis](https://redis.io/)** - Distributes events between services * **Load balancer** - Routes traffic to healthy API instances (e.g. [NGINX](https://www.f5.com/go/product/welcome-to-nginx) or [Traefik](https://doc.traefik.io/traefik/)) ```mermaid %%{ init: { 'theme': 'neutral', 'flowchart': { 'curve' : 'linear', 'rankSpacing': 120, 'nodeSpacing': 80 } } }%% flowchart TB %% Style definitions classDef userClass fill:#ede7f6db,stroke:#4527a0db,stroke-width:2px classDef lbClass fill:#e3f2fddb,stroke:#1565c0db,stroke-width:2px classDef apiClass fill:#1860f2db,stroke:#1860f2db,stroke-width:2px classDef bgClass fill:#7c3aeddb,stroke:#7c3aeddb,stroke-width:2px classDef dataClass fill:#16a34adb,stroke:#16a34adb,stroke-width:2px classDef workerClass fill:#f59e0bdb,stroke:#f59e0bdb,stroke-width:2px %% Nodes subgraph clients[Client Side] direction TB Users[Users / UI / API Clients]:::userClass Workers[Workers poll any available API server
Process / K8s / Docker / Serverless]:::workerClass end LB[Load Balancer
NGINX / HAProxy / ALB
Port 4200]:::lbClass subgraph servers[Prefect Server Components] direction TB subgraph api[API Servers - Horizontal Scaling] direction LR API1[API Server 1
--no-services]:::apiClass API2[API Server 2
--no-services]:::apiClass API3[API Server N...
--no-services]:::apiClass end BG[Background Services
prefect server services start

β€’ Event Processing
β€’ Automation Triggers
β€’ Schedule Management]:::bgClass end subgraph data[Data Layer] direction LR PG[(PostgreSQL
β€’ Flow/Task State
β€’ Configuration
β€’ History)]:::dataClass Redis[(Redis
β€’ Events
β€’ Automations
β€’ Real-time Updates)]:::dataClass end %% Connections Users --> |HTTPS| LB LB --> |Round Robin| api api --> |Read/Write| PG api --> |Publish| Redis BG --> |Read/Write| PG BG --> |Subscribe| Redis Workers -.-> |Poll Work| api ``` ## Configuration ### Database setup Configure PostgreSQL as your database backend: ```bash export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://user:password@host:5432/prefect" ``` PostgreSQL version 14.9 or higher is required for multi-server deployments. SQLite does not support the features needed for state synchronization across multiple servers. ### Redis setup Configure Redis as your server's message broker, cache, and lease storage: ```bash export PREFECT_MESSAGING_BROKER="prefect_redis.messaging" export PREFECT_MESSAGING_CACHE="prefect_redis.messaging" export PREFECT_SERVER_EVENTS_CAUSAL_ORDERING="prefect_redis.ordering" export PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE="prefect_redis.lease_storage" export PREFECT_REDIS_MESSAGING_HOST="redis-host" export PREFECT_REDIS_MESSAGING_PORT="6379" export PREFECT_REDIS_MESSAGING_DB="0" ``` If your Redis instance requires authentication, you may configure a username and password: ```bash export PREFECT_REDIS_MESSAGING_USERNAME="marvin" export PREFECT_REDIS_MESSAGING_PASSWORD="dontpanic!" ``` For Redis instances that require an encrypted connection, you can enable SSL/TLS: ```bash export PREFECT_REDIS_MESSAGING_SSL="true" ``` ### Service separation For optimal performance, run API servers and background services separately: **API servers** (multiple instances): ```bash prefect server start --host 0.0.0.0 --port 4200 --no-services ``` **Background services**: ```bash prefect server services start ``` ### Database migrations Disable automatic migrations in multi-server deployments: ```bash export PREFECT_API_DATABASE_MIGRATE_ON_START="false" ``` Run migrations separately before deployment: ```bash prefect server database upgrade -y ``` ### Load balancer configuration Configure health checks for your load balancer: * **Health endpoint**: `/api/health` * **Expected response**: HTTP 200 with JSON `{"status": "healthy"}` * **Check interval**: 5-10 seconds Example NGINX configuration: ```nginx upstream prefect_api { least_conn; server prefect-api-1:4200 max_fails=3 fail_timeout=30s; server prefect-api-2:4200 max_fails=3 fail_timeout=30s; server prefect-api-3:4200 max_fails=3 fail_timeout=30s; } server { listen 4200; location /api/health { proxy_pass http://prefect_api; proxy_connect_timeout 1s; proxy_read_timeout 1s; } location / { proxy_pass http://prefect_api; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } ``` ### Reverse proxy configuration When hosting Prefect behind a reverse proxy, ensure proper header forwarding: ```nginx server { listen 80; server_name prefect.example.com; location / { return 301 https://$host$request_uri; } } server { listen 443 ssl http2; server_name prefect.example.com; ssl_certificate /path/to/ssl/certificate.pem; ssl_certificate_key /path/to/ssl/certificate_key.pem; location /api { proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; # WebSocket support proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; # Authentication headers proxy_set_header Authorization $http_authorization; proxy_pass_header Authorization; proxy_pass http://prefect_api; } location 
/ { proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_pass http://prefect_api; } } ``` #### UI proxy settings When self-hosting the UI behind a proxy: * `PREFECT_UI_API_URL`: Connection URL from UI to API * `PREFECT_UI_SERVE_BASE`: Base URL path to serve the UI * `PREFECT_UI_URL`: URL for clients to access the UI #### SSL certificates For self-signed certificates: 1. Add certificate to system bundle and set: ```bash export SSL_CERT_FILE=/path/to/certificate.pem ``` 2. Or disable verification (testing only): ```bash export PREFECT_API_TLS_INSECURE_SKIP_VERIFY=True ``` #### Environment proxy settings Prefect respects standard proxy environment variables: ```bash export HTTPS_PROXY=http://proxy.example.com:8080 export HTTP_PROXY=http://proxy.example.com:8080 export NO_PROXY=localhost,127.0.0.1,.internal ``` ## Deployment examples ### Docker Compose ```yaml services: postgres: image: postgres:15 environment: POSTGRES_USER: prefect POSTGRES_PASSWORD: prefect POSTGRES_DB: prefect volumes: - postgres_data:/var/lib/postgresql/data healthcheck: test: pg_isready -h localhost -U $$POSTGRES_USER interval: 2s timeout: 5s retries: 15 redis: image: redis:7 migrate: image: prefecthq/prefect:3-latest depends_on: postgres: condition: service_healthy command: prefect server database upgrade -y environment: PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect prefect-api: image: prefecthq/prefect:3-latest depends_on: migrate: condition: service_completed_successfully postgres: condition: service_healthy redis: condition: service_started deploy: replicas: 3 command: prefect server start --host 0.0.0.0 --no-services environment: PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect PREFECT_API_DATABASE_MIGRATE_ON_START: "false" PREFECT_MESSAGING_BROKER: prefect_redis.messaging PREFECT_MESSAGING_CACHE: prefect_redis.messaging PREFECT_SERVER_EVENTS_CAUSAL_ORDERING: prefect_redis.ordering PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE: prefect_redis.lease_storage PREFECT_REDIS_MESSAGING_HOST: redis PREFECT_REDIS_MESSAGING_PORT: "6379" ports: - "4200-4202:4200" # Maps to different ports for each replica prefect-background: image: prefecthq/prefect:3-latest depends_on: migrate: condition: service_completed_successfully postgres: condition: service_healthy redis: condition: service_started command: prefect server services start environment: PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect PREFECT_API_DATABASE_MIGRATE_ON_START: "false" PREFECT_MESSAGING_BROKER: prefect_redis.messaging PREFECT_MESSAGING_CACHE: prefect_redis.messaging PREFECT_SERVER_EVENTS_CAUSAL_ORDERING: prefect_redis.ordering PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE: prefect_redis.lease_storage PREFECT_REDIS_MESSAGING_HOST: redis PREFECT_REDIS_MESSAGING_PORT: "6379" volumes: postgres_data: ``` Deploying Prefect self-hosted somehow else? Consider [opening a PR](/contribute/docs-contribute) to add your deployment pattern to this guide. ## Operations ### Migration considerations #### Handling large databases When running migrations on large database instances (especially where tables like `events`, `flow_runs`, or `task_runs` can reach millions of rows), the default database timeout of 10 seconds may not be sufficient for creating indexes. 
If you encounter a `TimeoutError` during migrations, increase the database timeout: ```bash # Set timeout to 10 minutes (adjust based on your database size) export PREFECT_API_DATABASE_TIMEOUT=600 # Then run the migration prefect server database upgrade -y ``` For Docker deployments: ```bash docker run -e PREFECT_API_DATABASE_TIMEOUT=600 prefecthq/prefect:latest prefect server database upgrade -y ``` Index creation time scales with table size. A database with millions of events may require 30+ minutes for some migrations. If a migration fails due to timeout, you may need to manually clean up any partially created indexes before retrying. #### Recovering from failed migrations If a migration times out while creating indexes, you may need to manually complete it. For example, if migration `7a73514ca2d6` fails: 1. First, check which indexes were partially created: ```sql SELECT indexname FROM pg_indexes WHERE tablename = 'events' AND indexname LIKE 'ix_events%'; ``` 2. Manually create the missing indexes using `CONCURRENTLY` to avoid blocking: ```sql -- Drop any partial indexes from the failed migration DROP INDEX IF EXISTS ix_events__event_related_occurred; DROP INDEX IF EXISTS ix_events__related_resource_ids; -- Create the new indexes CREATE INDEX CONCURRENTLY ix_events__related_gin ON events USING gin(related); CREATE INDEX CONCURRENTLY ix_events__event_occurred ON events (event, occurred); CREATE INDEX CONCURRENTLY ix_events__related_resource_ids_gin ON events USING gin(related_resource_ids); ``` 3. Mark the migration as complete: ```sql UPDATE alembic_version SET version_num = '7a73514ca2d6'; ``` Only use manual recovery if increasing the timeout and retrying the migration doesn't work. Always verify the correct migration version and index definitions from the migration files. ### Monitoring Monitor your multi-server deployment: * **Database connections**: Watch for connection pool exhaustion * **Redis memory**: Ensure adequate memory for message queues * **API response times**: Track latency across different endpoints * **Background service lag**: Monitor time between event creation and processing ### Best practices 1. **Start with 2-3 API instances** and scale based on load 2. **Use connection pooling** to manage database connections efficiently 3. **Monitor extensively** before scaling further (e.g. [Prometheus](https://prometheus.io/) + [Grafana](https://grafana.com/) or [Logfire](https://logfire.pydantic.dev/docs/why/)) 4. **Test failover scenarios** regularly ## Further reading * [Server concepts](/v3/concepts/server) * Deploy [Helm charts](/v3/advanced/server-helm) for Kubernetes # How to self-host the Prefect Server with Helm Source: https://docs-3.prefect.io/v3/advanced/server-helm Self-host your own Prefect server and connect a Prefect worker to it with Helm. You can use Helm to manage a [self-hosted Prefect server](https://github.com/PrefectHQ/prefect-helm/tree/main/charts/prefect-server) and a [worker](https://github.com/PrefectHQ/prefect-helm/tree/main/charts/prefect-worker). ## Prerequisites * A Kubernetes cluster * Install the [Helm CLI](https://helm.sh/docs/intro/install/) ## Deploy a server with Helm Configuring ingress or publicly exposing Prefect from the cluster is business dependent and not covered in this tutorial. For details on Ingress configuration, consult the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/). 
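Before installing anything, it can be worth confirming that both prerequisites are reachable from your shell (output will vary by cluster and Helm version):

```bash
kubectl cluster-info   # confirms kubectl can reach your cluster
helm version           # confirms the Helm CLI is installed
```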
### Add the Prefect Helm repository: ```bash helm repo add prefect https://prefecthq.github.io/prefect-helm helm repo update ``` ### Create a namespace Create a new namespace for this tutorial (all commands will use this namespace): ```bash kubectl create namespace prefect kubectl config set-context --current --namespace=prefect ``` ### Deploy the server For a simple deployment using only the default values defined in the chart: ```bash helm install prefect-server prefect/prefect-server --namespace prefect ``` For a customized deployment, first create a `server-values.yaml` file for the server (see [values.yaml template](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-server/values.yaml)): ```yaml server: basicAuth: enabled: true existingSecret: server-auth-secret ``` #### Create a secret for the API basic authentication username and password: ```bash kubectl create secret generic server-auth-secret \ --namespace prefect --from-literal auth-string='admin:password123' ``` #### Install the server: ```bash helm install prefect-server prefect/prefect-server \ --namespace prefect \ -f server-values.yaml ``` Expected output: ``` NAME: prefect-server LAST DEPLOYED: Tue Mar 4 09:08:07 2025 NAMESPACE: prefect STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Run the following command to port-forward the UI to your localhost: $ kubectl --namespace prefect port-forward svc/prefect-server 4200:4200 Visit http://localhost:4200 to use Prefect! ``` ### Access the Prefect UI: ```bash kubectl --namespace prefect port-forward svc/prefect-server 4200:4200 ``` Open `localhost:4200` in your browser. If using basic authentication, sign in with `admin:password123`. ## Deploy a worker with Helm To connect a worker to your self-hosted Prefect server in the same cluster, create a `worker-values.yaml` file for the worker (see [values.yaml template](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/values.yaml)). If the server does not use basic authentication, a minimal configuration looks like this: ```yaml worker: apiConfig: selfHostedServer config: workPool: kube-test selfHostedServerApiConfig: apiUrl: http://prefect-server.prefect.svc.cluster.local:4200/api ``` #### Install the worker: ```bash helm install prefect-worker prefect/prefect-worker \ --namespace prefect \ -f worker-values.yaml ``` If the server was deployed with basic authentication (as in the example above), instead create a `worker-values.yaml` file that also references an auth secret: ```yaml worker: apiConfig: selfHostedServer config: workPool: kube-test selfHostedServerApiConfig: apiUrl: http://prefect-server.prefect.svc.cluster.local:4200/api basicAuth: enabled: true existingSecret: worker-auth-secret ``` #### Create a secret for the API basic authentication username and password: ```bash kubectl create secret generic worker-auth-secret \ --namespace prefect --from-literal auth-string='admin:password123' ``` #### Install the worker: ```bash helm install prefect-worker prefect/prefect-worker \ --namespace prefect \ -f worker-values.yaml ``` Expected output: ``` Release "prefect-worker" has been installed. Happy Helming! 
NAME: prefect-worker LAST DEPLOYED: Tue Mar 4 11:26:21 2025 NAMESPACE: prefect STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: ``` ## Cleanup To uninstall the self-hosted Prefect server and Prefect worker: ```bash helm uninstall prefect-worker helm uninstall prefect-server ``` ## Troubleshooting If you see this error: ``` Error from server (BadRequest): container "prefect-server" in pod "prefect-server-7c87b7f7cf-sgqj2" is waiting to start: CreateContainerConfigError ``` Run `kubectl events` and confirm that the `authString` is correct. If you see this error: ``` prefect.exceptions.PrefectHTTPStatusError: Client error '401 Unauthorized' for url 'http://prefect-server.prefect.svc.cluster.local:4200/api/work_pools/kube-test' Response: {'exception_message': 'Unauthorized'} For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401 An exception occurred. ``` Ensure `basicAuth` is configured in the `worker-values.yaml` file. If you see this error: ``` File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 113, in connect_tcp with map_exceptions(exc_map): File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__ self.gen.throw(typ, value, traceback) File "/usr/local/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions raise to_exc(exc) from exc httpcore.ConnectError: [Errno -2] Name or service not known ``` Ensure the `PREFECT_API_URL` environment variable is properly templated by running the following command: ```bash helm template prefect-worker prefect/prefect-worker -f worker-values.yaml ``` The URL format should look like the following: ``` http://prefect-server.prefect.svc.cluster.local:4200/api ``` If the worker is not in the same cluster and namespace, the precise format will vary. For additional troubleshooting and configuration, review the [Prefect Worker Helm Chart](https://github.com/PrefectHQ/prefect-helm/tree/main/charts/prefect-worker). # How to submit flows directly to dynamic infrastructure Source: https://docs-3.prefect.io/v3/advanced/submit-flows-directly-to-dynamic-infrastructure Submit flows directly to different infrastructure types without a deployment **Beta Feature** This feature is currently in beta. While we encourage you to try it out and provide feedback, please be aware that the API may change in future releases, potentially including breaking changes. Prefect allows you to submit workflows directly to different infrastructure types without requiring a deployment. 
This enables you to dynamically choose where your workflows run based on their requirements, such as: * Training machine learning models that require GPUs * Processing large datasets that need significant memory * Running lightweight tasks that can use minimal resources ## Benefits Submitting workflows directly to dynamic infrastructure provides several advantages: * **Dynamic resource allocation**: Choose infrastructure based on workflow requirements at runtime * **Cost efficiency**: Use expensive infrastructure only when needed * **Consistency**: Ensure workflows always run on the appropriate infrastructure type * **Simplified workflow management**: No need to create and maintain deployments for different infrastructure types ## Supported infrastructure Direct submission of workflows is currently supported for the following infrastructures: | Infrastructure | Required Package | Decorator | | ------------------------- | -------------------- | --------------------------- | | Docker | `prefect-docker` | `@docker` | | Kubernetes | `prefect-kubernetes` | `@kubernetes` | | AWS ECS | `prefect-aws` | `@ecs` | | Google Cloud Run | `prefect-gcp` | `@cloud_run` | | Google Vertex AI | `prefect-gcp` | `@vertex_ai` | | Azure Container Instances | `prefect-azure` | `@azure_container_instance` | Each package can be installed using pip, for example: ```bash pip install prefect-docker ``` ## Prerequisites Before submitting workflows to specific infrastructure, you'll need: 1. A work pool for each infrastructure type you want to use 2. Object storage to associate with your work pool(s) ## Setting up work pools and storage ### Creating a work pool Create work pools for each infrastructure type using the Prefect CLI: ```bash prefect work-pool create NAME --type WORK_POOL_TYPE ``` For detailed information on creating and configuring work pools, refer to the [work pools documentation](/v3/deploy/infrastructure-concepts/work-pools). ### Configuring work pool storage To enable Prefect to run workflows in remote infrastructure, work pools need an associated storage location to store serialized versions of submitted workflows and results from workflow runs. Configure storage for your work pools using one of the supported storage types: ```bash S3 prefect work-pool storage configure s3 WORK_POOL_NAME \ --bucket BUCKET_NAME \ --aws-credentials-block-name BLOCK_NAME ``` ```bash Google Cloud Storage prefect work-pool storage configure gcs WORK_POOL_NAME \ --bucket BUCKET_NAME \ --gcp-credentials-block-name BLOCK_NAME ``` ```base Azure Blob Storage prefect work-pool storage configure azure-blob-storage WORK_POOL_NAME \ --container CONTAINER_NAME \ --azure-blob-storage-credentials-block-name BLOCK_NAME ``` To allow Prefect to upload and download serialized workflows, you can [create a block](/v3/develop/blocks) containing credentials with permission to access your configured storage location. If a credentials block is not provided, Prefect will use the default credentials (e.g., a local profile or an IAM role) as determined by the corresponding cloud provider. You can inspect your storage configuration using: ```bash prefect work-pool storage inspect WORK_POOL_NAME ``` **Local storage for `@docker`** When using the `@docker` decorator with a local Docker engine, you can use volume mounts to share data between your Docker container and host machine. 
Here's an example: ```python from prefect import flow from prefect.filesystems import LocalFileSystem from prefect_docker.experimental import docker result_storage = LocalFileSystem(basepath="/tmp/results") result_storage.save("result-storage", overwrite=True) @docker( work_pool="above-ground", volumes=["/tmp/results:/tmp/results"], ) @flow(result_storage=result_storage) def run_in_docker(name: str): return(f"Hello, {name}!") print(run_in_docker("world")) # prints "Hello, world!" ``` To use local storage, ensure that: 1. The volume mount path is identical on both the host and container side 2. The `LocalFileSystem` block's `basepath` matches the path specified in the volume mount ## Submitting workflows to specific infrastructure To submit a flow to specific infrastructure, use the appropriate decorator for that infrastructure type. Here's an example using `@kubernetes`: ```python from prefect import flow from prefect_kubernetes.experimental.decorators import kubernetes # Submit `my_remote_flow` to run in a Kubernetes job @kubernetes(work_pool="olympic") @flow def my_remote_flow(name: str): print(f"Hello {name}!") @flow def my_flow(): my_remote_flow("Marvin") # Run the flow my_flow() ``` When you run this code on your machine, `my_flow` will execute locally, while `my_remote_flow` will be submitted to run in a Kubernetes job. **Parameters must be serializable** Parameters passed to infrastructure-bound flows are serialized with `cloudpickle` to allow them to be transported to the destination infrastructure. Most Python objects can be serialized with `cloudpickle`, but objects like database connections cannot be serialized. For parameters that cannot be serialized, you'll need to create the object inside your infrastructure-bound workflow. ## Customizing infrastructure configuration You can override the default configuration by providing additional kwargs to the infrastructure decorator: ```python from prefect import flow from prefect_kubernetes.experimental.decorators import kubernetes @kubernetes( work_pool="my-kubernetes-pool", namespace="custom-namespace" ) @flow def custom_namespace_flow(): pass ``` Any kwargs passed to the infrastructure decorator will override the corresponding default value in the [base job template](/v3/how-to-guides/deployment_infra/manage-work-pools#base-job-template) for the specified work pool. ## Further reading * [Work pools](/v3/concepts/work-pools) concept page # How to write transactional workflows Source: https://docs-3.prefect.io/v3/advanced/transactions Prefect supports *transactional semantics* in your workflows that allow you to rollback on task failure and configure groups of tasks that run as an atomic unit. A *transaction* in Prefect corresponds to a job that needs to be done. A transaction runs at most one time, and produces a result record upon completion at a unique address specified by a dynamically computed cache key. These records can be shared across tasks and flows. Under the hood, every Prefect task run is governed by a transaction. In the default mode of task execution, all you need to understand about transactions are [the policies determining the task's cache key computation](/v3/concepts/caching). **Transactions and states** Transactions and states are similar but different in important ways. Transactions determine whether a task should or should not execute, whereas states enable visibility into code execution status. 
## Write your first transaction Tasks can be grouped into a common transaction using the `transaction` context manager: ```python import os from time import sleep from prefect import task, flow from prefect.transactions import transaction @task def write_file(contents: str): "Writes to a file." with open("side-effect.txt", "w") as f: f.write(contents) @write_file.on_rollback def del_file(transaction): "Deletes file." os.unlink("side-effect.txt") @task def quality_test(): "Checks contents of file." with open("side-effect.txt", "r") as f: data = f.readlines() if len(data) < 2: raise ValueError("Not enough data!") @flow def pipeline(contents: str): with transaction(): write_file(contents) sleep(2) # sleeping to give you a chance to see the file quality_test() if __name__ == "__main__": pipeline(contents="hello world") ``` If you run this flow `pipeline(contents="hello world!")` it will fail. Importantly, after the flow has exited, there is no `"side-effect.txt"` file in your working directory. This is because the `write_file` task's `on_rollback` hook was executed due to the transaction failing. **`on_rollback` hooks are different than `on_failure` hooks** Note that the `on_rollback` hook is executed when the `quality_test` task fails, not the `write_file` task that it is associated with it, which succeeded. This is because rollbacks occur whenever the transaction a task is participating in fails, even if that failure is outside the task's local scope. This behavior makes transactions a valuable pattern for managing pipeline failure. ## Transaction lifecycle Every transaction goes through at most four lifecycle stages: * **BEGIN**: in this phase, the transaction's key is computed and looked up. If a record already exists at the key location the transaction considers itself committed. * **STAGE**: in this phase, the transaction stages a piece of data to be committed to its result location. Whether this data is committed or rolled back depends on the commit mode of the transaction. * **ROLLBACK**: if the transaction encounters *any* error **after** staging, it rolls itself back and does not proceed to commit anything. * **COMMIT**: in this final phase, the transaction writes its record to its configured location. At this point the transaction is complete. It is important to note that rollbacks only occur *after* the transaction has been staged. Revisiting our example from above, there are actually *three* transactions at play: * the larger transaction that begins when `with transaction()` is executed; this transaction remains active throughout the duration of the subtransactions within it. * the transaction associated with the `write_file` task. Upon completion of the `write_file` task, this transaction is now **STAGED**. * the transaction associated with the `quality_test` task. This transaction fails before it can be staged, causing a rollback in its parent transaction which then rolls back any staged subtransactions. In particular, the staged `write_file`'s transaction is rolled back. **Tasks also have `on_commit` lifecycle hooks** In addition to the `on_rollback` hook, a task can also register `on_commit` hooks that execute whenever its transaction is committed. A task run persists its result only at transaction commit time, which could be significantly after the task's completion time if it is within a long running transaction. 
The signature for an `on_commit` hook is the same as that of an `on_rollback` hook: {/* pmd-metadata: continuation */} ```python @write_file.on_commit def confirmation(transaction): print("committing a record now using the task's cache key!") ``` ## Idempotency You can ensure sections of code are functionally idempotent by wrapping them in a transaction. By specifying a `key` for your transaction, you can ensure that your code is executed only once. For example, here's a flow that downloads some data from an API and writes it to a file: ```python from prefect import task, flow from prefect.transactions import transaction @task def download_data(): """Imagine this downloads some data from an API""" return "some data" @task def write_data(data: str): """This writes the data to a file""" with open("data.txt", "w") as f: f.write(data) @flow(log_prints=True) def pipeline(): with transaction(key="download-and-write-data") as txn: if txn.is_committed(): print("Data file has already been written. Exiting early.") return data = download_data() write_data(data) if __name__ == "__main__": pipeline() ``` If you run this flow, it will write data to a file the first time, but it will exit early on subsequent runs because the transaction has already been committed. Giving the transaction a `key` will cause the transaction to write a record on commit signifying that the transaction has completed. The call to `txn.is_committed()` will return `True` only if the persisted record exists. ### Handling race conditions Persisting transaction records works well to ensure sequential executions are idempotent, but what about when multiple transactions with the same key run at the same time? By default, transactions have an isolation level of `READ_COMMITTED`, which means that they can see any previously committed records, but they are not prevented from overwriting a record that was created by another transaction between the time they started and the time they committed. To see this behavior in action, run the following script: ```python import threading import uuid from prefect import flow, task from prefect.transactions import transaction @task def download_data(): return f"{threading.current_thread().name} is the winner!" @task def write_file(contents: str): "Writes to a file." with open("race-condition.txt", "w") as f: f.write(contents) @flow def pipeline(transaction_key: str): with transaction(key=transaction_key) as txn: if txn.is_committed(): print("Data file has already been written. Exiting early.") return data = download_data() write_file(data) if __name__ == "__main__": # Run the pipeline twice to see the race condition transaction_key = f"race-condition-{uuid.uuid4()}" thread_1 = threading.Thread(target=pipeline, name="Thread 1", args=(transaction_key,)) thread_2 = threading.Thread(target=pipeline, name="Thread 2", args=(transaction_key,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` If you run this script, you will see that sometimes "Thread 1 is the winner!" is written to the file and sometimes "Thread 2 is the winner!" is written **even though the transactions have the same key**. You can ensure subsequent runs don't exit early by changing the `key` argument between runs. To prevent race conditions, you can set the `isolation_level` of a transaction to `SERIALIZABLE`. This will cause each transaction to take a lock on the provided key. This will prevent other transactions from starting until the first transaction has completed. 
Here's an updated example that uses `SERIALIZABLE` isolation: ```python import threading import uuid from prefect import flow, task from prefect.locking.filesystem import FileSystemLockManager from prefect.results import ResultStore from prefect.settings import PREFECT_HOME from prefect.transactions import IsolationLevel, transaction @task def download_data(): return f"{threading.current_thread().name} is the winner!" @task def write_file(contents: str): "Writes to a file." with open("race-condition.txt", "w") as f: f.write(contents) @flow def pipeline(transaction_key: str): with transaction( key=transaction_key, isolation_level=IsolationLevel.SERIALIZABLE, store=ResultStore( lock_manager=FileSystemLockManager( lock_files_directory=PREFECT_HOME.value() / "locks" ) ), ) as txn: if txn.is_committed(): print("Data file has already been written. Exiting early.") return data = download_data() write_file(data) if __name__ == "__main__": transaction_key = f"race-condition-{uuid.uuid4()}" thread_1 = threading.Thread(target=pipeline, name="Thread 1", args=(transaction_key,)) thread_2 = threading.Thread(target=pipeline, name="Thread 2", args=(transaction_key,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` To use a transaction with the `SERIALIZABLE` isolation level, you must also provide a lock manager (in the example above, through the `store` passed to the `transaction` context manager). The lock manager is responsible for acquiring and releasing locks on the transaction key. In the example above, we use a `FileSystemLockManager` which will manage locks as files on the current instance's filesystem. Prefect offers several lock managers for different concurrency use cases: | Lock Manager | Storage | Supports | Module/Package | | ----------------------- | -------------- | ------------------------------------------- | ---------------------------- | | `MemoryLockManager` | In-memory | Single-process workflows using threads | `prefect.locking.memory` | | `FileSystemLockManager` | Filesystem | Multi-process workflows on a single machine | `prefect.locking.filesystem` | | `RedisLockManager` | Redis database | Distributed workflows | `prefect-redis` | ## Access data within transactions Key-value pairs can be set within a transaction and accessed elsewhere within the transaction, including within the `on_rollback` hook. The code below shows how to set a key-value pair within a transaction and access it within the `on_rollback` hook: ```python import os from time import sleep from prefect import task, flow from prefect.transactions import transaction @task def write_file(filename: str, contents: str): "Writes to a file." with open(filename, "w") as f: f.write(contents) @write_file.on_rollback def del_file(txn): "Deletes file." os.unlink(txn.get("filename")) @task def quality_test(filename): "Checks contents of file." with open(filename, "r") as f: data = f.readlines() if len(data) < 2: raise ValueError("Not enough data!") @flow def pipeline(filename: str, contents: str): with transaction() as txn: txn.set("filename", filename) write_file(filename, contents) sleep(2) # sleeping to give you a chance to see the file quality_test(filename) if __name__ == "__main__": pipeline( filename="side-effect.txt", contents="hello world", ) ``` The value of `filename` is accessible within the `on_rollback` hook. Use `get_transaction()` to access the active transaction object and `txn.get("key")` to access the value of a key. 
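For instance, a task running inside the transaction can read the same key through `get_transaction()`; this is a sketch that assumes the surrounding flow has already called `txn.set("filename", ...)` as in the example above:

```python
from prefect import task
from prefect.transactions import get_transaction


@task
def report_target_file():
    # get_transaction() returns the active transaction, or None when called
    # outside of one, so guard before reading keys from it
    txn = get_transaction()
    if txn is not None:
        print(f"this run will write to {txn.get('filename')}")
```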
# How to emit and use custom events Source: https://docs-3.prefect.io/v3/advanced/use-custom-event-grammar Learn how to define specific trigger conditions based on custom event grammar. ## Motivating custom events Imagine you are running an e-commerce platform and you want to trigger a deployment when a customer completes an order. There might be a number of events that occur during an order on your platform, for example: * `order.created` * `order.item.added` * `order.payment-method.confirmed` * `order.shipping-method.added` * `order.complete` **Event grammar** The above choices of event names are arbitrary. With Prefect events, you're free to select any event grammar that best represents your use case. In this case, we want to trigger a deployment when a user completes an order, so our trigger should: * `expect` an `order.complete` event * `after` an `order.created` event * evaluate these conditions `for_each` user id Finally, it should pass the `user_id` as a parameter to the deployment. ### Define the trigger Here's how this looks in code: ```python post_order_deployment.py from prefect import flow from prefect.events.schemas.deployment_triggers import DeploymentEventTrigger order_complete = DeploymentEventTrigger( expect={"order.complete"}, after={"order.created"}, for_each={"prefect.resource.id"}, parameters={"user_id": "{{ event.resource.id }}"}, ) @flow(log_prints=True) def post_order_complete(user_id: str): print(f"User {user_id} has completed an order -- doing stuff now") if __name__ == "__main__": post_order_complete.serve(triggers=[order_complete]) ``` **Specify multiple events or resources** The `expect` and `after` fields accept a `set` of event names, so you can specify multiple events for each condition. Similarly, the `for_each` field accepts a `set` of resource ids. ### Simulate events To simulate users causing order status events, run the following in a Python shell or script: ```python simulate_events.py import time from prefect.events import emit_event user_id_1, user_id_2 = "123", "456" for event_name, user_id in [ ("order.created", user_id_1), ("order.created", user_id_2), # other user ("order.complete", user_id_1), ]: event = emit_event( event=event_name, resource={"prefect.resource.id": user_id}, ) time.sleep(1) print(f"{user_id} emitted {event_name}") ``` In the above example: * `user_id_1` creates and then completes an order, triggering a run of our deployment. * `user_id_2` creates an order, but no completed event is emitted so no deployment is triggered. # How to configure worker healthchecks Source: https://docs-3.prefect.io/v3/advanced/worker-healthchecks Learn how to monitor worker health and automatically restart workers when they become unresponsive. Worker healthchecks provide a way to monitor whether your Prefect workers are functioning properly and polling for work as expected. This is particularly useful in production environments where you need to ensure workers are available to execute scheduled flow runs. ## Overview Worker healthchecks work by: 1. **Tracking polling activity**: Workers record when they last polled for flow runs from their work pool 2. **Exposing a health endpoint**: When enabled, workers start an HTTP server that provides health status 3. **Detecting unresponsive workers**: The health endpoint returns an error status if the worker hasn't polled recently This allows external monitoring systems, container orchestrators, or process managers to detect and restart unhealthy workers automatically. 
## Enabling Healthchecks Start a worker with healthchecks enabled using the `--with-healthcheck` flag: ```bash prefect worker start --pool "my-pool" --with-healthcheck ``` This starts both the worker and a lightweight HTTP health server that exposes a `/health` endpoint. When enabled, the worker exposes an HTTP endpoint at: ``` http://localhost:8080/health ``` For GET requests the endpoint returns: * **200 OK** with `{"message": "OK"}` when the worker is healthy * **503 Service Unavailable** with `{"message": "Worker may be unresponsive at this time"}` when the worker is unhealthy ### Configuring the Health Server You can customize the health server's host and port using environment variables: ```bash export PREFECT_WORKER_WEBSERVER_HOST=0.0.0.0 export PREFECT_WORKER_WEBSERVER_PORT=9090 prefect worker start --pool "my-pool" --with-healthcheck ``` ## Health Detection Logic A worker is considered unhealthy if it hasn't polled for flow runs within a specific timeframe defined by its configured polling interval. The health check algorithm works as follows: If a worker hasn't made a successful poll within the time window of `PREFECT_WORKER_QUERY_SECONDS * 30` seconds, it is considered unhealthy and its health endpoint will return 503 (Service Unavailable). With default settings, a worker is unhealthy if it hasn't polled in 450 seconds (7.5 minutes). This generous threshold accounts for temporary network issues, API unavailability, or brief worker pauses without triggering false alarms. ## Production Deployment Patterns ### Docker with Health Checks Use Docker's built-in health check functionality by including these lines in your Dockerfile: ```dockerfile # Health check configuration HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \ CMD python -c "import urllib.request as u; u.urlopen('http://localhost:8080/health', timeout=1)" # Start worker with healthcheck CMD ["prefect", "worker", "start", "--pool", "my-pool", "--with-healthcheck"] ``` For more information see [Docker's reference guide](https://docs.docker.com/reference/dockerfile/#healthcheck). ### Kubernetes with Liveness Probes Configure Kubernetes to automatically restart unhealthy worker pods by including this configuration in your worker deployment: ```yaml livenessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 10 failureThreshold: 3 ``` This is enabled by default when using [Prefect's Helm Chart](v3/advanced/server-helm). ### Docker Compose with Health Checks Use Docker Compose's built-in health check functionality by including these lines in your Docker compose file: ```yaml version: '3.8' services: prefect-worker: image: my-prefect-worker:latest command: ["prefect", "worker", "start", "--pool", "my-pool", "--with-healthcheck"] healthcheck: test: ["CMD", "python", "-c", "import urllib.request as u; u.urlopen('http://localhost:8080/health', timeout=1)"] interval: 30s timeout: 10s retries: 3 start_period: 60s restart: unless-stopped ``` For more information see [Docker Compose's reference guide](https://docs.docker.com/reference/compose-file/services/#healthcheck). 
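Before wiring the health endpoint into an orchestrator, you can also check it by hand (assuming the default host and port):

```bash
curl -i http://localhost:8080/health
# a healthy worker responds with HTTP 200 and {"message": "OK"}
```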
## Troubleshooting ### Health endpoint not accessible * Verify the worker was started with `--with-healthcheck` * Check that the configured host/port is accessible (default: `http://localhost:8080/health`) * Ensure no firewall rules are blocking the health port * Port 8080 may conflict with other services - change with `PREFECT_WORKER_WEBSERVER_PORT` * Verify configuration: `prefect config view --show-defaults | grep WORKER` ### Worker appears healthy but not picking up flows * Health checks only verify polling activity, not successful flow execution * Check work pool and work queue configuration: ensure worker is polling the correct pool/queue * Verify deployment configuration matches worker capabilities * Review flow run states - flows may be stuck in PENDING due to concurrency limits * Enable debug logging: set `PREFECT_LOGGING_LEVEL=DEBUG` on the worker to see detailed polling activity * Increase polling frequency temporarily: `PREFECT_WORKER_QUERY_SECONDS=5` ### False positive health failures * Increase `PREFECT_WORKER_QUERY_SECONDS` if your API has high latency * Check for network connectivity issues between worker and Prefect API * Review worker logs for authentication or authorization errors (API key issues) * Verify `PREFECT_API_URL` is correctly configured and accessible * Check for temporary API outages or [rate limiting](v3/concepts/rate-limits) ## Related Configuration Relevant settings for worker health and polling behavior: * `PREFECT_WORKER_HEARTBEAT_SECONDS`: How often workers send heartbeats to the API (default: 30) * `PREFECT_WORKER_QUERY_SECONDS`: How often workers poll for new flow runs (default: 15) * `PREFECT_WORKER_PREFETCH_SECONDS`: How far in advance to submit flow runs (default: 10) * `PREFECT_WORKER_WEBSERVER_HOST`: Health server host (default: 127.0.0.1) * `PREFECT_WORKER_WEBSERVER_PORT`: Health server port (default: 8080) ## Further Reading For more information on worker configuration, see the [Workers concept guide](/v3/concepts/workers/). # Source: https://docs-3.prefect.io/v3/api-ref/cli/artifact # `prefect artifact` ```command prefect artifact [OPTIONS] COMMAND [ARGS]... ``` Inspect and delete artifacts. ## `prefect artifact ls` ```command prefect artifact ls [OPTIONS] ``` List artifacts. The maximum number of artifacts to return. Whether or not to only return the latest version of each artifact. ## `prefect artifact inspect` ```command prefect artifact inspect [OPTIONS] KEY ``` View details about an artifact. \[required] The maximum number of artifacts to return. Specify an output format. Currently supports: json **Example:** `$ prefect artifact inspect "my-artifact"` ```json [ { 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc', 'created': '2023-03-21T21:40:09.895910+00:00', 'updated': '2023-03-21T21:40:09.895910+00:00', 'key': 'my-artifact', 'type': 'markdown', 'description': None, 'data': 'my markdown', 'metadata_': None, 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98', 'task_run_id': None }, { 'id': '57f235b5-2576-45a5-bd93-c829c2900966', 'created': '2023-03-27T23:16:15.536434+00:00', 'updated': '2023-03-27T23:16:15.536434+00:00', 'key': 'my-artifact', 'type': 'markdown', 'description': 'my-artifact-description', 'data': 'my markdown', 'metadata_': None, 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29', 'task_run_id': None } ] ``` ## `prefect artifact delete` ```command prefect artifact delete [OPTIONS] [KEY] ``` Delete an artifact. The key of the artifact to delete. The ID of the artifact to delete. 
**Example:** `$ prefect artifact delete "my-artifact"` # Source: https://docs-3.prefect.io/v3/api-ref/cli/automation # `prefect automation` ```command prefect automation [OPTIONS] COMMAND [ARGS]... ``` Manage automations. ## `prefect automation ls` ```command prefect automation ls [OPTIONS] ``` List all automations. ## `prefect automation inspect` ```command prefect automation inspect [OPTIONS] [NAME] ``` Inspect an automation. Arguments: name: the name of the automation to inspect id: the id of the automation to inspect yaml: output as YAML json: output as JSON Examples: `$ prefect automation inspect "my-automation"` `$ prefect automation inspect --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` `$ prefect automation inspect "my-automation" --yaml` `$ prefect automation inspect "my-automation" --output json` `$ prefect automation inspect "my-automation" --output yaml` An automation's name An automation's id Output as YAML Output as JSON Specify an output format. Currently supports: json, yaml ## `prefect automation resume` ```command prefect automation resume [OPTIONS] [NAME] ``` Resume an automation. Arguments: name: the name of the automation to resume id: the id of the automation to resume Examples: `$ prefect automation resume "my-automation"` `$ prefect automation resume --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation enable` ```command prefect automation enable [OPTIONS] [NAME] ``` Resume an automation. Arguments: name: the name of the automation to resume id: the id of the automation to resume Examples: `$ prefect automation resume "my-automation"` `$ prefect automation resume --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation pause` ```command prefect automation pause [OPTIONS] [NAME] ``` Pause an automation. Arguments: name: the name of the automation to pause id: the id of the automation to pause Examples: `$ prefect automation pause "my-automation"` `$ prefect automation pause --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation disable` ```command prefect automation disable [OPTIONS] [NAME] ``` Pause an automation. Arguments: name: the name of the automation to pause id: the id of the automation to pause Examples: `$ prefect automation pause "my-automation"` `$ prefect automation pause --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation delete` ```command prefect automation delete [OPTIONS] [NAME] ``` Delete an automation. An automation's name An automation's id **Example:** `$ prefect automation delete "my-automation"` `$ prefect automation delete --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` ## `prefect automation create` ```command prefect automation create [OPTIONS] ``` Create one or more automations from a file or JSON string. Path to YAML or JSON file containing automation(s) JSON string containing automation(s) **Example:** `$ prefect automation create --from-file automation.yaml` `$ prefect automation create -f automation.json` `$ prefect automation create --from-json '{"name": "my-automation", "trigger": {...}, "actions": [...]}'` `$ prefect automation create -j '[{"name": "auto1", ...}, {"name": "auto2", ...}]'` # Source: https://docs-3.prefect.io/v3/api-ref/cli/block # `prefect block` ```command prefect block [OPTIONS] COMMAND [ARGS]... ``` Manage blocks. 
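For example, a typical session combines the subcommands documented below; the integration module shown here is illustrative:

```bash
# Register block types from an installed integration module, then review what exists
prefect block register -m prefect_aws.credentials
prefect block type ls
prefect block ls
```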
## `prefect block register` ```command prefect block register [OPTIONS] ``` Register blocks types within a module or file. This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition.  Examples:  Register block types in a Python module: \$ prefect block register -m prefect\_aws.credentials  Register block types in a .py file: \$ prefect block register -f my\_blocks.py Python module containing block types to be registered Path to .py file containing block types to be registered ## `prefect block ls` ```command prefect block ls [OPTIONS] ``` View all configured blocks. ## `prefect block delete` ```command prefect block delete [OPTIONS] [SLUG] ``` Delete a configured block. A block slug. Formatted as '\/\' A block id. ## `prefect block create` ```command prefect block create [OPTIONS] BLOCK_TYPE_SLUG ``` Generate a link to the Prefect UI to create a block. A block type slug. View available types with: prefect block type ls \[required] ## `prefect block inspect` ```command prefect block inspect [OPTIONS] [SLUG] ``` Displays details about a configured block. A Block slug: \/\ A Block id to search for if no slug is given ## `prefect block types` ```command prefect block types [OPTIONS] COMMAND [ARGS]... ``` Inspect and delete block types. ### `prefect block types ls` ```command prefect block types ls [OPTIONS] ``` List all block types. ### `prefect block types inspect` ```command prefect block types inspect [OPTIONS] SLUG ``` Display details about a block type. A block type slug \[required] ### `prefect block types delete` ```command prefect block types delete [OPTIONS] SLUG ``` Delete an unprotected Block Type. A Block type slug \[required] ## `prefect block type` ```command prefect block type [OPTIONS] COMMAND [ARGS]... ``` Inspect and delete block types. ### `prefect block type ls` ```command prefect block type ls [OPTIONS] ``` List all block types. ### `prefect block type inspect` ```command prefect block type inspect [OPTIONS] SLUG ``` Display details about a block type. A block type slug \[required] ### `prefect block type delete` ```command prefect block type delete [OPTIONS] SLUG ``` Delete an unprotected Block Type. A Block type slug \[required] # Source: https://docs-3.prefect.io/v3/api-ref/cli/concurrency-limit # `prefect concurrency-limit` ```command prefect concurrency-limit [OPTIONS] COMMAND [ARGS]... ``` Manage task-level concurrency limits. ## `prefect concurrency-limit create` ```command prefect concurrency-limit create [OPTIONS] TAG CONCURRENCY_LIMIT ``` Create a concurrency limit against a tag. This limit controls how many task runs with that tag may simultaneously be in a Running state. \[required] \[required] ## `prefect concurrency-limit inspect` ```command prefect concurrency-limit inspect [OPTIONS] TAG ``` View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs which are currently using a concurrency slot. \[required] Specify an output format. Currently supports: json ## `prefect concurrency-limit ls` ```command prefect concurrency-limit ls [OPTIONS] ``` View all concurrency limits. ## `prefect concurrency-limit reset` ```command prefect concurrency-limit reset [OPTIONS] TAG ``` Resets the concurrency limit slots set on the specified tag. \[required] ## `prefect concurrency-limit delete` ```command prefect concurrency-limit delete [OPTIONS] TAG ``` Delete the concurrency limit set on the specified tag. 
\[required] # Source: https://docs-3.prefect.io/v3/api-ref/cli/config # `prefect config` ```command prefect config [OPTIONS] COMMAND [ARGS]... ``` View and set Prefect profiles. ## `prefect config set` ```command prefect config set [OPTIONS] SETTINGS... ``` Change the value for a setting by setting the value in the current profile. \[required] ## `prefect config validate` ```command prefect config validate [OPTIONS] ``` Read and validate the current profile. Deprecated settings will be automatically converted to new names unless both are set. ## `prefect config unset` ```command prefect config unset [OPTIONS] SETTING_NAMES... ``` Restore the default value for a setting. Removes the setting from the current profile. \[required] ## `prefect config view` ```command prefect config view [OPTIONS] ``` Display the current settings. Toggle display of default settings. \--show-defaults displays all settings, even if they are not changed from the default values. \--hide-defaults displays only settings that are changed from default values. Toggle display of the source of a value for a setting. The value for a setting can come from the current profile, environment variables, or the defaults. Toggle display of secrets setting values. # Source: https://docs-3.prefect.io/v3/api-ref/cli/dashboard # `prefect dashboard` ```command prefect dashboard [OPTIONS] COMMAND [ARGS]... ``` Commands for interacting with the Prefect UI. ## `prefect dashboard open` ```command prefect dashboard open [OPTIONS] ``` Open the Prefect UI in the browser. # Source: https://docs-3.prefect.io/v3/api-ref/cli/deployment # `prefect deployment` ```command prefect deployment [OPTIONS] COMMAND [ARGS]... ``` Manage deployments. ## `prefect deployment inspect` ```command prefect deployment inspect [OPTIONS] NAME ``` View details about a deployment. \[required] Specify an output format. Currently supports: json **Example:** `$ prefect deployment inspect "hello-world/my-deployment"` ```python { 'id': '610df9c3-0fb4-4856-b330-67f588d20201', 'created': '2022-08-01T18:36:25.192102+00:00', 'updated': '2022-08-01T18:36:25.188166+00:00', 'name': 'my-deployment', 'description': None, 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e', 'schedules': None, 'parameters': {'name': 'Marvin'}, 'tags': ['test'], 'parameter_openapi_schema': { 'title': 'Parameters', 'type': 'object', 'properties': { 'name': { 'title': 'name', 'type': 'string' } }, 'required': ['name'] }, 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32', 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028', 'infrastructure': { 'type': 'process', 'env': {}, 'labels': {}, 'name': None, 'command': ['python', '-m', 'prefect.engine'], 'stream_output': True } } ``` ## `prefect deployment ls` ```command prefect deployment ls [OPTIONS] ``` View all deployments or deployments for specific flows. ## `prefect deployment run` ```command prefect deployment run [OPTIONS] [NAME] ``` Create a flow run for the given flow and deployment. The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the `--watch` flag. A deployed flow's name: \/\ A deployment id to search for if no name is given A key, value pair (key=value) specifying a flow run job variable. The value will be interpreted as JSON. May be passed multiple times to specify multiple job variable values. A key, value pair (key=value) specifying a flow parameter. 
The value will be interpreted as JSON. May be passed multiple times to specify multiple parameter values. A mapping of parameters to values. To use a stdin, pass '-'. Any parameters passed with `--param` will take precedence over these values. A human-readable string specifying a time interval to wait before starting the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'. A human-readable string specifying a time to start the flow run. E.g. 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'. Tag(s) to be applied to flow run. Whether to poll the flow run until a terminal state is reached. How often to poll the flow run for state changes (in seconds). Timeout for --watch. Custom name to give the flow run. ## `prefect deployment delete` ```command prefect deployment delete [OPTIONS] [NAME] ``` Delete a deployment. A deployed flow's name: \/\ A deployment id to search for if no name is given Delete all deployments **Example:** ```bash $ prefect deployment delete test_flow/test_deployment $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6 ``` ## `prefect deployment schedule` ```command prefect deployment schedule [OPTIONS] COMMAND [ARGS]... ``` Manage deployment schedules. ### `prefect deployment schedule create` ```command prefect deployment schedule create [OPTIONS] NAME ``` Create a schedule for a given deployment. \[required] An interval to schedule on, specified in seconds The anchor date for an interval schedule Deployment schedule rrule string Deployment schedule cron string Control how croniter handles `day` and `day_of_week` entries Deployment schedule timezone string e.g. 'America/New\_York' Whether the schedule is active. Defaults to True. Replace the deployment's current schedule(s) with this new schedule. Accept the confirmation prompt without prompting ### `prefect deployment schedule delete` ```command prefect deployment schedule delete [OPTIONS] DEPLOYMENT_NAME SCHEDULE_ID ``` Delete a deployment schedule. \[required] \[required] Accept the confirmation prompt without prompting ### `prefect deployment schedule pause` ```command prefect deployment schedule pause [OPTIONS] DEPLOYMENT_NAME SCHEDULE_ID ``` Pause a deployment schedule. \[required] \[required] ### `prefect deployment schedule resume` ```command prefect deployment schedule resume [OPTIONS] DEPLOYMENT_NAME SCHEDULE_ID ``` Resume a deployment schedule. \[required] \[required] ### `prefect deployment schedule ls` ```command prefect deployment schedule ls [OPTIONS] DEPLOYMENT_NAME ``` View all schedules for a deployment. \[required] ### `prefect deployment schedule clear` ```command prefect deployment schedule clear [OPTIONS] DEPLOYMENT_NAME ``` Clear all schedules for a deployment. \[required] Accept the confirmation prompt without prompting # Source: https://docs-3.prefect.io/v3/api-ref/cli/dev # `prefect dev` ```command prefect dev [OPTIONS] COMMAND [ARGS]... ``` Internal Prefect development. Note that many of these commands require extra dependencies (such as npm and MkDocs) to function properly. ## `prefect dev build-docs` ```command prefect dev build-docs [OPTIONS] ``` Builds REST API reference documentation for static display. ## `prefect dev build-ui` ```command prefect dev build-ui [OPTIONS] ``` Installs dependencies and builds UI locally. Requires npm. ## `prefect dev ui` ```command prefect dev ui [OPTIONS] ``` Starts a hot-reloading development UI. ## `prefect dev api` ```command prefect dev api [OPTIONS] ``` Starts a hot-reloading development API. 
## `prefect dev start` ```command prefect dev start [OPTIONS] ``` Starts a hot-reloading development server with API, UI, and agent processes. Each service has an individual command if you wish to start them separately. Each service can be excluded here as well. ## `prefect dev build-image` ```command prefect dev build-image [OPTIONS] ``` Build a docker image for development. The architecture to build the container for. Defaults to the architecture of the host Python. \[default: arm64] The Python version to build the container for. Defaults to the version of the host Python. \[default: 3.10] An alternative flavor to build, for example 'conda'. Defaults to the standard Python base image ## `prefect dev container` ```command prefect dev container [OPTIONS] ``` Run a docker container with local code mounted and installed. # Source: https://docs-3.prefect.io/v3/api-ref/cli/event # `prefect event` ```command prefect event [OPTIONS] COMMAND [ARGS]... ``` Stream events. ## `prefect event stream` ```command prefect event stream [OPTIONS] ``` Subscribes to the event stream of a workspace, printing each event as it is received. By default, events are printed as JSON, but can be printed as text by passing `--format text`. Output format (json or text) File to write events to Stream events for entire account, including audit logs Stream only one event # Source: https://docs-3.prefect.io/v3/api-ref/cli/flow # `prefect flow` ```command prefect flow [OPTIONS] COMMAND [ARGS]... ``` View and serve flows. ## `prefect flow ls` ```command prefect flow ls [OPTIONS] ``` View flows. ## `prefect flow serve` ```command prefect flow serve [OPTIONS] ENTRYPOINT ``` Serve a flow via an entrypoint. The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py:flow_func_name`. \[required] The name to give the deployment created for the flow. The description to give the created deployment. If not provided, the description will be populated from the flow's description. A version to give the created deployment. One or more optional tags to apply to the created deployment. A cron string that will be used to set a schedule for the created deployment. An integer specifying an interval (in seconds) between scheduled runs of the flow. The start date for an interval schedule. An RRule that will be used to set a schedule for the created deployment. Timezone to used scheduling flow runs e.g. 'America/New\_York' If set, provided schedule will be paused when the serve command is stopped. If not set, the schedules will continue running. The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use global\_limit. The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment. # Source: https://docs-3.prefect.io/v3/api-ref/cli/flow-run # `prefect flow-run` ```command prefect flow-run [OPTIONS] COMMAND [ARGS]... ``` Interact with flow runs. ## `prefect flow-run inspect` ```command prefect flow-run inspect [OPTIONS] ID ``` View details about a flow run. \[required] Open the flow run in a web browser. Specify an output format. Currently supports: json ## `prefect flow-run ls` ```command prefect flow-run ls [OPTIONS] ``` View recent flow runs or flow runs for specific flows. Arguments: flow\_name: Name of the flow limit: Maximum number of flow runs to list. Defaults to 15. state: Name of the flow run's state. Can be provided multiple times. 
Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'. state\_type: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', and 'PAUSED'. Examples: \$ prefect flow-run ls --state Running \$ prefect flow-run ls --state Running --state late \$ prefect flow-run ls --state-type RUNNING \$ prefect flow-run ls --state-type RUNNING --state-type FAILED Name of the flow Maximum number of flow runs to list Name of the flow run's state Type of the flow run's state ## `prefect flow-run delete` ```command prefect flow-run delete [OPTIONS] ID ``` Delete a flow run by ID. \[required] ## `prefect flow-run cancel` ```command prefect flow-run cancel [OPTIONS] ID ``` Cancel a flow run by ID. \[required] ## `prefect flow-run logs` ```command prefect flow-run logs [OPTIONS] ID ``` View logs for a flow run. \[required] Show the first 20 logs instead of all logs. Number of logs to show when using the --head or --tail flag. If None, defaults to 20. Reverse the logs order to print the most recent logs first Show the last 20 logs instead of all logs. ## `prefect flow-run execute` ```command prefect flow-run execute [OPTIONS] [ID] ``` ID of the flow run to execute # Source: https://docs-3.prefect.io/v3/api-ref/cli/global-concurrency-limit # `prefect global-concurrency-limit` ```command prefect global-concurrency-limit [OPTIONS] COMMAND [ARGS]... ``` Manage global concurrency limits. ## `prefect global-concurrency-limit ls` ```command prefect global-concurrency-limit ls [OPTIONS] ``` List all global concurrency limits. ## `prefect global-concurrency-limit inspect` ```command prefect global-concurrency-limit inspect [OPTIONS] NAME ``` Inspect a global concurrency limit. The name of the global concurrency limit to inspect. \[required] Specify an output format. Currently supports: json Path to .json file to write the global concurrency limit output to. ## `prefect global-concurrency-limit delete` ```command prefect global-concurrency-limit delete [OPTIONS] NAME ``` Delete a global concurrency limit. The name of the global concurrency limit to delete. \[required] ## `prefect global-concurrency-limit enable` ```command prefect global-concurrency-limit enable [OPTIONS] NAME ``` Enable a global concurrency limit. The name of the global concurrency limit to enable. \[required] ## `prefect global-concurrency-limit disable` ```command prefect global-concurrency-limit disable [OPTIONS] NAME ``` Disable a global concurrency limit. The name of the global concurrency limit to disable. \[required] ## `prefect global-concurrency-limit update` ```command prefect global-concurrency-limit update [OPTIONS] NAME ``` Update a global concurrency limit. The name of the global concurrency limit to update. \[required] Enable the global concurrency limit. Disable the global concurrency limit. The limit of the global concurrency limit. The number of active slots. The slot decay per second. **Example:** \$ prefect global-concurrency-limit update my-gcl --limit 10 \$ prefect gcl update my-gcl --active-slots 5 \$ prefect gcl update my-gcl --slot-decay-per-second 0.5 \$ prefect gcl update my-gcl --enable \$ prefect gcl update my-gcl --disable --limit 5 ## `prefect global-concurrency-limit create` ```command prefect global-concurrency-limit create [OPTIONS] NAME ``` Create a global concurrency limit.
Arguments: name (str): The name of the global concurrency limit to create. limit (int): The limit of the global concurrency limit. disable (Optional\[bool]): Create an inactive global concurrency limit. active\_slots (Optional\[int]): The number of active slots. slot\_decay\_per\_second (Optional\[float]): The slot decay per second. Examples: \$ prefect global-concurrency-limit create my-gcl --limit 10 \$ prefect gcl create my-gcl --limit 5 --active-slots 3 \$ prefect gcl create my-gcl --limit 5 --active-slots 3 --slot-decay-per-second 0.5 \$ prefect gcl create my-gcl --limit 5 --inactive The name of the global concurrency limit to create. \[required] The limit of the global concurrency limit. Create an inactive global concurrency limit. The number of active slots. The slot decay per second. # Source: https://docs-3.prefect.io/v3/api-ref/cli/init # `prefect init` ```command prefect init [OPTIONS] ``` Initialize a new deployment configuration recipe. One or more fields to pass to the recipe (e.g., image\_name) in the format of key=value. # Source: https://docs-3.prefect.io/v3/api-ref/cli/profile # `prefect profile` ```command prefect profile [OPTIONS] COMMAND [ARGS]... ``` Select and manage Prefect profiles. ## `prefect profile ls` ```command prefect profile ls [OPTIONS] ``` List profile names. ## `prefect profile create` ```command prefect profile create [OPTIONS] NAME ``` Create a new profile. \[required] Copy an existing profile. ## `prefect profile use` ```command prefect profile use [OPTIONS] NAME ``` Set the given profile to active. \[required] ## `prefect profile delete` ```command prefect profile delete [OPTIONS] NAME ``` Delete the given profile. \[required] ## `prefect profile rename` ```command prefect profile rename [OPTIONS] NAME NEW_NAME ``` Change the name of a profile. \[required] \[required] ## `prefect profile inspect` ```command prefect profile inspect [OPTIONS] [NAME] ``` Display settings from a given profile; defaults to active. Name of profile to inspect; defaults to active profile. Specify an output format. Currently supports: json ## `prefect profile populate-defaults` ```command prefect profile populate-defaults [OPTIONS] ``` Populate the profiles configuration with default base profiles, preserving existing user profiles. # Source: https://docs-3.prefect.io/v3/api-ref/cli/profiles # `prefect profiles` ```command prefect profiles [OPTIONS] COMMAND [ARGS]... ``` Select and manage Prefect profiles. ## `prefect profiles ls` ```command prefect profiles ls [OPTIONS] ``` List profile names. ## `prefect profiles create` ```command prefect profiles create [OPTIONS] NAME ``` Create a new profile. \[required] Copy an existing profile. ## `prefect profiles use` ```command prefect profiles use [OPTIONS] NAME ``` Set the given profile to active. \[required] ## `prefect profiles delete` ```command prefect profiles delete [OPTIONS] NAME ``` Delete the given profile. \[required] ## `prefect profiles rename` ```command prefect profiles rename [OPTIONS] NAME NEW_NAME ``` Change the name of a profile. \[required] \[required] ## `prefect profiles inspect` ```command prefect profiles inspect [OPTIONS] [NAME] ``` Display settings from a given profile; defaults to active. Name of profile to inspect; defaults to active profile. Specify an output format. Currently supports: json ## `prefect profiles populate-defaults` ```command prefect profiles populate-defaults [OPTIONS] ``` Populate the profiles configuration with default base profiles, preserving existing user profiles. 
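As a quick end-to-end sketch of the profile commands above (the profile name and API URL are illustrative):

```bash
# Create a new profile, make it active, and store a setting in it
prefect profile create staging
prefect profile use staging
prefect config set PREFECT_API_URL="http://localhost:4200/api"

# Review what the profile now contains
prefect profile inspect staging
```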
# Source: https://docs-3.prefect.io/v3/api-ref/cli/server # `prefect server` ```command prefect server [OPTIONS] COMMAND [ARGS]... ``` Start a Prefect server instance and interact with the database ## `prefect server start` ```command prefect server start [OPTIONS] ``` Start a Prefect server instance Only run the webserver API and UI Run the server in the background ## `prefect server stop` ```command prefect server stop [OPTIONS] ``` Stop a Prefect server instance running in the background ## `prefect server database` ```command prefect server database [OPTIONS] COMMAND [ARGS]... ``` Interact with the database. ### `prefect server database reset` ```command prefect server database reset [OPTIONS] ``` Drop and recreate all Prefect database tables ### `prefect server database upgrade` ```command prefect server database upgrade [OPTIONS] ``` Upgrade the Prefect database The revision to pass to `alembic upgrade`. If not provided, runs all migrations. Flag to show what migrations would be made without applying them. Will emit sql statements to stdout. ### `prefect server database downgrade` ```command prefect server database downgrade [OPTIONS] ``` Downgrade the Prefect database The revision to pass to `alembic downgrade`. If not provided, downgrades to the most recent revision. Use 'base' to run all migrations. Flag to show what migrations would be made without applying them. Will emit sql statements to stdout. ### `prefect server database revision` ```command prefect server database revision [OPTIONS] ``` Create a new migration for the Prefect database A message to describe the migration. ### `prefect server database stamp` ```command prefect server database stamp [OPTIONS] REVISION ``` Stamp the revision table with the given revision; don't run any migrations \[required] ## `prefect server services` ```command prefect server services [OPTIONS] COMMAND [ARGS]... ``` Interact with server loop services. ### `prefect server services manager` ```command prefect server services manager [OPTIONS] ``` This is an internal entrypoint used by `prefect server services start --background`. Users do not call this directly. We do everything in sync so that the child won't exit until the user kills it. ### `prefect server services list-services` ```command prefect server services list-services [OPTIONS] ``` List all available services and their status. ### `prefect server services ls` ```command prefect server services ls [OPTIONS] ``` List all available services and their status. ### `prefect server services start-services` ```command prefect server services start-services [OPTIONS] ``` Start all enabled Prefect services in one process. Run the services in the background ### `prefect server services start` ```command prefect server services start [OPTIONS] ``` Start all enabled Prefect services in one process. Run the services in the background ### `prefect server services stop-services` ```command prefect server services stop-services [OPTIONS] ``` Stop any background Prefect services that were started. ### `prefect server services stop` ```command prefect server services stop [OPTIONS] ``` Stop any background Prefect services that were started. # Source: https://docs-3.prefect.io/v3/api-ref/cli/shell # `prefect shell` ```command prefect shell [OPTIONS] COMMAND [ARGS]... ``` Serve and watch shell commands as Prefect flows. ## `prefect shell watch` ```command prefect shell watch [OPTIONS] COMMAND ``` Executes a shell command and observes it as Prefect flow. 
\[required] Log the output of the command to Prefect logs. Name of the flow run. Name of the flow. Stream the output of the command. Optional tags for the flow run. ## `prefect shell serve` ```command prefect shell serve [OPTIONS] COMMAND ``` Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc. This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for easier categorization and filtering within the Prefect UI. Additionally, it supports streaming command output to Prefect logs, setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks. \[required] Name of the flow Name of the deployment Tag for the deployment (can be provided multiple times) Stream the output of the command Cron schedule for the flow Timezone for the schedule The maximum number of flow runs that can execute at the same time Run the agent loop once, instead of forever. # Source: https://docs-3.prefect.io/v3/api-ref/cli/task # `prefect task` ```command prefect task [OPTIONS] COMMAND [ARGS]... ``` Work with task scheduling. ## `prefect task serve` ```command prefect task serve [OPTIONS] [ENTRYPOINTS]... ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. The paths to one or more tasks, in the form of `./path/to/file.py:task_func_name`. The module(s) to import the tasks from. The maximum number of tasks that can be run concurrently. Defaults to 10. # Source: https://docs-3.prefect.io/v3/api-ref/cli/task-run # `prefect task-run` ```command prefect task-run [OPTIONS] COMMAND [ARGS]... ``` View and inspect task runs. ## `prefect task-run inspect` ```command prefect task-run inspect [OPTIONS] ID ``` View details about a task run. \[required] Open the task run in a web browser. Specify an output format. Currently supports: json ## `prefect task-run ls` ```command prefect task-run ls [OPTIONS] ``` View recent task runs Name of the task Maximum number of task runs to list Name of the task run's state Type of the task run's state ## `prefect task-run logs` ```command prefect task-run logs [OPTIONS] ID ``` View logs for a task run. \[required] Show the first 20 logs instead of all logs. Number of logs to show when using the --head or --tail flag. If None, defaults to 20. Reverse the logs order to print the most recent logs first Show the last 20 logs instead of all logs. # Source: https://docs-3.prefect.io/v3/api-ref/cli/variable # `prefect variable` ```command prefect variable [OPTIONS] COMMAND [ARGS]... ``` Manage variables. ## `prefect variable ls` ```command prefect variable ls [OPTIONS] ``` List variables. The maximum number of variables to return. ## `prefect variable inspect` ```command prefect variable inspect [OPTIONS] NAME ``` View details about a variable. \[required] Specify an output format. Currently supports: json ## `prefect variable get` ```command prefect variable get [OPTIONS] NAME ``` Get a variable's value. \[required] ## `prefect variable set` ```command prefect variable set [OPTIONS] NAME VALUE ``` Set a variable. If the variable already exists, use `--overwrite` to update it. \[required] \[required] Overwrite the variable if it already exists. Tag to associate with the variable. 
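For instance, a minimal round trip with the commands documented here (the variable name and values are illustrative):

```bash
# Create a variable, overwrite its value, then read it back
prefect variable set environment staging
prefect variable set environment production --overwrite
prefect variable get environment
```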
## `prefect variable unset` ```command prefect variable unset [OPTIONS] NAME ``` Unset a variable. \[required] ## `prefect variable delete` ```command prefect variable delete [OPTIONS] NAME ``` Unset a variable. \[required] # Source: https://docs-3.prefect.io/v3/api-ref/cli/version # `prefect version` ```command prefect version [OPTIONS] ``` Get the current Prefect version and integration information. Omit integration information # Source: https://docs-3.prefect.io/v3/api-ref/cli/work-pool # `prefect work-pool` ```command prefect work-pool [OPTIONS] COMMAND [ARGS]... ``` Manage work pools. ## `prefect work-pool create` ```command prefect work-pool create [OPTIONS] NAME ``` Create a new work pool or update an existing one.  Examples:  Create a Kubernetes work pool in a paused state:  \$ prefect work-pool create "my-pool" --type kubernetes --paused  Create a Docker work pool with a custom base job template:  \$ prefect work-pool create "my-pool" --type docker --base-job-template ./base-job-template.json  Update an existing work pool:  \$ prefect work-pool create "existing-pool" --base-job-template ./base-job-template.json --overwrite The name of the work pool. \[required] The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. Whether or not to create the work pool in a paused state. The type of work pool to create. Whether or not to use the created work pool as the local default for deployment. Whether or not to provision infrastructure for the work pool if supported for the given work pool type. Whether or not to overwrite an existing work pool with the same name. ## `prefect work-pool ls` ```command prefect work-pool ls [OPTIONS] ``` List work pools.  Examples: \$ prefect work-pool ls Show additional information about work pools. ## `prefect work-pool inspect` ```command prefect work-pool inspect [OPTIONS] NAME ``` Inspect a work pool.  Examples: \$ prefect work-pool inspect "my-pool" \$ prefect work-pool inspect "my-pool" --output json The name of the work pool to inspect. \[required] Specify an output format. Currently supports: json ## `prefect work-pool pause` ```command prefect work-pool pause [OPTIONS] NAME ``` Pause a work pool.  Examples: \$ prefect work-pool pause "my-pool" The name of the work pool to pause. \[required] ## `prefect work-pool resume` ```command prefect work-pool resume [OPTIONS] NAME ``` Resume a work pool.  Examples: \$ prefect work-pool resume "my-pool" The name of the work pool to resume. \[required] ## `prefect work-pool update` ```command prefect work-pool update [OPTIONS] NAME ``` Update a work pool.  Examples: \$ prefect work-pool update "my-pool" The name of the work pool to update. \[required] The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. If None, the base job template will not be modified. The concurrency limit for the work pool. If None, the concurrency limit will not be modified. The description for the work pool. If None, the description will not be modified. ## `prefect work-pool provision-infrastructure` ```command prefect work-pool provision-infrastructure [OPTIONS] NAME ``` Provision infrastructure for a work pool.  Examples: \$ prefect work-pool provision-infrastructure "my-pool" \$ prefect work-pool provision-infra "my-pool" The name of the work pool to provision infrastructure for. 
\[required] ## `prefect work-pool provision-infra` ```command prefect work-pool provision-infra [OPTIONS] NAME ``` Provision infrastructure for a work pool.  Examples: \$ prefect work-pool provision-infrastructure "my-pool" \$ prefect work-pool provision-infra "my-pool" The name of the work pool to provision infrastructure for. \[required] ## `prefect work-pool delete` ```command prefect work-pool delete [OPTIONS] NAME ``` Delete a work pool.  Examples: \$ prefect work-pool delete "my-pool" The name of the work pool to delete. \[required] ## `prefect work-pool set-concurrency-limit` ```command prefect work-pool set-concurrency-limit [OPTIONS] NAME CONCURRENCY_LIMIT ``` Set the concurrency limit for a work pool.  Examples: \$ prefect work-pool set-concurrency-limit "my-pool" 10 The name of the work pool to update. \[required] The new concurrency limit for the work pool. \[required] ## `prefect work-pool clear-concurrency-limit` ```command prefect work-pool clear-concurrency-limit [OPTIONS] NAME ``` Clear the concurrency limit for a work pool.  Examples: \$ prefect work-pool clear-concurrency-limit "my-pool" The name of the work pool to update. \[required] ## `prefect work-pool get-default-base-job-template` ```command prefect work-pool get-default-base-job-template [OPTIONS] ``` Get the default base job template for a given work pool type.  Examples: \$ prefect work-pool get-default-base-job-template --type kubernetes The type of work pool for which to get the default base job template. If set, write the output to a file. ## `prefect work-pool preview` ```command prefect work-pool preview [OPTIONS] [NAME] ``` Preview the work pool's scheduled work for all queues.  Examples: \$ prefect work-pool preview "my-pool" --hours 24 The name or ID of the work pool to preview The number of hours to look ahead; defaults to 1 hour ## `prefect work-pool storage` ```command prefect work-pool storage [OPTIONS] COMMAND [ARGS]... ``` EXPERIMENTAL: Manage work pool storage. ### `prefect work-pool storage inspect` ```command prefect work-pool storage inspect [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Inspect the storage configuration for a work pool. The name of the work pool to display storage configuration for. \[required] Specify an output format. Currently supports: json **Example:** \$ prefect work-pool storage inspect "my-pool" \$ prefect work-pool storage inspect "my-pool" --output json ### `prefect work-pool storage configure` ```command prefect work-pool storage configure [OPTIONS] COMMAND [ARGS]... ``` EXPERIMENTAL: Configure work pool storage. #### `prefect work-pool storage configure s3` ```command prefect work-pool storage configure s3 [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Configure AWS S3 storage for a work pool.  Examples: \$ prefect work-pool storage configure s3 "my-pool" --bucket my-bucket --aws-credentials-block-name my-credentials The name of the work pool to configure storage for. \[required] The name of the S3 bucket to use. The name of the AWS credentials block to use. #### `prefect work-pool storage configure gcs` ```command prefect work-pool storage configure gcs [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Configure Google Cloud storage for a work pool.  Examples: \$ prefect work-pool storage configure gcs "my-pool" --bucket my-bucket --gcp-credentials-block-name my-credentials The name of the work pool to configure storage for. \[required] The name of the Google Cloud Storage bucket to use. The name of the Google Cloud credentials block to use.
#### `prefect work-pool storage configure azure-blob-storage` ```command prefect work-pool storage configure azure-blob-storage [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Configure Azure Blob Storage for a work pool.  Examples: \$ prefect work-pool storage configure azure-blob-storage "my-pool" --container my-container --azure-blob-storage-credentials-block-name my-credentials The name of the work pool to configure storage for. \[required] The name of the Azure Blob Storage container to use. The name of the Azure Blob Storage credentials block to use. # Source: https://docs-3.prefect.io/v3/api-ref/cli/work-queue # `prefect work-queue` ```command prefect work-queue [OPTIONS] COMMAND [ARGS]... ``` Manage work queues. ## `prefect work-queue create` ```command prefect work-queue create [OPTIONS] NAME ``` Create a work queue. The unique name to assign this work queue \[required] The concurrency limit to set on the queue. The name of the work pool to create the work queue in. The associated priority for the created work queue ## `prefect work-queue set-concurrency-limit` ```command prefect work-queue set-concurrency-limit [OPTIONS] NAME LIMIT ``` Set a concurrency limit on a work queue. The name or ID of the work queue \[required] The concurrency limit to set on the queue. \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue clear-concurrency-limit` ```command prefect work-queue clear-concurrency-limit [OPTIONS] NAME ``` Clear any concurrency limits from a work queue. The name or ID of the work queue to clear \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue pause` ```command prefect work-queue pause [OPTIONS] NAME ``` Pause a work queue. The name or ID of the work queue to pause \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue resume` ```command prefect work-queue resume [OPTIONS] NAME ``` Resume a paused work queue. The name or ID of the work queue to resume \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue inspect` ```command prefect work-queue inspect [OPTIONS] [NAME] ``` Inspect a work queue by ID. The name or ID of the work queue to inspect The name of the work pool that the work queue belongs to. Specify an output format. Currently supports: json ## `prefect work-queue ls` ```command prefect work-queue ls [OPTIONS] ``` View all work queues. Display more information. Will match work queues with names that start with the specified prefix string The name of the work pool containing the work queues to list. ## `prefect work-queue preview` ```command prefect work-queue preview [OPTIONS] [NAME] ``` Preview a work queue. The name or ID of the work queue to preview The number of hours to look ahead; defaults to 1 hour The name of the work pool that the work queue belongs to. ## `prefect work-queue delete` ```command prefect work-queue delete [OPTIONS] NAME ``` Delete a work queue by ID. The name or ID of the work queue to delete \[required] The name of the work pool containing the work queue to delete. ## `prefect work-queue read-runs` ```command prefect work-queue read-runs [OPTIONS] NAME ``` Get runs in a work queue. Note that this will trigger an artificial poll of the work queue. The name or ID of the work queue to poll \[required] The name of the work pool containing the work queue to poll. # Source: https://docs-3.prefect.io/v3/api-ref/cli/worker # `prefect worker` ```command prefect worker [OPTIONS] COMMAND [ARGS]... 
``` Start and interact with workers. ## `prefect worker start` ```command prefect worker start [OPTIONS] ``` Start a worker process to poll a work pool for flow runs. The name to give to the started worker. If not provided, a unique name will be generated. The work pool the started worker should poll. One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool. The type of worker to start. If not provided, the worker type will be inferred from the work pool. Number of seconds to look into the future for scheduled flow runs. Only run worker polling once. By default, the worker runs forever. Maximum number of flow runs to start simultaneously. Start a healthcheck server for the worker. Install policy to use workers from Prefect integration packages. The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. If the work pool already exists, this will be ignored. # API & SDK References Source: https://docs-3.prefect.io/v3/api-ref/index Explore Prefect's auto-generated API & SDK reference documentation. Prefect auto-generates reference documentation for the following components: * **[Prefect Python SDK](/v3/api-ref/python)**: used to build, test, and execute workflows. * **[Prefect REST API](/v3/api-ref/rest-api)**: used by workflow clients and the Prefect UI for orchestration and data retrieval. * Prefect Cloud REST API documentation: [https://app.prefect.cloud/api/docs](https://app.prefect.cloud/api/docs). * Self-hosted Prefect server [REST API documentation](/v3/api-ref/rest-api/server/). Additionally, if self-hosting a Prefect server instance, you can access REST API documentation at the `/docs` endpoint of your [`PREFECT_API_URL`](/v3/develop/settings-and-profiles/). For example, if you run `prefect server start` with no additional configuration you can find this reference at [http://localhost:4200/docs](http://localhost:4200/docs). # artifacts Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-artifacts # `prefect.artifacts` Interface for creating and reading artifacts. ## Functions ### `acreate_link_artifact` ```python acreate_link_artifact(link: str, link_text: str | None = None, key: str | None = None, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID ``` Create a link artifact. **Args:** * `link`: The link to create. * `link_text`: The link text. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `create_link_artifact` ```python create_link_artifact(link: str, link_text: str | None = None, key: str | None = None, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID ``` Create a link artifact. **Args:** * `link`: The link to create. * `link_text`: The link text. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `acreate_markdown_artifact` ```python acreate_markdown_artifact(markdown: str, key: str | None = None, description: str | None = None) -> UUID ``` Create a markdown artifact. 
**Args:** * `markdown`: The markdown to create. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `create_markdown_artifact` ```python create_markdown_artifact(markdown: str, key: str | None = None, description: str | None = None) -> UUID ``` Create a markdown artifact. **Args:** * `markdown`: The markdown to create. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `acreate_table_artifact` ```python acreate_table_artifact(table: dict[str, list[Any]] | list[dict[str, Any]] | list[list[Any]], key: str | None = None, description: str | None = None) -> UUID ``` Create a table artifact asynchronously. **Args:** * `table`: The table to create. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `create_table_artifact` ```python create_table_artifact(table: dict[str, list[Any]] | list[dict[str, Any]] | list[list[Any]], key: str | None = None, description: str | None = None) -> UUID ``` Create a table artifact. **Args:** * `table`: The table to create. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `acreate_progress_artifact` ```python acreate_progress_artifact(progress: float, key: str | None = None, description: str | None = None) -> UUID ``` Create a progress artifact asynchronously. **Args:** * `progress`: The percentage of progress represented by a float between 0 and 100. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `create_progress_artifact` ```python create_progress_artifact(progress: float, key: str | None = None, description: str | None = None) -> UUID ``` Create a progress artifact. **Args:** * `progress`: The percentage of progress represented by a float between 0 and 100. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `aupdate_progress_artifact` ```python aupdate_progress_artifact(artifact_id: UUID, progress: float, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID ``` Update a progress artifact asynchronously. **Args:** * `artifact_id`: The ID of the artifact to update. * `progress`: The percentage of progress represented by a float between 0 and 100. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. 
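As a usage sketch for the progress-artifact helpers documented on this page: a flow can create a progress artifact once, keep its ID, and update it as work proceeds. This is a minimal illustration rather than a canonical pattern; the key, description, and workload are illustrative:

```python
from prefect import flow
from prefect.artifacts import create_progress_artifact, update_progress_artifact


@flow
def process_items(items: list[str]) -> None:
    # Create the artifact at 0% and keep its ID for later updates.
    # Keys may only contain lowercase letters, numbers, and dashes.
    artifact_id = create_progress_artifact(
        progress=0.0,
        key="item-processing-progress",
        description="Progress of item processing",
    )
    for i, _item in enumerate(items, start=1):
        # ... do the real work for each item here ...
        # Progress is a percentage between 0 and 100.
        update_progress_artifact(artifact_id=artifact_id, progress=100 * i / len(items))


if __name__ == "__main__":
    process_items(["a", "b", "c"])
```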
### `update_progress_artifact` ```python update_progress_artifact(artifact_id: UUID, progress: float, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID ``` Update a progress artifact. **Args:** * `artifact_id`: The ID of the artifact to update. * `progress`: The percentage of progress represented by a float between 0 and 100. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `acreate_image_artifact` ```python acreate_image_artifact(image_url: str, key: str | None = None, description: str | None = None) -> UUID ``` Create an image artifact asynchronously. **Args:** * `image_url`: The URL of the image to display. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The image artifact ID. ### `create_image_artifact` ```python create_image_artifact(image_url: str, key: str | None = None, description: str | None = None) -> UUID ``` Create an image artifact. **Args:** * `image_url`: The URL of the image to display. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The image artifact ID. ## Classes ### `Artifact` An artifact is a piece of data that is created by a flow or task run. [https://docs.prefect.io/latest/develop/artifacts](https://docs.prefect.io/latest/develop/artifacts) **Args:** * `type`: A string identifying the type of artifact. * `key`: A user-provided string identifier. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. * `data`: A JSON payload that allows for a result to be retrieved. **Methods:** #### `acreate` ```python acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. 
#### `format` ```python format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `LinkArtifact` **Methods:** #### `acreate` ```python acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python aformat(self) -> str ``` #### `aformat` ```python aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python format(self) -> str ``` #### `format` ```python format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. 
**Returns:** * The artifact, either retrieved or created. ### `MarkdownArtifact` **Methods:** #### `acreate` ```python acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python aformat(self) -> str ``` #### `aformat` ```python aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python format(self) -> str ``` #### `format` ```python format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `TableArtifact` **Methods:** #### `acreate` ```python acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python aformat(self) -> str ``` #### `aformat` ```python aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). 
#### `aget_or_create` ```python aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python format(self) -> str ``` #### `format` ```python format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `ProgressArtifact` **Methods:** #### `acreate` ```python acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python aformat(self) -> float ``` #### `aformat` ```python aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. 
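The snippet below is a hedged sketch of how progress artifacts are typically driven: it assumes the `create_progress_artifact` helper from this module alongside `update_progress_artifact` documented earlier in this section; the key and the work loop are illustrative.

```python
from prefect import flow
from prefect.artifacts import create_progress_artifact, update_progress_artifact


@flow
def process_items(items: list[str]) -> None:
    # Create the progress artifact at 0% and advance it as work completes.
    artifact_id = create_progress_artifact(progress=0.0, key="item-processing")
    for i, item in enumerate(items, start=1):
        ...  # process the item
        update_progress_artifact(artifact_id, progress=100 * i / len(items))
```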
#### `format` ```python format(self) -> float ``` #### `format` ```python format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `ImageArtifact` An artifact that will display an image from a publicly accessible URL in the UI. **Args:** * `image_url`: The URL of the image to display. **Methods:** #### `acreate` ```python acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python aformat(self) -> str ``` #### `aformat` ```python aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python format(self) -> str ``` This method is used to format the artifact data so it can be properly sent to the API when the .create() method is called. **Returns:** * The image URL. #### `format` ```python format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). 
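As a class-based counterpart to the image helper functions, here is a hedged sketch that creates an `ImageArtifact` directly and retrieves it with `get`; the URL and key are placeholders.

```python
from prefect import flow
from prefect.artifacts import ImageArtifact


@flow
def publish_and_fetch_chart() -> None:
    # Construct and persist the artifact via the class rather than the helper.
    ImageArtifact(
        image_url="https://example.com/daily-chart.png",  # placeholder URL
        key="daily-chart-class",
        description="Created via the ImageArtifact class",
    ).create()

    # Retrieve the most recent artifact stored under that key.
    latest = ImageArtifact.get("daily-chart-class")
    print(latest.id if latest else "not found")
```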
#### `get_or_create` ```python get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-assets-__init__ # `prefect.assets` *This module is empty or contains only private/internal implementations.* # core Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-assets-core # `prefect.assets.core` ## Functions ### `add_asset_metadata` ```python add_asset_metadata(asset: str | Asset, metadata: dict[str, Any]) -> None ``` ## Classes ### `AssetProperties` Metadata properties to configure on an Asset **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Asset` Assets are objects that represent materialized data, providing a way to track lineage and dependencies. **Methods:** #### `add_metadata` ```python add_metadata(self, metadata: dict[str, Any]) -> None ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # materialize Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-assets-materialize # `prefect.assets.materialize` ## Functions ### `materialize` ```python materialize(*assets: Union[str, Asset], **task_kwargs: Unpack[TaskOptions]) -> Callable[[Callable[P, R]], MaterializingTask[P, R]] ``` Decorator for materializing assets. **Args:** * `*assets`: Assets to materialize * `by`: An optional tool that is ultimately responsible for materializing the asset e.g. "dbt" or "spark" * `**task_kwargs`: Additional task configuration # automations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-automations # `prefect.automations` ## Classes ### `Automation` **Methods:** #### `acreate` ```python acreate(self: Self) -> Self ``` Asynchronously create a new automation. Examples: ```python auto_to_create = Automation( name="woodchonk", trigger=EventTrigger( expect={"animal.walked"}, match={ "genus": "Marmota", "species": "monax", }, posture="Reactive", threshold=3, within=timedelta(seconds=10), ), actions=[CancelFlowRun()] ) created_automation = await auto_to_create.acreate() ``` #### `adelete` ```python adelete(self: Self) -> bool ``` Asynchronously delete an automation. Examples: ```python auto = Automation.read(id = 123) await auto.adelete() ``` #### `adisable` ```python adisable(self: Self) -> bool ``` Asynchronously disable an automation. 
**Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be disabled Example: ```python auto = await Automation.aread(id = 123) await auto.adisable() ``` #### `aenable` ```python aenable(self: Self) -> bool ``` Asynchronously enable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be enabled Example: ```python auto = await Automation.aread(id = 123) await auto.aenable() ``` #### `aread` ```python aread(cls, id: UUID, name: Optional[str] = ...) -> Self ``` #### `aread` ```python aread(cls, id: None = None, name: str = ...) -> Self ``` #### `aread` ```python aread(cls, id: Optional[UUID] = None, name: Optional[str] = None) -> Self ``` Asynchronously read an automation by ID or name. Examples: ```python automation = await Automation.aread(name="woodchonk") ``` ```python automation = await Automation.aread(id=UUID("b3514963-02b1-47a5-93d1-6eeb131041cb")) ``` #### `aupdate` ```python aupdate(self: Self) -> None ``` Asynchronously update an existing automation. Examples: ```python auto = await Automation.aread(id=123) auto.name = "new name" await auto.aupdate() ``` #### `create` ```python create(self: Self) -> Self ``` Create a new automation. Examples: ```python auto_to_create = Automation( name="woodchonk", trigger=EventTrigger( expect={"animal.walked"}, match={ "genus": "Marmota", "species": "monax", }, posture="Reactive", threshold=3, within=timedelta(seconds=10), ), actions=[CancelFlowRun()] ) created_automation = auto_to_create.create() ``` #### `delete` ```python delete(self: Self) -> bool ``` Delete an automation. Examples: ```python auto = Automation.read(id = 123) auto.delete() ``` #### `disable` ```python disable(self: Self) -> bool ``` Disable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be disabled Example: ```python auto = Automation.read(id = 123) auto.disable() ``` #### `enable` ```python enable(self: Self) -> bool ``` Enable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be enabled Example: ```python auto = Automation.read(id = 123) auto.enable() ``` #### `read` ```python read(cls, id: UUID, name: Optional[str] = ...) -> Self ``` #### `read` ```python read(cls, id: None = None, name: str = ...) -> Self ``` #### `read` ```python read(cls, id: Optional[UUID] = None, name: Optional[str] = None) -> Self ``` Read an automation by ID or name. Examples: ```python automation = Automation.read(name="woodchonk") ``` ```python automation = Automation.read(id=UUID("b3514963-02b1-47a5-93d1-6eeb131041cb")) ``` #### `update` ```python update(self: Self) ``` Updates an existing automation. Examples: ```python auto = Automation.read(id=123) auto.name = "new name" auto.update() ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-__init__ # `prefect.blocks` *This module is empty or contains only private/internal implementations.* # abstract Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-abstract # `prefect.blocks.abstract` ## Classes ### `CredentialsBlock` Stores credentials for an external system and exposes a client for interacting with that system. Can also hold config that is tightly coupled to credentials (domain, endpoint, account ID, etc.). Will often be composed with other blocks.
Parent block should rely on the client provided by a credentials block for interacting with the corresponding external system. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema.
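To make the `CredentialsBlock` contract concrete, below is a hedged sketch of a subclass for a hypothetical service; the service, its fields, and the returned client object are illustrative, with `get_client` following the interface documented in this section.

```python
from typing import Any

from prefect.blocks.abstract import CredentialsBlock


class MyServiceCredentials(CredentialsBlock):
    """Credentials for a hypothetical 'MyService' API."""

    account_id: str
    api_token: str

    def get_client(self, **kwargs: Any) -> Any:
        # A real implementation would return the service SDK's client,
        # e.g. MyServiceClient(account_id=self.account_id, token=self.api_token).
        return {"account_id": self.account_id, "token": self.api_token, **kwargs}
```

A parent block (for example, a hypothetical warehouse block) would then call `credentials.get_client()` rather than reimplementing authentication itself.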
#### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_client` ```python get_client(self, *args: Any, **kwargs: Any) -> Any ``` Returns a client for interacting with the external system. If a service offers various clients, this method can accept a `client_type` keyword argument to get the desired client within the service. #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the CredentialsBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?
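A brief usage sketch of the `load_from_ref` reference formats listed above, assuming a block of type `custom` named `my-custom-message` has already been saved (as in the surrounding examples):

```python
from prefect.blocks.core import Block

# Load by slug reference; an ID-based call can pass the raw UUID or
# {"block_document_id": <block_document_id>} instead.
block = Block.load_from_ref({"block_document_slug": "custom/my-custom-message"})
```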
#### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `NotificationError` Raised if a notification block fails to send a notification. ### `NotificationBlock` Block that represents a resource in an external system that is able to send notifications. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. 
* `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the NotificationBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` Send a notification. **Args:** * `body`: The body of the notification. * `subject`: The subject of the notification. #### `raise_on_failure` ```python raise_on_failure(self) -> Generator[None, None, None] ``` Context manager that, while active, causes the block to raise errors if it encounters a failure sending notifications. #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `JobRun` Represents a job run in an external system. Allows waiting for the job run's completion and fetching its results. **Methods:** #### `fetch_result` ```python fetch_result(self) -> T ``` Retrieve the results of the job run and return them. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the JobRun is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `wait_for_completion` ```python wait_for_completion(self) -> Logger ``` Wait for the job run to complete. ### `JobBlock` Block that represents an entity in an external service that can trigger a long running execution. 
**Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. 
**Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the JobBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? 
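Putting the pieces together, here is a hedged sketch of a `JobBlock` for a hypothetical external job service: `trigger` (documented below) returns a `JobRun` whose `wait_for_completion` and `fetch_result` follow the interface documented earlier; every name and the fake API are illustrative.

```python
from typing import Any

from prefect.blocks.abstract import JobBlock, JobRun


class ExampleJobRun(JobRun):
    """Handle for a run of a job in a hypothetical external service."""

    def __init__(self, job_id: str):
        self.job_id = job_id

    def wait_for_completion(self):
        # A real implementation would poll the external service until the
        # job reaches a terminal state.
        ...

    def fetch_result(self) -> dict[str, Any]:
        # A real implementation would retrieve the job's output here.
        return {"job_id": self.job_id, "status": "completed"}


class ExampleJobBlock(JobBlock):
    """Block for triggering jobs in the hypothetical service."""

    endpoint: str

    def trigger(self) -> ExampleJobRun:
        # A real implementation would call `self.endpoint` to start a job.
        return ExampleJobRun(job_id="job-123")
```

A caller would then do `run = ExampleJobBlock(endpoint="https://api.example.com").trigger()`, `run.wait_for_completion()`, and finally `run.fetch_result()`.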
#### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `trigger` ```python trigger(self) -> JobRun[T] ``` Triggers a job run in an external service and returns a JobRun object to track the execution of the run. ### `DatabaseBlock` An abstract block type that represents a database and provides an interface for interacting with it. Blocks that implement this interface have the option to accept credentials directly via attributes or via a nested `CredentialsBlock`. Use of a nested credentials block is recommended unless credentials are tightly coupled to database connection configuration. Implementing either sync or async context management on `DatabaseBlock` implementations is recommended. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `execute` ```python execute(self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any) -> None ``` Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. #### `execute_many` ```python execute_many(self, operation: str, seq_of_parameters: list[dict[str, Any]], **execution_kwargs: Any) -> None ``` Executes multiple operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. #### `fetch_all` ```python fetch_all(self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any) -> list[tuple[Any, ...]] ``` Fetch all results from the database. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. #### `fetch_many` ```python fetch_many(self, operation: str, parameters: dict[str, Any] | None = None, size: int | None = None, **execution_kwargs: Any) -> list[tuple[Any, ...]] ``` Fetch a limited number of results from the database. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return. * `**execution_kwargs`: Additional keyword arguments to pass to execute. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. #### `fetch_one` ```python fetch_one(self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any) -> tuple[Any, ...] ``` Fetch a single result from the database. 
**Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. 
**Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the DatabaseBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. 
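The abstract `execute`/`execute_many`/`fetch_*` interface documented above leaves the actual database driver to the implementer. As a rough illustration only (not the shipped implementation), the sketch below subclasses `DatabaseBlock` with a SQLite backend. It assumes `DatabaseBlock` is importable from `prefect.blocks.abstract`; the class name `SqliteDatabase`, the `db_path` field, and the synchronous method bodies are hypothetical, so check the base class for the exact (possibly asynchronous) method signatures before adapting it.

```python
import sqlite3
from contextlib import closing
from typing import Any

from prefect.blocks.abstract import DatabaseBlock  # assumed import path


class SqliteDatabase(DatabaseBlock):
    """Hypothetical example: run parameterized queries against a SQLite file."""

    db_path: str = ":memory:"

    def _connection(self) -> sqlite3.Connection:
        return sqlite3.connect(self.db_path)

    def execute(
        self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any
    ) -> None:
        # `with conn:` commits on success and rolls back on error
        with closing(self._connection()) as conn, conn:
            conn.execute(operation, parameters or {})

    def execute_many(
        self, operation: str, seq_of_parameters: list[dict[str, Any]], **execution_kwargs: Any
    ) -> None:
        with closing(self._connection()) as conn, conn:
            conn.executemany(operation, seq_of_parameters)

    def fetch_one(
        self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any
    ) -> tuple[Any, ...]:
        with closing(self._connection()) as conn:
            return conn.execute(operation, parameters or {}).fetchone()

    def fetch_many(
        self,
        operation: str,
        parameters: dict[str, Any] | None = None,
        size: int | None = None,
        **execution_kwargs: Any,
    ) -> list[tuple[Any, ...]]:
        with closing(self._connection()) as conn:
            cursor = conn.execute(operation, parameters or {})
            return cursor.fetchmany(size) if size is not None else cursor.fetchmany()

    def fetch_all(
        self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any
    ) -> list[tuple[Any, ...]]:
        with closing(self._connection()) as conn:
            return conn.execute(operation, parameters or {}).fetchall()
```

After saving such a block (for example `SqliteDatabase(db_path="app.db").save("app-db")`), a flow could load it by name and call `fetch_all("SELECT * FROM events")` like any other block documented here.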
#### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `ObjectStorageBlock` Block that represents a resource in an external service that can store objects. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `download_folder_to_path` ```python download_folder_to_path(self, from_folder: str, to_folder: str | Path, **download_kwargs: Any) -> Path ``` Downloads a folder from the object storage service to a path. **Args:** * `from_folder`: The path to the folder to download from. * `to_folder`: The path to download the folder to. * `**download_kwargs`: Additional keyword arguments to pass to download. **Returns:** * The path that the folder was downloaded to. #### `download_object_to_file_object` ```python download_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Any) -> BinaryIO ``` Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter. **Args:** * `from_path`: The path to download from. * `to_file_object`: The file-like object to download to. * `**download_kwargs`: Additional keyword arguments to pass to download. **Returns:** * The file-like object that the object was downloaded to. #### `download_object_to_path` ```python download_object_to_path(self, from_path: str, to_path: str | Path, **download_kwargs: Any) -> Path ``` Downloads an object from the object storage service to a path. **Args:** * `from_path`: The path to download from. * `to_path`: The path to download to. * `**download_kwargs`: Additional keyword arguments to pass to download. **Returns:** * The path that the object was downloaded to. #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. 
If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the ObjectStorageBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? 
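Taken together, the `download_*` methods documented above and the `upload_*` methods documented below make up the object-storage interface that concrete blocks implement. The sketch below is a hypothetical, minimal implementation that uses a local directory as the "store"; it assumes `ObjectStorageBlock` can be imported from `prefect.blocks.abstract`, and the class name `LocalDirectoryStorage`, the `basepath` field, and the synchronous method bodies are illustrative only (the real abstract methods may be coroutines).

```python
import shutil
from pathlib import Path
from typing import Any, BinaryIO

from prefect.blocks.abstract import ObjectStorageBlock  # assumed import path


class LocalDirectoryStorage(ObjectStorageBlock):
    """Hypothetical example block storing objects under a base folder."""

    basepath: str = "./block-storage"

    def _resolve(self, key: str) -> Path:
        # Map an object key to a path under the base folder
        return Path(self.basepath) / key

    def download_object_to_path(self, from_path: str, to_path: str | Path, **download_kwargs: Any) -> Path:
        to_path = Path(to_path)
        to_path.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(self._resolve(from_path), to_path)
        return to_path

    def download_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Any) -> BinaryIO:
        to_file_object.write(self._resolve(from_path).read_bytes())
        return to_file_object

    def download_folder_to_path(self, from_folder: str, to_folder: str | Path, **download_kwargs: Any) -> Path:
        to_folder = Path(to_folder)
        shutil.copytree(self._resolve(from_folder), to_folder, dirs_exist_ok=True)
        return to_folder

    def upload_from_path(self, from_path: str | Path, to_path: str, **upload_kwargs: Any) -> str:
        target = self._resolve(to_path)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(from_path, target)
        return to_path

    def upload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Any) -> str:
        target = self._resolve(to_path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(from_file_object.read())
        return to_path

    def upload_from_folder(self, from_folder: str | Path, to_folder: str, **upload_kwargs: Any) -> str:
        shutil.copytree(from_folder, self._resolve(to_folder), dirs_exist_ok=True)
        return to_folder
```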
#### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `upload_from_file_object` ```python upload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Any) -> str ``` Uploads an object to the object storage service from a file-like object, which can be a BytesIO object or a BufferedReader. **Args:** * `from_file_object`: The file-like object to upload from. * `to_path`: The path to upload the object to. * `**upload_kwargs`: Additional keyword arguments to pass to upload. **Returns:** * The path that the object was uploaded to. #### `upload_from_folder` ```python upload_from_folder(self, from_folder: str | Path, to_folder: str, **upload_kwargs: Any) -> str ``` Uploads a folder to the object storage service from a path. **Args:** * `from_folder`: The path to the folder to upload from. * `to_folder`: The path to upload the folder to. * `**upload_kwargs`: Additional keyword arguments to pass to upload. **Returns:** * The path that the folder was uploaded to. #### `upload_from_path` ```python upload_from_path(self, from_path: str | Path, to_path: str, **upload_kwargs: Any) -> str ``` Uploads an object from a path to the object storage service. **Args:** * `from_path`: The path to the file to upload from. * `to_path`: The path to upload the file to. * `**upload_kwargs`: Additional keyword arguments to pass to upload. **Returns:** * The path that the object was uploaded to. ### `SecretBlock` Block that represents a resource that can store and retrieve secrets. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. 
A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. 
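Several of the class-level helpers listed above (`get_block_type_name`, `get_block_type_slug`, `get_description`, `get_code_example`) are inherited from `Block` and derive their values from the subclass itself, falling back to the class name and docstring when no override is defined. A small hypothetical example, assuming the default fallbacks behave as described:

```python
from prefect.blocks.core import Block


class GreeterBlock(Block):
    """A block that stores a greeting."""

    greeting: str = "hello"


# With no explicit override, the description is parsed from the class docstring
print(GreeterBlock.get_description())      # "A block that stores a greeting."

# The block type name and slug are expected to default to values derived from the class name
print(GreeterBlock.get_block_type_name())  # expected: "GreeterBlock"
print(GreeterBlock.get_block_type_slug())  # expected: a slugified form, e.g. "greeterblock"

# No code example section in the docstring, so this is expected to return None
print(GreeterBlock.get_code_example())
```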
#### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the SecretBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `read_secret` ```python read_secret(self) -> bytes ``` Reads the configured secret from the secret storage service. **Returns:** * The secret data. #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `write_secret` ```python write_secret(self, secret_data: bytes) -> str ``` Writes secret data to the configured secret in the secret storage service. **Args:** * `secret_data`: The secret data to write. **Returns:** * The key of the secret that can be used for retrieval. # core Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-core # `prefect.blocks.core` ## Functions ### `block_schema_to_key` ```python block_schema_to_key(schema: BlockSchema) -> str ``` Defines the unique key used to lookup the Block class for a given schema. 
### `schema_extra` ```python schema_extra(schema: dict[str, Any], model: type['Block']) -> None ``` Customizes Pydantic's schema generation feature to add block-related information. ## Classes ### `InvalidBlockRegistration` Raised on attempted registration of the base Block class or a Block interface class. ### `UnknownBlockType` Raised when a block type is not found in the registry. ### `BlockNotSavedError` Raised when a given block is not saved and an operation that requires the block to be saved is attempted. ### `Block` A base class for implementing a block that wraps an external service. This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document. `_block_document_name`, `_block_document_id`, `_block_schema_id`, and `_block_type_id` are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API. Instead of the `__init__` method, a block implementation allows the definition of a `block_initialization` method that is called after initialization. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block type slug>/<block document name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. 
* `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` # fields Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-fields # `prefect.blocks.fields` *This module is empty or contains only private/internal implementations.* # notifications Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-notifications # `prefect.blocks.notifications` ## Classes ### `AbstractAppriseNotificationBlock` An abstract class for sending notifications using Apprise. **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the NotificationBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` Send a notification. **Args:** * `body`: The body of the notification. * `subject`: The subject of the notification. #### `raise_on_failure` ```python raise_on_failure(self) -> Generator[None, None, None] ``` Context manager that, while active, causes the block to raise errors if it encounters a failure sending notifications. ### `AppriseNotificationBlock` A base class for sending notifications using Apprise, through webhook URLs. **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` ### `SlackWebhook` Enables sending notifications via a provided Slack webhook. 
**Examples:** Load a saved Slack webhook and send a message: ```python from prefect.blocks.notifications import SlackWebhook slack_webhook_block = SlackWebhook.load("BLOCK_NAME") slack_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `notify` ```python notify(self, body: str, subject: str | None = None) ``` ### `MicrosoftTeamsWebhook` Enables sending notifications via a provided Microsoft Teams webhook. **Examples:** Load a saved Teams webhook and send a message: ```python from prefect.blocks.notifications import MicrosoftTeamsWebhook teams_webhook_block = MicrosoftTeamsWebhook.load("BLOCK_NAME") teams_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` see [https://github.com/caronc/apprise/pull/1172](https://github.com/caronc/apprise/pull/1172) #### `notify` ```python notify(self, body: str, subject: str | None = None) ``` ### `PagerDutyWebHook` Enables sending notifications via a provided PagerDuty webhook. See [Apprise notify\_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty) for more info on formatting the URL. **Examples:** Load a saved PagerDuty webhook and send a message: ```python from prefect.blocks.notifications import PagerDutyWebHook pagerduty_webhook_block = PagerDutyWebHook.load("BLOCK_NAME") pagerduty_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) ``` Apprise will combine subject and body by default, so we need to move the body into the custom\_details field. custom\_details is part of the webhook url, so we need to update the url and restart the client. #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` ### `TwilioSMS` Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms). **Examples:** Load a saved `TwilioSMS` block and send a message: ```python from prefect.blocks.notifications import TwilioSMS twilio_webhook_block = TwilioSMS.load("BLOCK_NAME") twilio_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` ### `OpsgenieWebhook` Enables sending notifications via a provided Opsgenie webhook. See [Apprise notify\_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie) for more info on formatting the URL. **Examples:** Load a saved Opsgenie webhook and send a message: ```python from prefect.blocks.notifications import OpsgenieWebhook opsgenie_webhook_block = OpsgenieWebhook.load("BLOCK_NAME") opsgenie_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` ### `MattermostWebhook` Enables sending notifications via a provided Mattermost webhook. 
See the [Apprise notify\_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost). **Examples:** Load a saved Mattermost webhook and send a message: ```python from prefect.blocks.notifications import MattermostWebhook mattermost_webhook_block = MattermostWebhook.load("BLOCK_NAME") mattermost_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` ### `DiscordWebhook` Enables sending notifications via a provided Discord webhook. See the [Apprise notify\_Discord docs](https://github.com/caronc/apprise/wiki/Notify_Discord). **Examples:** Load a saved Discord webhook and send a message: ```python from prefect.blocks.notifications import DiscordWebhook discord_webhook_block = DiscordWebhook.load("BLOCK_NAME") discord_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` ### `CustomWebhookNotificationBlock` Enables sending notifications via any custom webhook. Any nested string parameter that contains `{{key}}` will have that placeholder substituted with the corresponding value from the context or secrets. Context values include `subject`, `body`, and `name`. **Examples:** Load a saved custom webhook and send a message: ```python from prefect.blocks.notifications import CustomWebhookNotificationBlock custom_webhook_block = CustomWebhookNotificationBlock.load("BLOCK_NAME") custom_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `logger` ```python logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the NotificationBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` Send a notification. **Args:** * `body`: The body of the notification. * `subject`: The subject of the notification. #### `raise_on_failure` ```python raise_on_failure(self) -> Generator[None, None, None] ``` Context manager that, while active, causes the block to raise errors if it encounters a failure sending notifications. ### `SendgridEmail` Enables sending notifications via a SendGrid account.
See the [Apprise Notify\_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid). **Examples:** Load a saved SendGrid email block and send an email message: ```python from prefect.blocks.notifications import SendgridEmail sendgrid_block = SendgridEmail.load("BLOCK_NAME") sendgrid_block.notify("Hello from Prefect!") ``` **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) ``` #### `notify` ```python notify(self, body: str, subject: str | None = None) -> None ``` # redis Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-redis # `prefect.blocks.redis` ## Classes ### `RedisStorageContainer` Block used to interact with Redis as a filesystem. **Methods:** #### `block_initialization` ```python block_initialization(self) -> None ``` #### `from_connection_string` ```python from_connection_string(cls, connection_string: str | SecretStr) -> Self ``` Create a block from a Redis connection string. Supports the following URL schemes: * `redis://` creates a TCP socket connection * `rediss://` creates an SSL-wrapped TCP socket connection * `unix://` creates a Unix domain socket connection See the [Redis docs](https://redis.readthedocs.io/en/stable/examples/connection_examples.html#Connecting-to-Redis-instances-by-specifying-a-URL-scheme) for more info. **Args:** * `connection_string`: Redis connection string **Returns:** * `RedisStorageContainer` instance #### `from_host` ```python from_host(cls, host: str, port: int = 6379, db: int = 0, username: None | str | SecretStr = None, password: None | str | SecretStr = None) -> Self ``` Create a block from a hostname, username, and password. **Args:** * `host`: Redis hostname * `port`: Redis port * `db`: Redis database index * `username`: Redis username * `password`: Redis password **Returns:** * `RedisStorageContainer` instance #### `read_path` ```python read_path(self, path: Path | str) ``` Read the Redis content at `path`. **Args:** * `path`: Redis key to read from **Returns:** * Contents at key as bytes #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `write_path` ```python write_path(self, path: Path | str, content: bytes) ``` Write `content` to Redis at `path`. **Args:** * `path`: Redis key to write to * `content`: Binary object to write #### `write_path` ```python write_path(self, path: str, content: bytes) -> None ``` # system Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-system # `prefect.blocks.system` ## Classes ### `JSON` A block that represents JSON. Deprecated, please use Variables to store JSON data instead. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error.
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. 
Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `String` A block that represents a string. Deprecated, please use Variables to store string data instead. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. 
Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `DateTime` A block that represents a datetime. Deprecated, please use Variables to store datetime data instead. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. 
Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `Secret` A block that represents a secret value. The value stored in this block will be obfuscated when this block is viewed or edited in the UI. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get` ```python get(self) -> T | str ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. 
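To make the `Secret` block's behavior concrete, here is a minimal sketch of saving a secret and reading it back with the `get` method documented above. The block document name and the token value are placeholders, and `overwrite=True` is only needed if a block document with that name already exists:

```python
from prefect.blocks.system import Secret

# Store a secret value as a block document (name and value are illustrative)
Secret(value="my-api-token").save("demo-api-token", overwrite=True)

# Later, load the block document and retrieve the underlying value
secret = Secret.load("demo-api-token")
print(secret.get())
```

The value is obfuscated in the UI, but `get()` returns the plain value for use in your code.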
#### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_value` ```python validate_value(cls, value: Union[T, SecretStr, PydanticSecret[T]]) -> Union[SecretStr, PydanticSecret[T]] ``` # webhook Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-blocks-webhook # `prefect.blocks.webhook` ## Classes ### `Webhook` Block that enables calling webhooks. **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `call` ```python call(self, payload: dict[str, Any] | str | None = None) -> Response ``` Call the webhook. **Args:** * `payload`: an optional payload to send when calling the webhook. #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. 
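As a complement to the `call` method documented above, the following is a hedged sketch of configuring, saving, and invoking a `Webhook` block. The URL, block document name, and payload are placeholders, and depending on your Prefect version and whether you are in an async context, you may need to `await` the `call` invocation:

```python
from prefect.blocks.webhook import Webhook

# Configure and persist a webhook block (URL and name are illustrative)
webhook = Webhook(method="POST", url="https://example.com/hooks/notify")
webhook.save("demo-notify-webhook", overwrite=True)

# Load the saved block elsewhere and call it with an optional payload
loaded = Webhook.load("demo-notify-webhook")
response = loaded.call(payload={"message": "run complete"})  # may need `await` in async code
print(response.status_code)
```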
#### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. 
**Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` # cache_policies Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cache_policies # `prefect.cache_policies` ## Classes ### `CachePolicy` Base class for all cache policies. **Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. #### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. ### `CacheKeyFnPolicy` This policy accepts a custom function with signature f(task\_run\_context, task\_parameters, flow\_parameters) -> str and uses it to compute a task run cache key. **Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. 
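To illustrate how a custom key function and `configure` fit together, here is a minimal sketch that keys the cache on a single task parameter and stores cache keys in a local directory; `CachePolicy.from_cache_key_fn` (documented below) is an equivalent way to build the policy. The key function, parameter names, and storage path are illustrative:

```python
from prefect import task
from prefect.cache_policies import CacheKeyFnPolicy


def key_on_customer_id(task_run_context, parameters):
    # Hypothetical key function: ignore everything except `customer_id`
    return str(parameters.get("customer_id"))


# Build a policy from the custom key function, then point key storage
# at a local directory (the path is illustrative)
policy = CacheKeyFnPolicy(cache_key_fn=key_on_customer_id).configure(
    key_storage="/tmp/prefect-cache-keys"
)


@task(cache_policy=policy)
def fetch_profile(customer_id: int, refresh: bool = False) -> dict:
    return {"id": customer_id}
```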
#### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. ### `CompoundCachePolicy` This policy is constructed from two or more other cache policies and works by computing the keys for each policy individually, and then hashing a sorted tuple of all computed keys. Any keys that return `None` will be ignored. **Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. #### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. ### `TaskSource` Policy for computing a cache key based on the source code of the task. This policy only considers raw lines of code in the task, and not the source code of nested tasks. **Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: Optional[dict[str, Any]], flow_parameters: Optional[dict[str, Any]], **kwargs: Any) -> Optional[str] ``` #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. #### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. ### `FlowParameters` Policy that computes the cache key based on a hash of the flow parameters. 
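A brief, hedged sketch of this policy in use: the module-level `FLOW_PARAMETERS` constant (an instance of `FlowParameters`) keys the cache on the parent flow run's parameters, so task calls in runs that share parameter values reuse cached results. The flow and task below are illustrative:

```python
from prefect import flow, task
from prefect.cache_policies import FLOW_PARAMETERS


@task(cache_policy=FLOW_PARAMETERS)
def expensive_setup() -> str:
    # Re-runs only when the flow is invoked with new parameter values
    return "prepared"


@flow
def pipeline(environment: str = "dev") -> str:
    return expensive_setup()
```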
**Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. #### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. ### `RunId` Returns either the prevailing flow run ID, or if not found, the prevailing task run ID. **Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. #### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. ### `Inputs` Policy that computes a cache key based on a hash of the runtime inputs provided to the task.. 
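As a quick, hedged illustration, the module-level `INPUTS` constant (an instance of `Inputs`) caches a task result keyed on a hash of its call arguments, so repeating a call with the same arguments reuses the cached value. The task below is illustrative:

```python
from prefect import task
from prefect.cache_policies import INPUTS


@task(cache_policy=INPUTS)
def add(x: int, y: int) -> int:
    # A second call with the same x and y hits the cache instead of recomputing
    return x + y
```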
**Methods:** #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `compute_key` ```python compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str] ``` #### `configure` ```python configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self ``` Configure the cache policy with the given key storage, lock manager, and isolation level. **Args:** * `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used. * `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used. * `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used. **Returns:** * A new cache policy with the given key storage, lock manager, and isolation level. #### `from_cache_key_fn` ```python from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy' ``` Given a function generates a key policy. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-__init__ # `prefect.cli` *This module is empty or contains only private/internal implementations.* # artifact Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-artifact # `prefect.cli.artifact` ## Functions ### `list_artifacts` ```python list_artifacts(limit: int = typer.Option(100, '--limit', help='The maximum number of artifacts to return.'), all: bool = typer.Option(False, '--all', '-a', help='Whether or not to only return the latest version of each artifact.')) ``` List artifacts. ### `inspect` ```python inspect(key: str, limit: int = typer.Option(10, '--limit', help='The maximum number of artifacts to return.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` View details about an artifact. **Args:** * `key`: the key of the artifact to inspect **Examples:** `$ prefect artifact inspect "my-artifact"` ```json [ { 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc', 'created': '2023-03-21T21:40:09.895910+00:00', 'updated': '2023-03-21T21:40:09.895910+00:00', 'key': 'my-artifact', 'type': 'markdown', 'description': None, 'data': 'my markdown', 'metadata_': None, 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98', 'task_run_id': None }, { 'id': '57f235b5-2576-45a5-bd93-c829c2900966', 'created': '2023-03-27T23:16:15.536434+00:00', 'updated': '2023-03-27T23:16:15.536434+00:00', 'key': 'my-artifact', 'type': 'markdown', 'description': 'my-artifact-description', 'data': 'my markdown', 'metadata_': None, 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29', 'task_run_id': None } ] ``` ### `delete` ```python delete(key: Optional[str] = typer.Argument(None, help='The key of the artifact to delete.'), artifact_id: Optional[UUID] = typer.Option(None, '--id', help='The ID of the artifact to delete.')) ``` Delete an artifact. **Args:** * `key`: the key of the artifact to delete **Examples:** `$ prefect artifact delete "my-artifact"` # block Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-block # `prefect.cli.block` Command line interface for working with blocks. 
## Functions ### `display_block` ```python display_block(block_document: 'BlockDocument') -> Table ``` ### `display_block_type` ```python display_block_type(block_type: 'BlockType') -> Table ``` ### `display_block_schema_properties` ```python display_block_schema_properties(block_schema_fields: dict[str, Any]) -> Table ``` ### `display_block_schema_extra_definitions` ```python display_block_schema_extra_definitions(block_schema_definitions: dict[str, Any]) -> Table ``` ### `register` ```python register(module_name: Optional[str] = typer.Option(None, '--module', '-m', help='Python module containing block types to be registered'), file_path: Optional[Path] = typer.Option(None, '--file', '-f', help='Path to .py file containing block types to be registered')) ``` Register block types within a module or file. This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition. Examples: Register block types in a Python module: $ prefect block register -m prefect_aws.credentials Register block types in a .py file: $ prefect block register -f my\_blocks.py ### `block_ls` ```python block_ls() ``` View all configured blocks. ### `block_delete` ```python block_delete(slug: Optional[str] = typer.Argument(None, help="A block slug. Formatted as '/'"), block_id: Optional[UUID] = typer.Option(None, '--id', help='A block id.')) ``` Delete a configured block. ### `block_create` ```python block_create(block_type_slug: str = typer.Argument(..., help='A block type slug. View available types with: prefect block type ls', show_default=False)) ``` Generate a link to the Prefect UI to create a block. ### `block_inspect` ```python block_inspect(slug: Optional[str] = typer.Argument(None, help='A Block slug: /'), block_id: Optional[UUID] = typer.Option(None, '--id', help='A Block id to search for if no slug is given')) ``` Displays details about a configured block. ### `list_types` ```python list_types() ``` List all block types. ### `blocktype_inspect` ```python blocktype_inspect(slug: str = typer.Argument(..., help='A block type slug')) ``` Display details about a block type. ### `blocktype_delete` ```python blocktype_delete(slug: str = typer.Argument(..., help='A Block type slug')) ``` Delete an unprotected Block Type. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-cloud-__init__ # `prefect.cli.cloud` Command line interface for interacting with Prefect Cloud ## Functions ### `set_login_api_ready_event` ```python set_login_api_ready_event() -> None ``` ### `lifespan` ```python lifespan(app: FastAPI) ``` ### `receive_login` ```python receive_login(payload: LoginSuccess) -> None ``` ### `receive_failure` ```python receive_failure(payload: LoginFailed) -> None ``` ### `serve_login_api` ```python serve_login_api(cancel_scope: anyio.CancelScope, task_status: anyio.abc.TaskStatus[uvicorn.Server]) -> None ``` ### `confirm_logged_in` ```python confirm_logged_in() -> None ``` ### `get_current_workspace` ```python get_current_workspace(workspaces: Iterable[Workspace]) -> Workspace | None ``` ### `prompt_select_from_list` ```python prompt_select_from_list(console: Console, prompt: str, options: list[str] | list[tuple[T, str]]) -> str | T ``` Given a list of options, display the values to the user in a table and prompt them to select one. **Args:** * `options`: A list of options to present to the user. A list of tuples can be passed as key value pairs.
If a value is chosen, the key will be returned. **Returns:** * the selected option ### `login_with_browser` ```python login_with_browser() -> str ``` Perform login using the browser. On failure, this function will exit the process. On success, it will return an API key. ### `check_key_is_valid_for_login` ```python check_key_is_valid_for_login(key: str) -> bool ``` Attempt to use a key to see if it is valid ### `login` ```python login(key: Optional[str] = typer.Option(None, '--key', '-k', help='API Key to authenticate with Prefect'), workspace_handle: Optional[str] = typer.Option(None, '--workspace', '-w', help="Full handle of workspace, in format '/'")) ``` Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT\_API\_KEY. Uses a previously configured profile if it exists. ### `logout` ```python logout() ``` Logout the current workspace. Reset PREFECT\_API\_KEY and PREFECT\_API\_URL to default. ### `open` ```python open() ``` Open the Prefect Cloud UI in the browser. ### `ls` ```python ls() ``` List available workspaces. ### `set` ```python set(workspace_handle: str = typer.Option(None, '--workspace', '-w', help="Full handle of workspace, in format '/'")) ``` Set current workspace. Shows a workspace picker if no workspace is specified. ## Classes ### `LoginSuccess` ### `LoginFailed` ### `LoginResult` ### `ServerExit` # ip_allowlist Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-cloud-ip_allowlist # `prefect.cli.cloud.ip_allowlist` ## Functions ### `require_access_to_ip_allowlisting` ```python require_access_to_ip_allowlisting(ctx: typer.Context) -> None ``` Enforce access to IP allowlisting for all subcommands. ### `enable` ```python enable(ctx: typer.Context) -> None ``` Enable the IP allowlist for your account. When enabled, if the allowlist is non-empty, then access to your Prefect Cloud account will be restricted to only those IP addresses on the allowlist. ### `disable` ```python disable() ``` Disable the IP allowlist for your account. When disabled, all IP addresses will be allowed to access your Prefect Cloud account. ### `ls` ```python ls(ctx: typer.Context) ``` Fetch and list all IP allowlist entries in your account. ### `parse_ip_network_argument` ```python parse_ip_network_argument(val: str) -> IPNetworkArg ``` ### `add` ```python add(ctx: typer.Context, ip_address_or_range: IP_ARGUMENT, description: Optional[str] = typer.Option(None, '--description', '-d', help='A short description to annotate the entry with.')) ``` Add a new IP entry to your account IP allowlist. ### `remove` ```python remove(ctx: typer.Context, ip_address_or_range: IP_ARGUMENT) ``` Remove an IP entry from your account IP allowlist. ### `toggle` ```python toggle(ctx: typer.Context, ip_address_or_range: IP_ARGUMENT) ``` Toggle the enabled status of an individual IP entry in your account IP allowlist. ## Classes ### `IPNetworkArg` # webhook Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-cloud-webhook # `prefect.cli.cloud.webhook` Command line interface for working with webhooks ## Functions ### `ls` ```python ls() ``` Fetch and list all webhooks in your workspace ### `get` ```python get(webhook_id: UUID) ``` Retrieve a webhook by ID. 
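As an illustrative sketch of how these commands fit together, the session below authenticates with Prefect Cloud and then lists webhooks in the current workspace; the API key, workspace handle, and webhook ID are placeholders rather than values from this reference.

```bash
# Authenticate and select a workspace (values are placeholders)
prefect cloud login --key "<your-api-key>" --workspace "<account-handle>/<workspace-handle>"

# List webhooks in the current workspace, then fetch one by ID
prefect cloud webhook ls
prefect cloud webhook get "<webhook-id>"
```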
### `create` ```python create(webhook_name: str, description: str = typer.Option('', '--description', '-d', help='Description of the webhook'), template: str = typer.Option(None, '--template', '-t', help='Jinja2 template expression')) ``` Create a new Cloud webhook ### `rotate` ```python rotate(webhook_id: UUID) ``` Rotate url for an existing Cloud webhook, in case it has been compromised ### `toggle` ```python toggle(webhook_id: UUID) ``` Toggle the enabled status of an existing Cloud webhook ### `update` ```python update(webhook_id: UUID, webhook_name: str = typer.Option(None, '--name', '-n', help='Webhook name'), description: str = typer.Option(None, '--description', '-d', help='Description of the webhook'), template: str = typer.Option(None, '--template', '-t', help='Jinja2 template expression')) ``` Partially update an existing Cloud webhook ### `delete` ```python delete(webhook_id: UUID) ``` Delete an existing Cloud webhook # concurrency_limit Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-concurrency_limit # `prefect.cli.concurrency_limit` Command line interface for working with concurrency limits. ## Functions ### `create` ```python create(tag: str, concurrency_limit: int) ``` Create a concurrency limit against a tag. This limit controls how many task runs with that tag may simultaneously be in a Running state. ### `inspect` ```python inspect(tag: str, output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs which are currently using a concurrency slot. ### `ls` ```python ls(limit: int = 15, offset: int = 0) ``` View all concurrency limits. ### `reset` ```python reset(tag: str) ``` Resets the concurrency limit slots set on the specified tag. ### `delete` ```python delete(tag: str) ``` Delete the concurrency limit set on the specified tag. # config Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-config # `prefect.cli.config` Command line interface for working with profiles ## Functions ### `set_` ```python set_(settings: list[str]) ``` Change the value for a setting by setting the value in the current profile. ### `validate` ```python validate() ``` Read and validate the current profile. Deprecated settings will be automatically converted to new names unless both are set. ### `unset` ```python unset(setting_names: list[str], confirm: bool = typer.Option(False, '--yes', '-y')) ``` Restore the default value for a setting. Removes the setting from the current profile. ### `view` ```python view(show_defaults: bool = typer.Option(False, '--show-defaults/--hide-defaults', help=show_defaults_help), show_sources: bool = typer.Option(True, '--show-sources/--hide-sources', help=show_sources_help), show_secrets: bool = typer.Option(False, '--show-secrets/--hide-secrets', help='Toggle display of secrets setting values.')) ``` Display the current settings. # dashboard Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-dashboard # `prefect.cli.dashboard` ## Functions ### `open` ```python open() -> None ``` Open the Prefect UI in the browser. # deployment Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-deployment # `prefect.cli.deployment` Command line interface for working with deployments. 
## Functions ### `str_presenter` ```python str_presenter(dumper: yaml.Dumper | yaml.representer.SafeRepresenter, data: str) -> yaml.ScalarNode ``` configures yaml for dumping multiline strings Ref: [https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data](https://stackoverflow.com/questions/8640959/how-can-i-control-what-scalar-form-pyyaml-uses-for-my-data) ### `assert_deployment_name_format` ```python assert_deployment_name_format(name: str) -> None ``` ### `get_deployment` ```python get_deployment(client: 'PrefectClient', name: str | None, deployment_id: str | None) -> DeploymentResponse ``` ### `create_work_queue_and_set_concurrency_limit` ```python create_work_queue_and_set_concurrency_limit(work_queue_name: str, work_pool_name: str | None, work_queue_concurrency: int | None) -> None ``` ### `check_work_pool_exists` ```python check_work_pool_exists(work_pool_name: str | None, client: 'PrefectClient | None' = None) ``` ### `inspect` ```python inspect(name: str, output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` View details about a deployment. **Examples:** `$ prefect deployment inspect "hello-world/my-deployment"` ```python { 'id': '610df9c3-0fb4-4856-b330-67f588d20201', 'created': '2022-08-01T18:36:25.192102+00:00', 'updated': '2022-08-01T18:36:25.188166+00:00', 'name': 'my-deployment', 'description': None, 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e', 'schedules': None, 'parameters': {'name': 'Marvin'}, 'tags': ['test'], 'parameter_openapi_schema': { 'title': 'Parameters', 'type': 'object', 'properties': { 'name': { 'title': 'name', 'type': 'string' } }, 'required': ['name'] }, 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32', 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028', 'infrastructure': { 'type': 'process', 'env': {}, 'labels': {}, 'name': None, 'command': ['python', '-m', 'prefect.engine'], 'stream_output': True } } ``` ### `create_schedule` ```python create_schedule(name: str, interval: Optional[float] = typer.Option(None, '--interval', help='An interval to schedule on, specified in seconds', min=0.0001), interval_anchor: Optional[str] = typer.Option(None, '--anchor-date', help='The anchor date for an interval schedule'), rrule_string: Optional[str] = typer.Option(None, '--rrule', help='Deployment schedule rrule string'), cron_string: Optional[str] = typer.Option(None, '--cron', help='Deployment schedule cron string'), cron_day_or: bool = typer.Option(True, '--day_or', help='Control how croniter handles `day` and `day_of_week` entries'), timezone: Optional[str] = typer.Option(None, '--timezone', help="Deployment schedule timezone string e.g. 'America/New_York'"), active: bool = typer.Option(True, '--active', help='Whether the schedule is active. Defaults to True.'), replace: Optional[bool] = typer.Option(False, '--replace', help="Replace the deployment's current schedule(s) with this new schedule."), assume_yes: Optional[bool] = typer.Option(False, '--accept-yes', '-y', help='Accept the confirmation prompt without prompting')) ``` Create a schedule for a given deployment. ### `delete_schedule` ```python delete_schedule(deployment_name: str, schedule_id: UUID, assume_yes: bool = typer.Option(False, '--accept-yes', '-y', help='Accept the confirmation prompt without prompting')) ``` Delete a deployment schedule. 
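A brief, illustrative use of the schedule commands above; the deployment name, cron expression, and schedule ID are placeholders.

```bash
# Add a weekday cron schedule to an existing deployment
prefect deployment schedule create my-flow/my-deployment --cron "0 9 * * 1-5" --timezone "America/New_York"

# Delete a schedule by its ID (IDs are shown by `prefect deployment schedule ls`)
prefect deployment schedule delete my-flow/my-deployment <schedule-id>
```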
### `pause_schedule` ```python pause_schedule(deployment_name: Optional[str] = typer.Argument(None), schedule_id: Optional[UUID] = typer.Argument(None), _all: bool = typer.Option(False, '--all', help='Pause all deployment schedules')) ``` Pause deployment schedules. **Examples:** Pause a specific schedule: \$ prefect deployment schedule pause my-flow/my-deployment abc123-... Pause all schedules: \$ prefect deployment schedule pause --all ### `resume_schedule` ```python resume_schedule(deployment_name: Optional[str] = typer.Argument(None), schedule_id: Optional[UUID] = typer.Argument(None), _all: bool = typer.Option(False, '--all', help='Resume all deployment schedules')) ``` Resume deployment schedules. **Examples:** Resume a specific schedule: \$ prefect deployment schedule resume my-flow/my-deployment abc123-... Resume all schedules: \$ prefect deployment schedule resume --all ### `list_schedules` ```python list_schedules(deployment_name: str) ``` View all schedules for a deployment. ### `clear_schedules` ```python clear_schedules(deployment_name: str, assume_yes: bool = typer.Option(False, '--accept-yes', '-y', help='Accept the confirmation prompt without prompting')) ``` Clear all schedules for a deployment. ### `ls` ```python ls(flow_name: Optional[list[str]] = None, by_created: bool = False) ``` View all deployments or deployments for specific flows. ### `run` ```python run(name: Optional[str] = typer.Argument(None, help="A deployed flow's name: /"), deployment_id: Optional[str] = typer.Option(None, '--id', help='A deployment id to search for if no name is given'), job_variables: list[str] = typer.Option(None, '-jv', '--job-variable', help='A key, value pair (key=value) specifying a flow run job variable. The value will be interpreted as JSON. May be passed multiple times to specify multiple job variable values.'), params: list[str] = typer.Option(None, '-p', '--param', help='A key, value pair (key=value) specifying a flow parameter. The value will be interpreted as JSON. May be passed multiple times to specify multiple parameter values.'), multiparams: Optional[str] = typer.Option(None, '--params', help="A mapping of parameters to values. To use a stdin, pass '-'. Any parameters passed with `--param` will take precedence over these values."), start_in: Optional[str] = typer.Option(None, '--start-in', help="A human-readable string specifying a time interval to wait before starting the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'."), start_at: Optional[str] = typer.Option(None, '--start-at', help="A human-readable string specifying a time to start the flow run. E.g. 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'."), tags: list[str] = typer.Option(None, '--tag', help='Tag(s) to be applied to flow run.'), watch: bool = typer.Option(False, '--watch', help='Whether to poll the flow run until a terminal state is reached.'), watch_interval: Optional[int] = typer.Option(None, '--watch-interval', help='How often to poll the flow run for state changes (in seconds).'), watch_timeout: Optional[int] = typer.Option(None, '--watch-timeout', help='Timeout for --watch.'), flow_run_name: Optional[str] = typer.Option(None, '--flow-run-name', help='Custom name to give the flow run.')) ``` Create a flow run for the given flow and deployment. The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the `--watch` flag. 
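For orientation, a brief, illustrative invocation of the `run` command described above; the deployment name and parameter value are placeholders.

```bash
# Trigger a run now, pass a JSON-interpreted parameter, and watch it to completion
prefect deployment run "hello-world/my-deployment" --param name=Marvin --watch

# Or schedule the run to start later
prefect deployment run "hello-world/my-deployment" --start-in "10 minutes"
```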
### `delete` ```python delete(name: Optional[str] = typer.Argument(None, help="A deployed flow's name: /"), deployment_id: Optional[UUID] = typer.Option(None, '--id', help='A deployment id to search for if no name is given'), _all: bool = typer.Option(False, '--all', help='Delete all deployments')) ``` Delete a deployment. **Examples:** ```bash $ prefect deployment delete test_flow/test_deployment $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6 ``` ## Classes ### `RichTextIO` **Methods:** #### `write` ```python write(self, content: str) -> None ``` # dev Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-dev # `prefect.cli.dev` Command line interface for working with Prefect Server ## Functions ### `exit_with_error_if_not_editable_install` ```python exit_with_error_if_not_editable_install() -> None ``` ### `build_docs` ```python build_docs(schema_path: Optional[str] = None) ``` Builds REST API reference documentation for static display. ### `build_ui` ```python build_ui(no_install: bool = False) ``` ### `ui` ```python ui() ``` Starts a hot-reloading development UI. ### `api` ```python api(host: str = SettingsOption(PREFECT_SERVER_API_HOST), port: int = SettingsOption(PREFECT_SERVER_API_PORT), log_level: str = 'DEBUG', services: bool = True) ``` Starts a hot-reloading development API. ### `start` ```python start(exclude_api: bool = typer.Option(False, '--no-api'), exclude_ui: bool = typer.Option(False, '--no-ui')) ``` Starts a hot-reloading development server with API, UI, and agent processes. Each service has an individual command if you wish to start them separately. Each service can be excluded here as well. ### `build_image` ```python build_image(arch: str = typer.Option(None, help=f'The architecture to build the container for. Defaults to the architecture of the host Python. [default: {platform.machine()}]'), python_version: str = typer.Option(None, help=f'The Python version to build the container for. Defaults to the version of the host Python. [default: {python_version_minor()}]'), flavor: str = typer.Option(None, help="An alternative flavor to build, for example 'conda'. Defaults to the standard Python base image"), dry_run: bool = False) ``` Build a docker image for development. ### `container` ```python container(bg: bool = False, name = 'prefect-dev', api: bool = True, tag: Optional[str] = None) ``` Run a docker container with local code mounted and installed. # events Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-events # `prefect.cli.events` ## Functions ### `stream` ```python stream(format: StreamFormat = typer.Option(StreamFormat.json, '--format', help='Output format (json or text)'), output_file: str = typer.Option(None, '--output-file', help='File to write events to'), account: bool = typer.Option(False, '--account', help='Stream events for entire account, including audit logs'), run_once: bool = typer.Option(False, '--run-once', help='Stream only one event')) ``` Subscribes to the event stream of a workspace, printing each event as it is received. By default, events are printed as JSON, but can be printed as text by passing `--format text`. ### `handle_event` ```python handle_event(event: Event, format: StreamFormat, output_file: str) -> None ``` ### `handle_error` ```python handle_error(exc: Exception) -> None ``` ## Classes ### `StreamFormat` # flow Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-flow # `prefect.cli.flow` Command line interface for working with flows. 
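Before the function reference, a brief, illustrative example of serving a flow directly from the CLI; the entrypoint path, deployment name, and schedule are placeholders (see the `serve` entry below for the full set of options).

```bash
# Serve a flow from a local entrypoint and give the created deployment a cron schedule
prefect flow serve ./flows/etl.py:daily_etl --name daily-etl --cron "0 6 * * *"
```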
## Functions ### `ls` ```python ls(limit: int = 15) ``` View flows. ### `serve` ```python serve(entrypoint: str = typer.Argument(..., help='The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py:flow_func_name`.'), name: str = typer.Option(..., '--name', '-n', help='The name to give the deployment created for the flow.'), description: Optional[str] = typer.Option(None, '--description', '-d', help="The description to give the created deployment. If not provided, the description will be populated from the flow's description."), version: Optional[str] = typer.Option(None, '-v', '--version', help='A version to give the created deployment.'), tags: Optional[List[str]] = typer.Option(None, '-t', '--tag', help='One or more optional tags to apply to the created deployment.'), cron: Optional[str] = typer.Option(None, '--cron', help='A cron string that will be used to set a schedule for the created deployment.'), interval: Optional[int] = typer.Option(None, '--interval', help='An integer specifying an interval (in seconds) between scheduled runs of the flow.'), interval_anchor: Optional[str] = typer.Option(None, '--anchor-date', help='The start date for an interval schedule.'), rrule: Optional[str] = typer.Option(None, '--rrule', help='An RRule that will be used to set a schedule for the created deployment.'), timezone: Optional[str] = typer.Option(None, '--timezone', help="Timezone to use for scheduling flow runs e.g. 'America/New_York'"), pause_on_shutdown: bool = typer.Option(True, help='If set, provided schedule will be paused when the serve command is stopped. If not set, the schedules will continue running.'), limit: Optional[int] = typer.Option(None, help='The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use global_limit.'), global_limit: Optional[int] = typer.Option(None, help='The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment.')) ``` Serve a flow via an entrypoint. # flow_run Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-flow_run # `prefect.cli.flow_run` Command line interface for working with flow runs ## Functions ### `inspect` ```python inspect(id: UUID, web: bool = typer.Option(False, '--web', help='Open the flow run in a web browser.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` View details about a flow run. ### `ls` ```python ls(flow_name: List[str] = typer.Option(None, help='Name of the flow'), limit: int = typer.Option(15, help='Maximum number of flow runs to list'), state: List[str] = typer.Option(None, help="Name of the flow run's state"), state_type: List[str] = typer.Option(None, help="Type of the flow run's state")) ``` View recent flow runs or flow runs for specific flows. **Args:** * `flow_name`: Name of the flow. * `limit`: Maximum number of flow runs to list. Defaults to 15. * `state`: Name of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'. * `state_type`: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', and 'PAUSED'.
**Examples:** \$ prefect flow-run ls --state Running \$ prefect flow-run ls --state Running --state late \$ prefect flow-run ls --state-type RUNNING \$ prefect flow-run ls --state-type RUNNING --state-type FAILED ### `delete` ```python delete(id: UUID) ``` Delete a flow run by ID. ### `cancel` ```python cancel(id: UUID) ``` Cancel a flow run by ID. ### `logs` ```python logs(id: UUID, head: bool = typer.Option(False, '--head', '-h', help=f'Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of all logs.'), num_logs: int = typer.Option(None, '--num-logs', '-n', help=f'Number of logs to show when using the --head or --tail flag. If None, defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.', min=1), reverse: bool = typer.Option(False, '--reverse', '-r', help='Reverse the logs order to print the most recent logs first'), tail: bool = typer.Option(False, '--tail', '-t', help=f'Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of all logs.')) ``` View logs for a flow run. ### `execute` ```python execute(id: Optional[UUID] = typer.Argument(None, help='ID of the flow run to execute')) ``` # global_concurrency_limit Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-global_concurrency_limit # `prefect.cli.global_concurrency_limit` ## Functions ### `list_global_concurrency_limits` ```python list_global_concurrency_limits() ``` List all global concurrency limits. ### `inspect_global_concurrency_limit` ```python inspect_global_concurrency_limit(name: str = typer.Argument(..., help='The name of the global concurrency limit to inspect.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json'), file_path: Optional[Path] = typer.Option(None, '--file', '-f', help='Path to .json file to write the global concurrency limit output to.')) ``` Inspect a global concurrency limit. **Args:** * `name`: The name of the global concurrency limit to inspect. * `output`: An output format for the command. Currently only supports JSON. Required if --file/-f is set. * `file_path`: A path to .json file to write the global concurrency limit output to. **Returns:** * The ID of the global concurrency limit. * The created date of the global concurrency limit. * The updated date of the global concurrency limit. * The name of the global concurrency limit. * The limit of the global concurrency limit. * The number of active slots. * The slot decay per second. ### `delete_global_concurrency_limit` ```python delete_global_concurrency_limit(name: str = typer.Argument(..., help='The name of the global concurrency limit to delete.')) ``` Delete a global concurrency limit. **Args:** * `name`: The name of the global concurrency limit to delete. ### `enable_global_concurrency_limit` ```python enable_global_concurrency_limit(name: str = typer.Argument(..., help='The name of the global concurrency limit to enable.')) ``` Enable a global concurrency limit. **Args:** * `name`: The name of the global concurrency limit to enable. ### `disable_global_concurrency_limit` ```python disable_global_concurrency_limit(name: str = typer.Argument(..., help='The name of the global concurrency limit to disable.')) ``` Disable a global concurrency limit. **Args:** * `name`: The name of the global concurrency limit to disable.
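A brief, illustrative sequence using the commands above; `my-gcl` is a placeholder limit name, and `gcl` is the shorthand alias used in the examples that follow.

```bash
# Inspect a global concurrency limit as JSON, disable it temporarily, then re-enable it
prefect global-concurrency-limit inspect my-gcl --output json
prefect gcl disable my-gcl
prefect gcl enable my-gcl
```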
### `update_global_concurrency_limit` ```python update_global_concurrency_limit(name: str = typer.Argument(..., help='The name of the global concurrency limit to update.'), enable: Optional[bool] = typer.Option(None, '--enable', help='Enable the global concurrency limit.'), disable: Optional[bool] = typer.Option(None, '--disable', help='Disable the global concurrency limit.'), limit: Optional[int] = typer.Option(None, '--limit', '-l', help='The limit of the global concurrency limit.'), active_slots: Optional[int] = typer.Option(None, '--active-slots', help='The number of active slots.'), slot_decay_per_second: Optional[float] = typer.Option(None, '--slot-decay-per-second', help='The slot decay per second.')) ``` Update a global concurrency limit. **Args:** * `name`: The name of the global concurrency limit to update. * `enable`: Enable the global concurrency limit. * `disable`: Disable the global concurrency limit. * `limit`: The limit of the global concurrency limit. * `active_slots`: The number of active slots. * `slot_decay_per_second`: The slot decay per second. **Examples:** \$ prefect global-concurrency-limit update my-gcl --limit 10 \$ prefect gcl update my-gcl --active-slots 5 \$ prefect gcl update my-gcl --slot-decay-per-second 0.5 \$ prefect gcl update my-gcl --enable \$ prefect gcl update my-gcl --disable --limit 5 ### `create_global_concurrency_limit` ```python create_global_concurrency_limit(name: str = typer.Argument(..., help='The name of the global concurrency limit to create.'), limit: int = typer.Option(..., '--limit', '-l', help='The limit of the global concurrency limit.'), disable: Optional[bool] = typer.Option(None, '--disable', help='Create an inactive global concurrency limit.'), active_slots: Optional[int] = typer.Option(0, '--active-slots', help='The number of active slots.'), slot_decay_per_second: Optional[float] = typer.Option(0.0, '--slot-decay-per-second', help='The slot decay per second.')) ``` Create a global concurrency limit. **Args:** * `name`: The name of the global concurrency limit to create. * `limit`: The limit of the global concurrency limit. * `disable`: Create an inactive global concurrency limit. * `active_slots`: The number of active slots. * `slot_decay_per_second`: The slot decay per second. **Examples:** \$ prefect global-concurrency-limit create my-gcl --limit 10 \$ prefect gcl create my-gcl --limit 5 --active-slots 3 \$ prefect gcl create my-gcl --limit 5 --active-slots 3 --slot-decay-per-second 0.5 \$ prefect gcl create my-gcl --limit 5 --disable # profile Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-profile # `prefect.cli.profile` Command line interface for working with profiles. ## Functions ### `ls` ```python ls() ``` List profile names. ### `create` ```python create(name: str, from_name: str = typer.Option(None, '--from', help='Copy an existing profile.')) ``` Create a new profile. ### `use` ```python use(name: str) ``` Set the given profile to active. ### `delete` ```python delete(name: str) ``` Delete the given profile. ### `rename` ```python rename(name: str, new_name: str) ``` Change the name of a profile. ### `inspect` ```python inspect(name: Optional[str] = typer.Argument(None, help='Name of profile to inspect; defaults to active profile.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` Display settings from a given profile; defaults to active.
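A brief, illustrative profile workflow using the commands above; the profile name is a placeholder.

```bash
# Create a new profile from the default one, switch to it, and review its settings
prefect profile create staging --from default
prefect profile use staging
prefect profile inspect staging
```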
### `show_profile_changes` ```python show_profile_changes(user_profiles: ProfilesCollection, default_profiles: ProfilesCollection) -> bool ``` ### `populate_defaults` ```python populate_defaults() ``` Populate the profiles configuration with default base profiles, preserving existing user profiles. ### `check_server_connection` ```python check_server_connection() -> ConnectionStatus ``` ## Classes ### `ConnectionStatus` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # root Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-root # `prefect.cli.root` Base `prefect` command-line application ## Functions ### `version_callback` ```python version_callback(value: bool) -> None ``` ### `is_interactive` ```python is_interactive() -> bool ``` ### `main` ```python main(ctx: typer.Context, version: bool = typer.Option(None, '--version', '-v', callback=version_callback, help='Display the current version.', is_eager=True), profile: str = typer.Option(None, '--profile', '-p', help='Select a profile for this CLI run.', is_eager=True), prompt: bool = SettingsOption(prefect.settings.PREFECT_CLI_PROMPT, help='Force toggle prompts for this CLI run.')) ``` ### `version` ```python version(omit_integrations: bool = typer.Option(False, '--omit-integrations', help='Omit integration information')) ``` Get the current Prefect version and integration information. ### `get_prefect_integrations` ```python get_prefect_integrations() -> dict[str, str] ``` Get information about installed Prefect integrations. ### `display` ```python display(object: dict[str, Any], nesting: int = 0) -> None ``` Recursive display of a dictionary with nesting. # server Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-server # `prefect.cli.server` Command line interface for working with the Prefect API and server. ## Functions ### `generate_welcome_blurb` ```python generate_welcome_blurb(base_url: str, ui_enabled: bool) -> str ``` ### `prestart_check` ```python prestart_check(base_url: str) -> None ``` Check if `PREFECT_API_URL` is set in the current profile. If not, prompt the user to set it. **Args:** * `base_url`: The base URL the server will be running on ### `start` ```python start(host: str = SettingsOption(PREFECT_SERVER_API_HOST), port: int = SettingsOption(PREFECT_SERVER_API_PORT), keep_alive_timeout: int = SettingsOption(PREFECT_SERVER_API_KEEPALIVE_TIMEOUT), log_level: str = SettingsOption(PREFECT_SERVER_LOGGING_LEVEL), scheduler: bool = SettingsOption(PREFECT_API_SERVICES_SCHEDULER_ENABLED), analytics: bool = SettingsOption(PREFECT_SERVER_ANALYTICS_ENABLED, '--analytics-on/--analytics-off'), late_runs: bool = SettingsOption(PREFECT_API_SERVICES_LATE_RUNS_ENABLED), ui: bool = SettingsOption(PREFECT_UI_ENABLED), no_services: bool = typer.Option(False, '--no-services', help='Only run the webserver API and UI'), background: bool = typer.Option(False, '--background', '-b', help='Run the server in the background'), workers: int = typer.Option(1, '--workers', help='Number of worker processes to run. 
Only runs the webserver API and UI')) ``` Start a Prefect server instance ### `stop` ```python stop() ``` Stop a Prefect server instance running in the background ### `reset` ```python reset(yes: bool = typer.Option(False, '--yes', '-y')) ``` Drop and recreate all Prefect database tables ### `upgrade` ```python upgrade(yes: bool = typer.Option(False, '--yes', '-y'), revision: str = typer.Option('head', '-r', help='The revision to pass to `alembic upgrade`. If not provided, runs all migrations.'), dry_run: bool = typer.Option(False, help='Flag to show what migrations would be made without applying them. Will emit sql statements to stdout.')) ``` Upgrade the Prefect database ### `downgrade` ```python downgrade(yes: bool = typer.Option(False, '--yes', '-y'), revision: str = typer.Option('-1', '-r', help="The revision to pass to `alembic downgrade`. If not provided, downgrades to the most recent revision. Use 'base' to run all migrations."), dry_run: bool = typer.Option(False, help='Flag to show what migrations would be made without applying them. Will emit sql statements to stdout.')) ``` Downgrade the Prefect database ### `revision` ```python revision(message: str = typer.Option(None, '--message', '-m', help='A message to describe the migration.'), autogenerate: bool = False) ``` Create a new migration for the Prefect database ### `stamp` ```python stamp(revision: str) ``` Stamp the revision table with the given revision; don't run any migrations ### `run_manager_process` ```python run_manager_process() ``` This is an internal entrypoint used by `prefect server services start --background`. Users do not call this directly. We do everything in sync so that the child won't exit until the user kills it. ### `list_services` ```python list_services() ``` List all available services and their status. ### `start_services` ```python start_services(background: bool = typer.Option(False, '--background', '-b', help='Run the services in the background')) ``` Start all enabled Prefect services in one process. ### `stop_services` ```python stop_services() ``` Stop any background Prefect services that were started. # shell Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-shell # `prefect.cli.shell` Provides a set of tools for executing shell commands as Prefect flows. Includes functionalities for running shell commands ad-hoc or serving them as Prefect flows, with options for logging output, scheduling, and deployment customization. ## Functions ### `output_stream` ```python output_stream(pipe: IO[str], logger_function: Callable[[str], None]) -> None ``` Read from a pipe line by line and log using the provided logging function. **Args:** * `pipe`: A file-like object for reading process output. * `logger_function`: A logging function from the logger. ### `output_collect` ```python output_collect(pipe: IO[str], container: list[str]) -> None ``` Collects output from a subprocess pipe and stores it in a container list. **Args:** * `pipe`: The output pipe of the subprocess, either stdout or stderr. * `container`: A list to store the collected output lines. ### `run_shell_process` ```python run_shell_process(command: str, log_output: bool = True, stream_stdout: bool = False, log_stderr: bool = False, popen_kwargs: Optional[Dict[str, Any]] = None) ``` Asynchronously executes the specified shell command and logs its output. This function is designed to be used within Prefect flows to run shell commands as part of task execution. 
It handles both the execution of the command and the collection of its output for logging purposes. **Args:** * `command`: The shell command to execute. * `log_output`: If True, the output of the command (both stdout and stderr) is logged to Prefect. * `stream_stdout`: If True, the stdout of the command is streamed to Prefect logs. * `log_stderr`: If True, the stderr of the command is logged to Prefect logs. * `popen_kwargs`: Additional keyword arguments to pass to the `subprocess.Popen` call. ### `watch` ```python watch(command: str, log_output: bool = typer.Option(True, help='Log the output of the command to Prefect logs.'), flow_run_name: str = typer.Option(None, help='Name of the flow run.'), flow_name: str = typer.Option('Shell Command', help='Name of the flow.'), stream_stdout: bool = typer.Option(True, help='Stream the output of the command.'), tag: Annotated[Optional[List[str]], typer.Option(help='Optional tags for the flow run.')] = None) ``` Executes a shell command and observes it as a Prefect flow. **Args:** * `command`: The shell command to be executed. * `log_output`: If True, logs the command's output. Defaults to True. * `flow_run_name`: An optional name for the flow run. * `flow_name`: An optional name for the flow. Useful for identification in the Prefect UI. * `tag`: An optional list of tags for categorizing and filtering flows in the Prefect UI. ### `serve` ```python serve(command: str, flow_name: str = typer.Option(..., help='Name of the flow'), deployment_name: str = typer.Option('CLI Runner Deployment', help='Name of the deployment'), deployment_tags: List[str] = typer.Option(None, '--tag', help='Tag for the deployment (can be provided multiple times)'), log_output: bool = typer.Option(True, help='Stream the output of the command', hidden=True), stream_stdout: bool = typer.Option(True, help='Stream the output of the command'), cron_schedule: str = typer.Option(None, help='Cron schedule for the flow'), timezone: str = typer.Option(None, help='Timezone for the schedule'), concurrency_limit: int = typer.Option(None, min=1, help='The maximum number of flow runs that can execute at the same time'), run_once: bool = typer.Option(False, help='Run the agent loop once, instead of forever.')) ``` Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc. This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for easier categorization and filtering within the Prefect UI. Additionally, it supports streaming command output to Prefect logs, setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks. **Args:** * `command`: The shell command the flow will execute. * `flow_name`: The name assigned to the flow. This is required. * `deployment_tags`: Optional tags for the deployment to facilitate filtering and organization. * `log_output`: If True, streams the output of the shell command to the Prefect logs. Defaults to True. * `cron_schedule`: A cron expression that defines when the flow will run. If not provided, the flow can be triggered manually. * `timezone`: The timezone for the cron schedule. This is important if the schedule should align with local time. * `concurrency_limit`: The maximum number of instances of the flow that can run simultaneously.
* `deployment_name`: The name of the deployment. This helps distinguish deployments within the Prefect platform. * `run_once`: When True, the flow will only run once upon deployment initiation, rather than continuously. # task Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-task # `prefect.cli.task` ## Functions ### `serve` ```python serve(entrypoints: Optional[list[str]] = typer.Argument(None, help='The paths to one or more tasks, in the form of `./path/to/file.py:task_func_name`.'), module: Optional[list[str]] = typer.Option(None, '--module', '-m', help='The module(s) to import the tasks from.'), limit: int = typer.Option(10, help='The maximum number of tasks that can be run concurrently. Defaults to 10.')) ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. **Args:** * `entrypoints`: List of strings representing the paths to one or more tasks. Each path should be in the format `./path/to/file.py\:task_func_name`. * `module`: The module(s) to import the task definitions from. * `limit`: The maximum number of tasks that can be run concurrently. Defaults to 10. # task_run Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-task_run # `prefect.cli.task_run` Command line interface for working with task runs ## Functions ### `inspect` ```python inspect(id: UUID, web: bool = typer.Option(False, '--web', help='Open the task run in a web browser.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` View details about a task run. ### `ls` ```python ls(task_run_name: List[str] = typer.Option(None, help='Name of the task'), limit: int = typer.Option(15, help='Maximum number of task runs to list'), state: List[str] = typer.Option(None, help="Name of the task run's state"), state_type: List[StateType] = typer.Option(None, help="Type of the task run's state")) ``` View recent task runs ### `logs` ```python logs(id: UUID, head: bool = typer.Option(False, '--head', '-h', help=f'Show the first {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of all logs.'), num_logs: int = typer.Option(LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS, '--num-logs', '-n', help=f'Number of logs to show when using the --head or --tail flag. If None, defaults to {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS}.', min=1), reverse: bool = typer.Option(False, '--reverse', '-r', help='Reverse the logs order to print the most recent logs first'), tail: bool = typer.Option(False, '--tail', '-t', help=f'Show the last {LOGS_WITH_LIMIT_FLAG_DEFAULT_NUM_LOGS} logs instead of all logs.')) ``` View logs for a task run. # variable Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-variable # `prefect.cli.variable` ## Functions ### `list_variables` ```python list_variables(limit: int = typer.Option(100, '--limit', help='The maximum number of variables to return.')) ``` List variables. ### `inspect` ```python inspect(name: str, output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` View details about a variable. **Args:** * `name`: the name of the variable to inspect ### `get` ```python get(name: str) ``` Get a variable's value. **Args:** * `name`: the name of the variable to get ### `parse_value` ```python parse_value(value: str) -> Union[str, int, float, bool, None, Dict[str, Any], List[str]] ``` ### `unset` ```python unset(name: str) ``` Unset a variable. 
**Args:** * `name`: the name of the variable to unset # work_pool Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-work_pool # `prefect.cli.work_pool` Command line interface for working with work pools. ## Functions ### `set_work_pool_as_default` ```python set_work_pool_as_default(name: str) -> None ``` ### `has_provisioner_for_type` ```python has_provisioner_for_type(work_pool_type: str) -> bool ``` Check if there is a provisioner for the given work pool type. **Args:** * `work_pool_type`: The type of the work pool. **Returns:** * True if a provisioner exists for the given type, False otherwise. ### `create` ```python create(name: str = typer.Argument(..., help='The name of the work pool.'), base_job_template: typer.FileText = typer.Option(None, '--base-job-template', help='The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type.'), paused: bool = typer.Option(False, '--paused', help='Whether or not to create the work pool in a paused state.'), type: str = typer.Option(None, '-t', '--type', help='The type of work pool to create.'), set_as_default: bool = typer.Option(False, '--set-as-default', help='Whether or not to use the created work pool as the local default for deployment.'), provision_infrastructure: bool = typer.Option(False, '--provision-infrastructure', '--provision-infra', help='Whether or not to provision infrastructure for the work pool if supported for the given work pool type.'), overwrite: bool = typer.Option(False, '--overwrite', help='Whether or not to overwrite an existing work pool with the same name.')) ``` Create a new work pool or update an existing one. Examples: Create a Kubernetes work pool in a paused state: \$ prefect work-pool create "my-pool" --type kubernetes --paused Create a Docker work pool with a custom base job template: \$ prefect work-pool create "my-pool" --type docker --base-job-template ./base-job-template.json Update an existing work pool: \$ prefect work-pool create "existing-pool" --base-job-template ./base-job-template.json --overwrite ### `ls` ```python ls(verbose: bool = typer.Option(False, '--verbose', '-v', help='Show additional information about work pools.')) ``` List work pools. Examples: \$ prefect work-pool ls ### `inspect` ```python inspect(name: str = typer.Argument(..., help='The name of the work pool to inspect.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` Inspect a work pool. Examples: \$ prefect work-pool inspect "my-pool" \$ prefect work-pool inspect "my-pool" --output json ### `pause` ```python pause(name: str = typer.Argument(..., help='The name of the work pool to pause.')) ``` Pause a work pool. Examples: \$ prefect work-pool pause "my-pool" ### `resume` ```python resume(name: str = typer.Argument(..., help='The name of the work pool to resume.')) ``` Resume a work pool. Examples: \$ prefect work-pool resume "my-pool" ### `update` ```python update(name: str = typer.Argument(..., help='The name of the work pool to update.'), base_job_template: typer.FileText = typer.Option(None, '--base-job-template', help='The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. If None, the base job template will not be modified.'), concurrency_limit: int = typer.Option(None, '--concurrency-limit', help='The concurrency limit for the work pool.
If None, the concurrency limit will not be modified.'), description: str = typer.Option(None, '--description', help='The description for the work pool. If None, the description will not be modified.')) ``` Update a work pool. Examples: \$ prefect work-pool update "my-pool" ### `provision_infrastructure` ```python provision_infrastructure(name: str = typer.Argument(..., help='The name of the work pool to provision infrastructure for.')) ``` Provision infrastructure for a work pool. Examples: \$ prefect work-pool provision-infrastructure "my-pool" \$ prefect work-pool provision-infra "my-pool" ### `delete` ```python delete(name: str = typer.Argument(..., help='The name of the work pool to delete.')) ``` Delete a work pool. Examples: \$ prefect work-pool delete "my-pool" ### `set_concurrency_limit` ```python set_concurrency_limit(name: str = typer.Argument(..., help='The name of the work pool to update.'), concurrency_limit: int = typer.Argument(..., help='The new concurrency limit for the work pool.')) ``` Set the concurrency limit for a work pool. Examples: \$ prefect work-pool set-concurrency-limit "my-pool" 10 ### `clear_concurrency_limit` ```python clear_concurrency_limit(name: str = typer.Argument(..., help='The name of the work pool to update.')) ``` Clear the concurrency limit for a work pool. Examples: \$ prefect work-pool clear-concurrency-limit "my-pool" ### `get_default_base_job_template` ```python get_default_base_job_template(type: str = typer.Option(None, '-t', '--type', help='The type of work pool for which to get the default base job template.'), file: str = typer.Option(None, '-f', '--file', help='If set, write the output to a file.')) ``` Get the default base job template for a given work pool type. Examples: \$ prefect work-pool get-default-base-job-template --type kubernetes ### `preview` ```python preview(name: str = typer.Argument(None, help='The name or ID of the work pool to preview'), hours: int = typer.Option(None, '-h', '--hours', help='The number of hours to look ahead; defaults to 1 hour')) ``` Preview the work pool's scheduled work for all queues. Examples: \$ prefect work-pool preview "my-pool" --hours 24 ### `storage_inspect` ```python storage_inspect(work_pool_name: Annotated[str, typer.Argument(..., help='The name of the work pool to display storage configuration for.')], output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` EXPERIMENTAL: Inspect the storage configuration for a work pool. **Examples:** \$ prefect work-pool storage inspect "my-pool" \$ prefect work-pool storage inspect "my-pool" --output json ### `s3` ```python s3(work_pool_name: str = typer.Argument(..., help='The name of the work pool to configure storage for.', show_default=False), bucket: str = typer.Option(..., '--bucket', help='The name of the S3 bucket to use.', show_default=False, prompt='Enter the name of the S3 bucket to use'), credentials_block_name: str = typer.Option(..., '--aws-credentials-block-name', help='The name of the AWS credentials block to use.', show_default=False, prompt='Enter the name of the AWS credentials block to use')) ``` EXPERIMENTAL: Configure AWS S3 storage for a work pool.
Examples: \$ prefect work-pool storage configure s3 "my-pool" --bucket my-bucket --aws-credentials-block-name my-credentials ### `gcs` ```python gcs(work_pool_name: str = typer.Argument(..., help='The name of the work pool to configure storage for.', show_default=False), bucket: str = typer.Option(..., '--bucket', help='The name of the Google Cloud Storage bucket to use.', show_default=False, prompt='Enter the name of the Google Cloud Storage bucket to use'), credentials_block_name: str = typer.Option(..., '--gcp-credentials-block-name', help='The name of the Google Cloud credentials block to use.', show_default=False, prompt='Enter the name of the Google Cloud credentials block to use')) ``` EXPERIMENTAL: Configure Google Cloud storage for a work pool.  Examples: \$ prefect work-pool storage configure gcs "my-pool" --bucket my-bucket --gcp-credentials-block-name my-credentials ### `azure_blob_storage` ```python azure_blob_storage(work_pool_name: str = typer.Argument(..., help='The name of the work pool to configure storage for.', show_default=False), container: str = typer.Option(..., '--container', help='The name of the Azure Blob Storage container to use.', show_default=False, prompt='Enter the name of the Azure Blob Storage container to use'), credentials_block_name: str = typer.Option(..., '--azure-blob-storage-credentials-block-name', help='The name of the Azure Blob Storage credentials block to use.', show_default=False, prompt='Enter the name of the Azure Blob Storage credentials block to use')) ``` EXPERIMENTAL: Configure Azure Blob Storage for a work pool.  Examples: \$ prefect work-pool storage configure azure-blob-storage "my-pool" --container my-container --azure-blob-storage-credentials-block-name my-credentials # work_queue Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-work_queue # `prefect.cli.work_queue` Command line interface for working with work queues. ## Functions ### `create` ```python create(name: str = typer.Argument(..., help='The unique name to assign this work queue'), limit: int = typer.Option(None, '-l', '--limit', help='The concurrency limit to set on the queue.'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool to create the work queue in.'), priority: Optional[int] = typer.Option(None, '-q', '--priority', help='The associated priority for the created work queue')) ``` Create a work queue. ### `set_concurrency_limit` ```python set_concurrency_limit(name: str = typer.Argument(..., help='The name or ID of the work queue'), limit: int = typer.Argument(..., help='The concurrency limit to set on the queue.'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool that the work queue belongs to.')) ``` Set a concurrency limit on a work queue. ### `clear_concurrency_limit` ```python clear_concurrency_limit(name: str = typer.Argument(..., help='The name or ID of the work queue to clear'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool that the work queue belongs to.')) ``` Clear any concurrency limits from a work queue. ### `pause` ```python pause(name: str = typer.Argument(..., help='The name or ID of the work queue to pause'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool that the work queue belongs to.')) ``` Pause a work queue. 
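A brief, illustrative use of the work queue commands above; the queue and pool names are placeholders.

```bash
# Create a queue in an existing work pool with a concurrency limit, then pause it
prefect work-queue create my-queue --pool my-pool --limit 5
prefect work-queue pause my-queue --pool my-pool
```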
### `resume` ```python resume(name: str = typer.Argument(..., help='The name or ID of the work queue to resume'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool that the work queue belongs to.')) ``` Resume a paused work queue. ### `inspect` ```python inspect(name: str = typer.Argument(None, help='The name or ID of the work queue to inspect'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool that the work queue belongs to.'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json')) ``` Inspect a work queue by ID. ### `ls` ```python ls(verbose: bool = typer.Option(False, '--verbose', '-v', help='Display more information.'), work_queue_prefix: str = typer.Option(None, '--match', '-m', help='Will match work queues with names that start with the specified prefix string'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool containing the work queues to list.')) ``` View all work queues. ### `preview` ```python preview(name: str = typer.Argument(None, help='The name or ID of the work queue to preview'), hours: int = typer.Option(None, '-h', '--hours', help='The number of hours to look ahead; defaults to 1 hour'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool that the work queue belongs to.')) ``` Preview a work queue. ### `delete` ```python delete(name: str = typer.Argument(..., help='The name or ID of the work queue to delete'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool containing the work queue to delete.')) ``` Delete a work queue by ID. ### `read_wq_runs` ```python read_wq_runs(name: str = typer.Argument(..., help='The name or ID of the work queue to poll'), pool: Optional[str] = typer.Option(None, '-p', '--pool', help='The name of the work pool containing the work queue to poll.')) ``` Get runs in a work queue. Note that this will trigger an artificial poll of the work queue. # worker Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-cli-worker # `prefect.cli.worker` ## Functions ### `start` ```python start(worker_name: str = typer.Option(None, '-n', '--name', help='The name to give to the started worker. If not provided, a unique name will be generated.'), work_pool_name: str = typer.Option(..., '-p', '--pool', help='The work pool the started worker should poll.', prompt=True), work_queues: List[str] = typer.Option(None, '-q', '--work-queue', help='One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool.'), worker_type: Optional[str] = typer.Option(None, '-t', '--type', help='The type of worker to start. If not provided, the worker type will be inferred from the work pool.'), prefetch_seconds: int = SettingsOption(PREFECT_WORKER_PREFETCH_SECONDS, help='Number of seconds to look into the future for scheduled flow runs.'), run_once: bool = typer.Option(False, help='Only run worker polling once. 
By default, the worker runs forever.'), limit: int = typer.Option(None, '-l', '--limit', help='Maximum number of flow runs to start simultaneously.'), with_healthcheck: bool = typer.Option(False, help='Start a healthcheck server for the worker.'), install_policy: InstallPolicy = typer.Option(InstallPolicy.PROMPT.value, '--install-policy', help='Install policy to use workers from Prefect integration packages.', case_sensitive=False), base_job_template: typer.FileText = typer.Option(None, '--base-job-template', help='The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. If the work pool already exists, this will be ignored.')) ``` Start a worker process to poll a work pool for flow runs. ## Classes ### `InstallPolicy` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-__init__ # `prefect.client` Asynchronous client implementation for communicating with the [Prefect REST API](https://docs.prefect.io/v3/api-ref/rest-api/). Explore the client by communicating with an in-memory webserver - no setup required: \
```
$ # start python REPL with native await functionality
$ python -m asyncio

from prefect.client.orchestration import get_client
async with get_client() as client:
    response = await client.hello()
    print(response.json())

πŸ‘‹
```
# base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-base # `prefect.client.base` ## Functions ### `app_lifespan_context` ```python app_lifespan_context(app: ASGIApp) -> AsyncGenerator[None, None] ``` A context manager that calls startup/shutdown hooks for the given application. Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed. This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits. A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed. In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context were responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client will exit without closing the lifespan context and reference counts will be used to ensure the lifespan is closed once all of the clients are done. ### `determine_server_type` ```python determine_server_type() -> ServerType ``` Determine the server type based on the current settings. **Returns:** * `ServerType.EPHEMERAL` if the ephemeral server is enabled * `ServerType.SERVER` if an API URL is configured and it is not a cloud URL * `ServerType.CLOUD` if an API URL is configured and it is a cloud URL * `ServerType.UNCONFIGURED` if no API URL is configured and ephemeral mode is not enabled ## Classes ### `ASGIApp` ### `PrefectResponse` A Prefect wrapper for the `httpx.Response` class. Provides more informative error messages. **Methods:** #### `from_httpx_response` ```python from_httpx_response(cls: type[Self], response: httpx.Response) -> Response ``` Create a `PrefectResponse` from an `httpx.Response`. By changing the `__class__` attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact. #### `raise_for_status` ```python raise_for_status(self) -> Response ``` Raise an exception if the response contains an HTTPStatusError. The `PrefectHTTPStatusError` contains useful additional information that is not contained in the `HTTPStatusError`. ### `PrefectHttpxAsyncClient` A Prefect wrapper for the async httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503). Additionally, this client will always call `raise_for_status` on responses.
For more details on rate limit headers, see: [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI) **Methods:** #### `send` ```python send(self, request: Request, *args: Any, **kwargs: Any) -> Response ``` Send a request with automatic retry behavior for the following status codes: * 403 Forbidden, if the request failed due to CSRF protection * 408 Request Timeout * 429 CloudFlare-style rate limiting * 502 Bad Gateway * 503 Service Unavailable * Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES` ### `PrefectHttpxSyncClient` A Prefect wrapper for the sync httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503). Additionally, this client will always call `raise_for_status` on responses. For more details on rate limit headers, see: [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI) **Methods:** #### `send` ```python send(self, request: Request, *args: Any, **kwargs: Any) -> Response ``` Send a request with automatic retry behavior for the following status codes: * 403 Forbidden, if the request failed due to CSRF protection * 408 Request Timeout * 429 CloudFlare-style rate limiting * 502 Bad Gateway * 503 Service Unavailable * Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES` ### `ServerType` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`. # cloud Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-cloud # `prefect.client.cloud` ## Functions ### `get_cloud_client` ```python get_cloud_client(host: Optional[str] = None, api_key: Optional[str] = None, httpx_settings: Optional[dict[str, Any]] = None, infer_cloud_url: bool = False) -> 'CloudClient' ``` Create a `CloudClient` for communicating with the Prefect Cloud API. When `host` or `api_key` are not provided, values from the current Prefect settings profile are used. ## Classes ### `CloudUnauthorizedError` Raised when the CloudClient receives a 401 or 403 from the Cloud API. ### `CloudClient` **Methods:** #### `account_base_url` ```python account_base_url(self) -> str ``` #### `api_healthcheck` ```python api_healthcheck(self) -> None ``` Attempts to connect to the Cloud API and raises the encountered exception if not successful. If successful, returns `None`.
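
Example: the sketch below shows one way to verify connectivity to Prefect Cloud with `get_cloud_client` and `api_healthcheck`. It assumes the client is used as an async context manager (mirroring `get_client`) and that an API key is available from the active profile or passed explicitly.

```python
import asyncio

from prefect.client.cloud import get_cloud_client


async def check_cloud_connection() -> None:
    # Uses the API key/URL from the active Prefect profile by default;
    # pass `api_key=...` or `host=...` explicitly to override.
    async with get_cloud_client() as client:
        # `api_healthcheck` raises the underlying exception if the Cloud API
        # cannot be reached and returns None on success.
        await client.api_healthcheck()
        print("Successfully connected to Prefect Cloud")


if __name__ == "__main__":
    asyncio.run(check_cloud_connection())
```
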
#### `check_ip_allowlist_access` ```python check_ip_allowlist_access(self) -> IPAllowlistMyAccessResponse ``` #### `get` ```python get(self, route: str, **kwargs: Any) -> Any ``` #### `read_account_ip_allowlist` ```python read_account_ip_allowlist(self) -> IPAllowlist ``` #### `read_account_settings` ```python read_account_settings(self) -> dict[str, Any] ``` #### `read_current_workspace` ```python read_current_workspace(self) -> Workspace ``` #### `read_worker_metadata` ```python read_worker_metadata(self) -> dict[str, Any] ``` #### `read_workspaces` ```python read_workspaces(self) -> list[Workspace] ``` #### `request` ```python request(self, method: str, route: str, **kwargs: Any) -> Any ``` #### `update_account_ip_allowlist` ```python update_account_ip_allowlist(self, updated_allowlist: IPAllowlist) -> None ``` #### `update_account_settings` ```python update_account_settings(self, settings: dict[str, Any]) -> None ``` #### `workspace_base_url` ```python workspace_base_url(self) -> str ``` # collections Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-collections # `prefect.client.collections` ## Functions ### `get_collections_metadata_client` ```python get_collections_metadata_client(httpx_settings: Optional[Dict[str, Any]] = None) -> 'CollectionsMetadataClient' ``` Creates a client that can be used to fetch metadata for Prefect collections. Will return a `CloudClient` if profile is set to connect to Prefect Cloud, otherwise will return an `OrchestrationClient`. ## Classes ### `CollectionsMetadataClient` **Methods:** #### `read_worker_metadata` ```python read_worker_metadata(self) -> Dict[str, Any] ``` # constants Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-constants # `prefect.client.constants` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-orchestration-__init__ # `prefect.client.orchestration` ## Functions ### `get_client` ```python get_client(httpx_settings: Optional[dict[str, Any]] = None, sync_client: bool = False) -> Union['SyncPrefectClient', 'PrefectClient'] ``` Retrieve a HTTP client for communicating with the Prefect REST API. The client must be context managed; for example: ```python async with get_client() as client: await client.hello() ``` To return a synchronous client, pass sync\_client=True: ```python with get_client(sync_client=True) as client: client.hello() ``` ## Classes ### `PrefectClient` An asynchronous client for interacting with the [Prefect REST API](https://docs.prefect.io/v3/api-ref/rest-api/). **Args:** * `api`: the REST API URL or FastAPI application to connect to * `api_key`: An optional API key for authentication. * `api_version`: The API version this client is compatible with. * `httpx_settings`: An optional dictionary of settings to pass to the underlying `httpx.AsyncClient` Examples: Say hello to a Prefect REST API ```python async with get_client() as client: response = await client.hello() print(response.json()) πŸ‘‹ ``` **Methods:** #### `api_healthcheck` ```python api_healthcheck(self) -> Optional[Exception] ``` Attempts to connect to the API and returns the encountered exception if not successful. If successful, returns `None`. #### `api_url` ```python api_url(self) -> httpx.URL ``` Get the base URL for the API. 
#### `api_version` ```python api_version(self) -> str ``` #### `apply_slas_for_deployment` ```python apply_slas_for_deployment(self, deployment_id: 'UUID', slas: 'list[SlaTypes]') -> 'UUID' ``` Applies service level agreements for a deployment. Performs matching by SLA name. If a SLA with the same name already exists, it will be updated. If a SLA with the same name does not exist, it will be created. Existing SLAs that are not in the list will be deleted. Args: deployment\_id: The ID of the deployment to update SLAs for slas: List of SLAs to associate with the deployment Raises: httpx.RequestError: if the SLAs were not updated for any reason Returns: SlaMergeResponse: The response from the backend, containing the names of the created, updated, and deleted SLAs #### `client_version` ```python client_version(self) -> str ``` #### `count_flow_runs` ```python count_flow_runs(self) -> int ``` Returns the count of flow runs matching all criteria for flow runs. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues **Returns:** * count of flow runs #### `create_artifact` ```python create_artifact(self, artifact: 'ArtifactCreate') -> 'Artifact' ``` #### `create_automation` ```python create_automation(self, automation: 'AutomationCore') -> 'UUID' ``` Creates an automation in Prefect Cloud. #### `create_block_document` ```python create_block_document(self, block_document: 'BlockDocument | BlockDocumentCreate', include_secrets: bool = True) -> 'BlockDocument' ``` Create a block document in the Prefect API. This data is used to configure a corresponding Block. **Args:** * `include_secrets`: whether to include secret values on the stored Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. Note Blocks may not work as expected if this is set to `False`. #### `create_block_schema` ```python create_block_schema(self, block_schema: 'BlockSchemaCreate') -> 'BlockSchema' ``` Create a block schema in the Prefect API. #### `create_block_type` ```python create_block_type(self, block_type: 'BlockTypeCreate') -> 'BlockType' ``` Create a block type in the Prefect API. #### `create_concurrency_limit` ```python create_concurrency_limit(self, tag: str, concurrency_limit: int) -> 'UUID' ``` Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks. 
**Args:** * `tag`: a tag the concurrency limit is applied to * `concurrency_limit`: the maximum number of concurrent task runs for a given tag **Raises:** * `httpx.RequestError`: if the concurrency limit was not created for any reason **Returns:** * the ID of the concurrency limit in the backend #### `create_deployment` ```python create_deployment(self, flow_id: UUID, name: str, version: str | None = None, version_info: 'VersionInfo | None' = None, schedules: list['DeploymentScheduleCreate'] | None = None, concurrency_limit: int | None = None, concurrency_options: 'ConcurrencyOptions | None' = None, parameters: dict[str, Any] | None = None, description: str | None = None, work_queue_name: str | None = None, work_pool_name: str | None = None, tags: list[str] | None = None, storage_document_id: UUID | None = None, path: str | None = None, entrypoint: str | None = None, infrastructure_document_id: UUID | None = None, parameter_openapi_schema: dict[str, Any] | None = None, paused: bool | None = None, pull_steps: list[dict[str, Any]] | None = None, enforce_parameter_schema: bool | None = None, job_variables: dict[str, Any] | None = None, branch: str | None = None, base: UUID | None = None, root: UUID | None = None) -> UUID ``` Create a deployment. **Args:** * `flow_id`: the flow ID to create a deployment for * `name`: the name of the deployment * `version`: an optional version string for the deployment * `tags`: an optional list of tags to apply to the deployment * `storage_document_id`: an reference to the storage block document used for the deployed flow * `infrastructure_document_id`: an reference to the infrastructure block document to use for this deployment * `job_variables`: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'`. This argument was previously named `infra_overrides`. Both arguments are supported for backwards compatibility. **Raises:** * `RequestError`: if the deployment was not created for any reason **Returns:** * the ID of the deployment in the backend #### `create_deployment_branch` ```python create_deployment_branch(self, deployment_id: UUID, branch: str, options: 'DeploymentBranchingOptions | None' = None, overrides: 'DeploymentUpdate | None' = None) -> UUID ``` #### `create_deployment_schedules` ```python create_deployment_schedules(self, deployment_id: UUID, schedules: list[tuple['SCHEDULE_TYPES', bool]]) -> list['DeploymentSchedule'] ``` Create deployment schedules. **Args:** * `deployment_id`: the deployment ID * `schedules`: a list of tuples containing the schedule to create and whether or not it should be active. **Raises:** * `RequestError`: if the schedules were not created for any reason **Returns:** * the list of schedules created in the backend #### `create_flow` ```python create_flow(self, flow: 'FlowObject[Any, Any]') -> 'UUID' ``` Create a flow in the Prefect API. **Args:** * `flow`: a `Flow` object **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_from_name` ```python create_flow_from_name(self, flow_name: str) -> 'UUID' ``` Create a flow in the Prefect API. 
**Args:** * `flow_name`: the name of the new flow **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_run` ```python create_flow_run(self, flow: 'FlowObject[Any, R]', name: str | None = None, parameters: dict[str, Any] | None = None, context: dict[str, Any] | None = None, tags: 'Iterable[str] | None' = None, parent_task_run_id: 'UUID | None' = None, state: 'State[R] | None' = None, work_pool_name: str | None = None, work_queue_name: str | None = None, job_variables: dict[str, Any] | None = None) -> 'FlowRun' ``` Create a flow run for a flow. **Args:** * `flow`: The flow model to create the flow run for * `name`: An optional name for the flow run * `parameters`: Parameter overrides for this flow run. * `context`: Optional run context data * `tags`: a list of tags to apply to this flow run * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `state`: The initial state for the run. If not provided, defaults to `Pending`. * `work_pool_name`: The name of the work pool to run the flow run in. * `work_queue_name`: The name of the work queue to place the flow run in. * `job_variables`: The job variables to use when setting up flow run infrastructure. **Raises:** * `httpx.RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_from_deployment` ```python create_flow_run_from_deployment(self, deployment_id: UUID) -> 'FlowRun' ``` Create a flow run for a deployment. **Args:** * `deployment_id`: The deployment ID to create the flow run from * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults * `context`: Optional run context data * `state`: The initial state for the run. If not provided, defaults to `Scheduled` for now. Should always be a `Scheduled` type. * `name`: An optional name for the flow run. If not provided, the server will generate a name. * `tags`: An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags. * `idempotency_key`: Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one. * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `work_queue_name`: An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool. * `job_variables`: Optional variables that will be supplied to the flow run job. **Raises:** * `RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_input` ```python create_flow_run_input(self, flow_run_id: 'UUID', key: str, value: str, sender: str | None = None) -> None ``` Creates a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. * `value`: The input value. * `sender`: The sender of the input. 
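
Example: a minimal sketch of triggering a run for an existing deployment with `create_flow_run_from_deployment` (documented above). The deployment name `"my-flow/my-deployment"` is a placeholder for a real `<flow name>/<deployment name>` pair.

```python
import asyncio

from prefect.client.orchestration import get_client


async def trigger_deployment_run() -> None:
    async with get_client() as client:
        # Placeholder name: deployments are looked up as "<flow name>/<deployment name>".
        deployment = await client.read_deployment_by_name("my-flow/my-deployment")

        # Creates a flow run in a Scheduled state; a worker polling the
        # deployment's work pool will pick it up and execute it.
        flow_run = await client.create_flow_run_from_deployment(deployment.id)
        print(f"Created flow run {flow_run.name!r} ({flow_run.id})")


if __name__ == "__main__":
    asyncio.run(trigger_deployment_run())
```
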
#### `create_global_concurrency_limit` ```python create_global_concurrency_limit(self, concurrency_limit: 'GlobalConcurrencyLimitCreate') -> 'UUID' ``` #### `create_logs` ```python create_logs(self, logs: Iterable[Union['LogCreate', dict[str, Any]]]) -> None ``` Create logs for a flow or task run **Args:** * `logs`: An iterable of `LogCreate` objects or already json-compatible dicts #### `create_task_run` ```python create_task_run(self, task: 'TaskObject[P, R]', flow_run_id: Optional[UUID], dynamic_key: str, id: Optional[UUID] = None, name: Optional[str] = None, extra_tags: Optional[Iterable[str]] = None, state: Optional[prefect.states.State[R]] = None, task_inputs: Optional[dict[str, list[Union[TaskRunResult, FlowRunResult, Parameter, Constant]]]] = None) -> TaskRun ``` Create a task run **Args:** * `task`: The Task to run * `flow_run_id`: The flow run id with which to associate the task run * `dynamic_key`: A key unique to this particular run of a Task within the flow * `id`: An optional ID for the task run. If not provided, one will be generated server-side. * `name`: An optional name for the task run * `extra_tags`: an optional list of extra tags to apply to the task run in addition to `task.tags` * `state`: The initial state for the run. If not provided, defaults to `Pending` for now. Should always be a `Scheduled` type. * `task_inputs`: the set of inputs passed to the task **Returns:** * The created task run. #### `create_variable` ```python create_variable(self, variable: 'VariableCreate') -> 'Variable' ``` Creates a variable with the provided configuration. #### `create_work_pool` ```python create_work_pool(self, work_pool: 'WorkPoolCreate', overwrite: bool = False) -> 'WorkPool' ``` Creates a work pool with the provided configuration. **Args:** * `work_pool`: Desired configuration for the new work pool. **Returns:** * Information about the newly created work pool. #### `create_work_queue` ```python create_work_queue(self, name: str, description: Optional[str] = None, is_paused: Optional[bool] = None, concurrency_limit: Optional[int] = None, priority: Optional[int] = None, work_pool_name: Optional[str] = None) -> WorkQueue ``` Create a work queue. **Args:** * `name`: a unique name for the work queue * `description`: An optional description for the work queue. * `is_paused`: Whether or not the work queue is paused. * `concurrency_limit`: An optional concurrency limit for the work queue. * `priority`: The queue's priority. Lower values are higher priority (1 is the highest). * `work_pool_name`: The name of the work pool to use for this queue. **Raises:** * `prefect.exceptions.ObjectAlreadyExists`: If request returns 409 * `httpx.RequestError`: If request fails **Returns:** * The created work queue #### `decrement_v1_concurrency_slots` ```python decrement_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID', occupancy_seconds: float) -> 'Response' ``` Decrement concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names to decrement. * `task_run_id`: The task run ID that incremented the limits. * `occupancy_seconds`: The duration in seconds that the limits were held. **Returns:** * "Response": The HTTP response from the server. #### `delete_artifact` ```python delete_artifact(self, artifact_id: 'UUID') -> None ``` #### `delete_automation` ```python delete_automation(self, automation_id: 'UUID') -> None ``` #### `delete_block_document` ```python delete_block_document(self, block_document_id: 'UUID') -> None ``` Delete a block document. 
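
Example: an illustrative sketch of creating a work queue in an existing work pool with `create_work_queue` (documented above). The pool name `"my-process-pool"` is a placeholder, and the `ObjectAlreadyExists` handling reflects the documented 409 behavior.

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectAlreadyExists


async def ensure_backfill_queue() -> None:
    async with get_client() as client:
        try:
            queue = await client.create_work_queue(
                name="backfills",
                work_pool_name="my-process-pool",  # placeholder pool name
                concurrency_limit=2,
                priority=1,  # lower values are higher priority
            )
            print(f"Created work queue {queue.name!r} ({queue.id})")
        except ObjectAlreadyExists:
            # The API returns a 409 when a queue with this name already exists.
            queue = await client.read_work_queue_by_name(
                "backfills", work_pool_name="my-process-pool"
            )
            print(f"Work queue already exists: {queue.id}")


if __name__ == "__main__":
    asyncio.run(ensure_backfill_queue())
```
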
#### `delete_block_type` ```python delete_block_type(self, block_type_id: 'UUID') -> None ``` Delete a block type. #### `delete_concurrency_limit_by_tag` ```python delete_concurrency_limit_by_tag(self, tag: str) -> None ``` Delete the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_deployment` ```python delete_deployment(self, deployment_id: UUID) -> None ``` Delete deployment by id. **Args:** * `deployment_id`: The deployment id of interest. Raises: ObjectNotFound: If request returns 404 RequestError: If requests fails #### `delete_deployment_schedule` ```python delete_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID) -> None ``` Delete a deployment schedule. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the ID of the deployment schedule to delete. **Raises:** * `RequestError`: if the schedules were not deleted for any reason #### `delete_flow` ```python delete_flow(self, flow_id: 'UUID') -> None ``` Delete a flow by UUID. **Args:** * `flow_id`: ID of the flow to be deleted Raises: prefect.exceptions.ObjectNotFound: If request returns 404 httpx.RequestError: If requests fail #### `delete_flow_run` ```python delete_flow_run(self, flow_run_id: 'UUID') -> None ``` Delete a flow run by UUID. **Args:** * `flow_run_id`: The flow run UUID of interest. Raises: ObjectNotFound: If request returns 404 httpx.RequestError: If requests fails #### `delete_flow_run_input` ```python delete_flow_run_input(self, flow_run_id: 'UUID', key: str) -> None ``` Deletes a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. #### `delete_global_concurrency_limit_by_name` ```python delete_global_concurrency_limit_by_name(self, name: str) -> 'Response' ``` #### `delete_resource_owned_automations` ```python delete_resource_owned_automations(self, resource_id: str) -> None ``` #### `delete_task_run` ```python delete_task_run(self, task_run_id: UUID) -> None ``` Delete a task run by id. **Args:** * `task_run_id`: the task run ID of interest Raises: prefect.exceptions.ObjectNotFound: If request returns 404 httpx.RequestError: If requests fails #### `delete_variable_by_name` ```python delete_variable_by_name(self, name: str) -> None ``` Deletes a variable by name. #### `delete_work_pool` ```python delete_work_pool(self, work_pool_name: str) -> None ``` Deletes a work pool. **Args:** * `work_pool_name`: Name of the work pool to delete. #### `delete_work_queue_by_id` ```python delete_work_queue_by_id(self, id: UUID) -> None ``` Delete a work queue by its ID. **Args:** * `id`: the id of the work queue to delete **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If requests fails #### `filter_flow_run_input` ```python filter_flow_run_input(self, flow_run_id: 'UUID', key_prefix: str, limit: int, exclude_keys: 'set[str]') -> 'list[FlowRunInput]' ``` #### `find_automation` ```python find_automation(self, id_or_name: 'str | UUID') -> 'Automation | None' ``` #### `get_most_recent_block_schema_for_block_type` ```python get_most_recent_block_schema_for_block_type(self, block_type_id: 'UUID') -> 'BlockSchema | None' ``` Fetches the most recent block schema for a specified block type ID. **Args:** * `block_type_id`: The ID of the block type. **Raises:** * `httpx.RequestError`: If the request fails for any reason. **Returns:** * The most recent block schema or None. 
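
Example: several of the `delete_*` methods above raise `prefect.exceptions.ObjectNotFound` when the API returns a 404, so idempotent cleanup code typically catches it, as in this sketch.

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound


async def delete_flow_run_if_present(flow_run_id: UUID) -> None:
    async with get_client() as client:
        try:
            await client.delete_flow_run(flow_run_id)
            print(f"Deleted flow run {flow_run_id}")
        except ObjectNotFound:
            # Documented behavior: a 404 from the API surfaces as ObjectNotFound.
            print(f"No flow run with id {flow_run_id}; nothing to delete")
```
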
#### `get_runs_in_work_queue` ```python get_runs_in_work_queue(self, id: UUID, limit: int = 10, scheduled_before: Optional[datetime.datetime] = None) -> list[FlowRun] ``` Read flow runs off a work queue. **Args:** * `id`: the id of the work queue to read from * `limit`: a limit on the number of runs to return * `scheduled_before`: a timestamp; only runs scheduled before this time will be returned. Defaults to now. **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * List\[FlowRun]: a list of FlowRun objects read from the queue #### `get_scheduled_flow_runs_for_deployments` ```python get_scheduled_flow_runs_for_deployments(self, deployment_ids: list[UUID], scheduled_before: 'datetime.datetime | None' = None, limit: int | None = None) -> list['FlowRun'] ``` #### `get_scheduled_flow_runs_for_work_pool` ```python get_scheduled_flow_runs_for_work_pool(self, work_pool_name: str, work_queue_names: list[str] | None = None, scheduled_before: datetime | None = None) -> list['WorkerFlowRunResponse'] ``` Retrieves scheduled flow runs for the provided set of work pool queues. **Args:** * `work_pool_name`: The name of the work pool that the work pool queues are associated with. * `work_queue_names`: The names of the work pool queues from which to get scheduled flow runs. * `scheduled_before`: Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned. **Returns:** * A list of worker flow run responses containing information about the * retrieved flow runs. #### `hello` ```python hello(self) -> httpx.Response ``` Send a GET request to /hello for testing purposes. #### `increment_concurrency_slots` ```python increment_concurrency_slots(self, names: list[str], slots: int, mode: Literal['concurrency', 'rate_limit']) -> 'Response' ``` Increment concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. #### `increment_concurrency_slots_with_lease` ```python increment_concurrency_slots_with_lease(self, names: list[str], slots: int, mode: Literal['concurrency', 'rate_limit'], lease_duration: float, holder: 'ConcurrencyLeaseHolder | None' = None) -> 'Response' ``` Increment concurrency slots for the specified limits with a lease. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. * `lease_duration`: The duration of the lease in seconds. * `holder`: Optional holder information for tracking who holds the slots. #### `increment_v1_concurrency_slots` ```python increment_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID') -> 'Response' ``` Increment concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names for which to increment limits. * `task_run_id`: The task run ID incrementing the limits. #### `loop` ```python loop(self) -> asyncio.AbstractEventLoop | None ``` #### `match_work_queues` ```python match_work_queues(self, prefixes: list[str], work_pool_name: Optional[str] = None) -> list[WorkQueue] ``` Query the Prefect API for work queues with names with a specific prefix. 
**Args:** * `prefixes`: a list of strings used to match work queue name prefixes * `work_pool_name`: an optional work pool name to scope the query to **Returns:** * a list of WorkQueue model representations of the work queues #### `pause_automation` ```python pause_automation(self, automation_id: 'UUID') -> None ``` #### `pause_deployment` ```python pause_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Pause a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `raise_for_api_version_mismatch` ```python raise_for_api_version_mismatch(self) -> None ``` #### `read_artifacts` ```python read_artifacts(self, **kwargs: Unpack['ArtifactReadParams']) -> list['Artifact'] ``` #### `read_automation` ```python read_automation(self, automation_id: 'UUID | str') -> 'Automation | None' ``` #### `read_automations` ```python read_automations(self) -> list['Automation'] ``` #### `read_automations_by_name` ```python read_automations_by_name(self, name: str) -> list['Automation'] ``` Query the Prefect API for an automation by name. Only automations matching the provided name will be returned. **Args:** * `name`: the name of the automation to query **Returns:** * a list of Automation model representations of the automations #### `read_block_document` ```python read_block_document(self, block_document_id: 'UUID', include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified ID. **Args:** * `block_document_id`: the block document id * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. #### `read_block_document_by_name` ```python read_block_document_by_name(self, name: str, block_type_slug: str, include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified name that corresponds to a specific block type name. **Args:** * `name`: The block document name. * `block_type_slug`: The block type slug. * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. #### `read_block_documents` ```python read_block_documents(self, block_schema_type: str | None = None, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Read block documents **Args:** * `block_schema_type`: an optional block schema type * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. 
Note that any business logic on the Block may not work if this is `False`. **Returns:** * A list of block documents #### `read_block_documents_by_type` ```python read_block_documents_by_type(self, block_type_slug: str, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Retrieve block documents by block type slug. **Args:** * `block_type_slug`: The block type slug. * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values **Returns:** * A list of block documents #### `read_block_schema_by_checksum` ```python read_block_schema_by_checksum(self, checksum: str, version: str | None = None) -> 'BlockSchema' ``` Look up a block schema checksum #### `read_block_schemas` ```python read_block_schemas(self) -> 'list[BlockSchema]' ``` Read all block schemas Raises: httpx.RequestError: if a valid block schema was not found **Returns:** * A BlockSchema. #### `read_block_type_by_slug` ```python read_block_type_by_slug(self, slug: str) -> 'BlockType' ``` Read a block type by its slug. #### `read_block_types` ```python read_block_types(self) -> 'list[BlockType]' ``` Read all block types Raises: httpx.RequestError: if the block types were not found **Returns:** * List of BlockTypes. #### `read_concurrency_limit_by_tag` ```python read_concurrency_limit_by_tag(self, tag: str) -> 'ConcurrencyLimit' ``` Read the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: if the concurrency limit was not created for any reason **Returns:** * the concurrency limit set on a specific tag #### `read_concurrency_limits` ```python read_concurrency_limits(self, limit: int, offset: int) -> list['ConcurrencyLimit'] ``` Lists concurrency limits set on task run tags. **Args:** * `limit`: the maximum number of concurrency limits returned * `offset`: the concurrency limit query offset **Returns:** * a list of concurrency limits #### `read_deployment` ```python read_deployment(self, deployment_id: Union[UUID, str]) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by id. **Args:** * `deployment_id`: the deployment ID of interest **Returns:** * a Deployment model representation of the deployment #### `read_deployment_by_name` ```python read_deployment_by_name(self, name: str) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by name. **Args:** * `name`: A deployed flow's name: \/\ **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails **Returns:** * a Deployment model representation of the deployment #### `read_deployment_schedules` ```python read_deployment_schedules(self, deployment_id: UUID) -> list['DeploymentSchedule'] ``` Query the Prefect API for a deployment's schedules. **Args:** * `deployment_id`: the deployment ID **Returns:** * a list of DeploymentSchedule model representations of the deployment schedules #### `read_deployments` ```python read_deployments(self) -> list['DeploymentResponse'] ``` Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned. 
**Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `limit`: a limit for the deployment query * `offset`: an offset for the deployment query **Returns:** * a list of Deployment model representations of the deployments #### `read_flow` ```python read_flow(self, flow_id: 'UUID') -> 'Flow' ``` Query the Prefect API for a flow by id. **Args:** * `flow_id`: the flow ID of interest **Returns:** * a Flow model representation of the flow #### `read_flow_by_name` ```python read_flow_by_name(self, flow_name: str) -> 'Flow' ``` Query the Prefect API for a flow by name. **Args:** * `flow_name`: the name of a flow **Returns:** * a fully hydrated Flow model #### `read_flow_run` ```python read_flow_run(self, flow_run_id: 'UUID') -> 'FlowRun' ``` Query the Prefect API for a flow run by id. **Args:** * `flow_run_id`: the flow run ID of interest **Returns:** * a Flow Run model representation of the flow run #### `read_flow_run_input` ```python read_flow_run_input(self, flow_run_id: 'UUID', key: str) -> str ``` Reads a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. #### `read_flow_run_states` ```python read_flow_run_states(self, flow_run_id: 'UUID') -> 'list[State]' ``` Query for the states of a flow run **Args:** * `flow_run_id`: the id of the flow run **Returns:** * a list of State model representations of the flow run states #### `read_flow_runs` ```python read_flow_runs(self) -> 'list[FlowRun]' ``` Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flow runs * `limit`: limit for the flow run query * `offset`: offset for the flow run query **Returns:** * a list of Flow Run model representations of the flow runs #### `read_flows` ```python read_flows(self) -> list['Flow'] ``` Query the Prefect API for flows. Only flows matching all criteria will be returned. 
**Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flows * `limit`: limit for the flow query * `offset`: offset for the flow query **Returns:** * a list of Flow model representations of the flows #### `read_global_concurrency_limit_by_name` ```python read_global_concurrency_limit_by_name(self, name: str) -> 'GlobalConcurrencyLimitResponse' ``` #### `read_global_concurrency_limits` ```python read_global_concurrency_limits(self, limit: int = 10, offset: int = 0) -> list['GlobalConcurrencyLimitResponse'] ``` #### `read_latest_artifacts` ```python read_latest_artifacts(self, **kwargs: Unpack['ArtifactCollectionReadParams']) -> list['ArtifactCollection'] ``` #### `read_logs` ```python read_logs(self, log_filter: 'LogFilter | None' = None, limit: int | None = None, offset: int | None = None, sort: 'LogSort | None' = None) -> list[Log] ``` Read flow and task run logs. #### `read_resource_related_automations` ```python read_resource_related_automations(self, resource_id: str) -> list['Automation'] ``` #### `read_task_run` ```python read_task_run(self, task_run_id: UUID) -> TaskRun ``` Query the Prefect API for a task run by id. **Args:** * `task_run_id`: the task run ID of interest **Returns:** * a Task Run model representation of the task run #### `read_task_run_states` ```python read_task_run_states(self, task_run_id: UUID) -> list[prefect.states.State] ``` Query for the states of a task run **Args:** * `task_run_id`: the id of the task run **Returns:** * a list of State model representations of the task run states #### `read_task_runs` ```python read_task_runs(self) -> list[TaskRun] ``` Query the Prefect API for task runs. Only task runs matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `sort`: sort criteria for the task runs * `limit`: a limit for the task run query * `offset`: an offset for the task run query **Returns:** * a list of Task Run model representations of the task runs #### `read_variable_by_name` ```python read_variable_by_name(self, name: str) -> 'Variable | None' ``` Reads a variable by name. Returns None if no variable is found. #### `read_variables` ```python read_variables(self, limit: int | None = None) -> list['Variable'] ``` Reads all variables. #### `read_work_pool` ```python read_work_pool(self, work_pool_name: str) -> 'WorkPool' ``` Reads information for a given work pool **Args:** * `work_pool_name`: The name of the work pool to for which to get information. **Returns:** * Information about the requested work pool. #### `read_work_pools` ```python read_work_pools(self, limit: int | None = None, offset: int = 0, work_pool_filter: 'WorkPoolFilter | None' = None) -> list['WorkPool'] ``` Reads work pools. **Args:** * `limit`: Limit for the work pool query. * `offset`: Offset for the work pool query. * `work_pool_filter`: Criteria by which to filter work pools. **Returns:** * A list of work pools. #### `read_work_queue` ```python read_work_queue(self, id: UUID) -> WorkQueue ``` Read a work queue. 
**Args:** * `id`: the id of the work queue to load **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * an instantiated WorkQueue object #### `read_work_queue_by_name` ```python read_work_queue_by_name(self, name: str, work_pool_name: Optional[str] = None) -> WorkQueue ``` Read a work queue by name. **Args:** * `name`: a unique name for the work queue * `work_pool_name`: the name of the work pool the queue belongs to. **Raises:** * `prefect.exceptions.ObjectNotFound`: if no work queue is found * `httpx.HTTPStatusError`: other status errors **Returns:** * a work queue API object #### `read_work_queue_status` ```python read_work_queue_status(self, id: UUID) -> WorkQueueStatusDetail ``` Read a work queue status. **Args:** * `id`: the id of the work queue to load **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * an instantiated WorkQueueStatus object #### `read_work_queues` ```python read_work_queues(self, work_pool_name: Optional[str] = None, work_queue_filter: Optional[WorkQueueFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> list[WorkQueue] ``` Retrieves queues for a work pool. **Args:** * `work_pool_name`: Name of the work pool for which to get queues. * `work_queue_filter`: Criteria by which to filter queues. * `limit`: Limit for the queue query. * `offset`: Limit for the queue query. **Returns:** * List of queues for the specified work pool. #### `read_worker_metadata` ```python read_worker_metadata(self) -> dict[str, Any] ``` Reads worker metadata stored in Prefect collection registry. #### `read_workers_for_work_pool` ```python read_workers_for_work_pool(self, work_pool_name: str, worker_filter: 'WorkerFilter | None' = None, offset: int | None = None, limit: int | None = None) -> list['Worker'] ``` Reads workers for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get member workers. * `worker_filter`: Criteria by which to filter workers. * `limit`: Limit for the worker query. * `offset`: Limit for the worker query. #### `release_concurrency_slots` ```python release_concurrency_slots(self, names: list[str], slots: int, occupancy_seconds: float) -> 'Response' ``` Release concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to release slots. * `slots`: The number of concurrency slots to release. * `occupancy_seconds`: The duration in seconds that the slots were occupied. **Returns:** * "Response": The HTTP response from the server. #### `release_concurrency_slots_with_lease` ```python release_concurrency_slots_with_lease(self, lease_id: 'UUID') -> 'Response' ``` Release concurrency slots for the specified lease. **Args:** * `lease_id`: The ID of the lease corresponding to the concurrency limits to release. #### `renew_concurrency_lease` ```python renew_concurrency_lease(self, lease_id: 'UUID', lease_duration: float) -> 'Response' ``` Renew a concurrency lease. **Args:** * `lease_id`: The ID of the lease to renew. * `lease_duration`: The new lease duration in seconds. #### `reset_concurrency_limit_by_tag` ```python reset_concurrency_limit_by_tag(self, tag: str, slot_override: list['UUID | str'] | None = None) -> None ``` Resets the concurrency limit slots set on a specific tag. 
**Args:** * `tag`: a tag the concurrency limit is applied to * `slot_override`: a list of task run IDs that are currently using a concurrency slot, please check that any task run IDs included in `slot_override` are currently running, otherwise those concurrency slots will never be released. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `resume_automation` ```python resume_automation(self, automation_id: 'UUID') -> None ``` #### `resume_deployment` ```python resume_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Resume (unpause) a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `resume_flow_run` ```python resume_flow_run(self, flow_run_id: 'UUID', run_input: dict[str, Any] | None = None) -> 'OrchestrationResult[Any]' ``` Resumes a paused flow run. **Args:** * `flow_run_id`: the flow run ID of interest * `run_input`: the input to resume the flow run with **Returns:** * an OrchestrationResult model representation of state orchestration output #### `send_worker_heartbeat` ```python send_worker_heartbeat(self, work_pool_name: str, worker_name: str, heartbeat_interval_seconds: float | None = None, get_worker_id: bool = False, worker_metadata: 'WorkerMetadata | None' = None) -> 'UUID | None' ``` Sends a worker heartbeat for a given work pool. **Args:** * `work_pool_name`: The name of the work pool to heartbeat against. * `worker_name`: The name of the worker sending the heartbeat. * `return_id`: Whether to return the worker ID. Note: will return `None` if the connected server does not support returning worker IDs, even if `return_id` is `True`. * `worker_metadata`: Metadata about the worker to send to the server. #### `set_deployment_paused_state` ```python set_deployment_paused_state(self, deployment_id: UUID, paused: bool) -> None ``` DEPRECATED: Use pause\_deployment or resume\_deployment instead. Set the paused state of a deployment. **Args:** * `deployment_id`: the deployment ID to update * `paused`: whether the deployment should be paused #### `set_flow_run_name` ```python set_flow_run_name(self, flow_run_id: 'UUID', name: str) -> httpx.Response ``` #### `set_flow_run_state` ```python set_flow_run_state(self, flow_run_id: 'UUID | str', state: 'State[T]', force: bool = False) -> 'OrchestrationResult[T]' ``` Set the state of a flow run. **Args:** * `flow_run_id`: the id of the flow run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `set_task_run_name` ```python set_task_run_name(self, task_run_id: UUID, name: str) -> httpx.Response ``` #### `set_task_run_state` ```python set_task_run_state(self, task_run_id: UUID, state: prefect.states.State[T], force: bool = False) -> OrchestrationResult[T] ``` Set the state of a task run. 
**Args:** * `task_run_id`: the id of the task run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `update_artifact` ```python update_artifact(self, artifact_id: 'UUID', artifact: 'ArtifactUpdate') -> None ``` #### `update_automation` ```python update_automation(self, automation_id: 'UUID', automation: 'AutomationCore') -> None ``` Updates an automation in Prefect Cloud. #### `update_block_document` ```python update_block_document(self, block_document_id: 'UUID', block_document: 'BlockDocumentUpdate') -> None ``` Update a block document in the Prefect API. #### `update_block_type` ```python update_block_type(self, block_type_id: 'UUID', block_type: 'BlockTypeUpdate') -> None ``` Update a block document in the Prefect API. #### `update_deployment` ```python update_deployment(self, deployment_id: UUID, deployment: 'DeploymentUpdate') -> None ``` #### `update_deployment_schedule` ```python update_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID, active: bool | None = None, schedule: 'SCHEDULE_TYPES | None' = None) -> None ``` Update a deployment schedule by ID. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the deployment schedule ID of interest * `active`: whether or not the schedule should be active * `schedule`: the cron, rrule, or interval schedule this deployment schedule should use #### `update_flow_run` ```python update_flow_run(self, flow_run_id: 'UUID', flow_version: str | None = None, parameters: dict[str, Any] | None = None, name: str | None = None, tags: 'Iterable[str] | None' = None, empirical_policy: 'FlowRunPolicy | None' = None, infrastructure_pid: str | None = None, job_variables: dict[str, Any] | None = None) -> httpx.Response ``` Update a flow run's details. **Args:** * `flow_run_id`: The identifier for the flow run to update. * `flow_version`: A new version string for the flow run. * `parameters`: A dictionary of parameter values for the flow run. This will not be merged with any existing parameters. * `name`: A new name for the flow run. * `empirical_policy`: A new flow run orchestration policy. This will not be merged with any existing policy. * `tags`: An iterable of new tags for the flow run. These will not be merged with any existing tags. * `infrastructure_pid`: The id of flow run as returned by an infrastructure block. **Returns:** * an `httpx.Response` object from the PATCH request #### `update_flow_run_labels` ```python update_flow_run_labels(self, flow_run_id: 'UUID', labels: 'KeyValueLabelsField') -> None ``` Updates the labels of a flow run. #### `update_global_concurrency_limit` ```python update_global_concurrency_limit(self, name: str, concurrency_limit: 'GlobalConcurrencyLimitUpdate') -> 'Response' ``` #### `update_variable` ```python update_variable(self, variable: 'VariableUpdate') -> None ``` Updates a variable with the provided configuration. **Args:** * `variable`: Desired configuration for the updated variable. Returns: Information about the updated variable. #### `update_work_pool` ```python update_work_pool(self, work_pool_name: str, work_pool: 'WorkPoolUpdate') -> None ``` Updates a work pool. **Args:** * `work_pool_name`: Name of the work pool to update. * `work_pool`: Fields to update in the work pool. 
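
Example: a sketch combining `update_flow_run` and `set_flow_run_state` (documented above) to rename a run and force it into a `Completed` state. `flow_run_id` is a placeholder; forcing a state bypasses orchestration rules, so use it deliberately.

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.states import Completed


async def force_complete(flow_run_id: UUID) -> None:
    async with get_client() as client:
        # `name` replaces the existing flow run name; it is not merged.
        await client.update_flow_run(flow_run_id, name="manually-resolved")

        # force=True asks the API to accept the state regardless of
        # orchestration logic that might otherwise reject the transition.
        result = await client.set_flow_run_state(
            flow_run_id, state=Completed(), force=True
        )
        print(result)
```
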
#### `update_work_queue` ```python update_work_queue(self, id: UUID, **kwargs: Any) -> None ``` Update properties of a work queue. **Args:** * `id`: the ID of the work queue to update * `**kwargs`: the fields to update **Raises:** * `ValueError`: if no kwargs are provided * `prefect.exceptions.ObjectNotFound`: if request returns 404 * `httpx.RequestError`: if the request fails #### `upsert_global_concurrency_limit_by_name` ```python upsert_global_concurrency_limit_by_name(self, name: str, limit: int) -> None ``` Creates a global concurrency limit with the given name and limit if one does not already exist. If one does already exist matching the name then update it's limit if it is different. Note: This is not done atomically. ### `SyncPrefectClient` A synchronous client for interacting with the [Prefect REST API](https://docs.prefect.io/v3/api-ref/rest-api/). **Args:** * `api`: the REST API URL or FastAPI application to connect to * `api_key`: An optional API key for authentication. * `api_version`: The API version this client is compatible with. * `httpx_settings`: An optional dictionary of settings to pass to the underlying `httpx.Client` Examples: Say hello to a Prefect REST API ```python with get_client(sync_client=True) as client: response = client.hello() print(response.json()) πŸ‘‹ ``` **Methods:** #### `api_healthcheck` ```python api_healthcheck(self) -> Optional[Exception] ``` Attempts to connect to the API and returns the encountered exception if not successful. If successful, returns `None`. #### `api_url` ```python api_url(self) -> httpx.URL ``` Get the base URL for the API. #### `api_version` ```python api_version(self) -> str ``` #### `apply_slas_for_deployment` ```python apply_slas_for_deployment(self, deployment_id: 'UUID', slas: 'list[SlaTypes]') -> 'SlaMergeResponse' ``` Applies service level agreements for a deployment. Performs matching by SLA name. If a SLA with the same name already exists, it will be updated. If a SLA with the same name does not exist, it will be created. Existing SLAs that are not in the list will be deleted. Args: deployment\_id: The ID of the deployment to update SLAs for slas: List of SLAs to associate with the deployment Raises: httpx.RequestError: if the SLAs were not updated for any reason Returns: SlaMergeResponse: The response from the backend, containing the names of the created, updated, and deleted SLAs #### `client_version` ```python client_version(self) -> str ``` #### `count_flow_runs` ```python count_flow_runs(self) -> int ``` Returns the count of flow runs matching all criteria for flow runs. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues **Returns:** * count of flow runs #### `create_artifact` ```python create_artifact(self, artifact: 'ArtifactCreate') -> 'Artifact' ``` #### `create_automation` ```python create_automation(self, automation: 'AutomationCore') -> 'UUID' ``` Creates an automation in Prefect Cloud. #### `create_block_document` ```python create_block_document(self, block_document: 'BlockDocument | BlockDocumentCreate', include_secrets: bool = True) -> 'BlockDocument' ``` Create a block document in the Prefect API. This data is used to configure a corresponding Block. 
**Args:** * `include_secrets`: whether to include secret values on the stored Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. Note Blocks may not work as expected if this is set to `False`. #### `create_block_schema` ```python create_block_schema(self, block_schema: 'BlockSchemaCreate') -> 'BlockSchema' ``` Create a block schema in the Prefect API. #### `create_block_type` ```python create_block_type(self, block_type: 'BlockTypeCreate') -> 'BlockType' ``` Create a block type in the Prefect API. #### `create_concurrency_limit` ```python create_concurrency_limit(self, tag: str, concurrency_limit: int) -> 'UUID' ``` Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks. **Args:** * `tag`: a tag the concurrency limit is applied to * `concurrency_limit`: the maximum number of concurrent task runs for a given tag **Raises:** * `httpx.RequestError`: if the concurrency limit was not created for any reason **Returns:** * the ID of the concurrency limit in the backend #### `create_deployment` ```python create_deployment(self, flow_id: UUID, name: str, version: str | None = None, version_info: 'VersionInfo | None' = None, schedules: list['DeploymentScheduleCreate'] | None = None, concurrency_limit: int | None = None, concurrency_options: 'ConcurrencyOptions | None' = None, parameters: dict[str, Any] | None = None, description: str | None = None, work_queue_name: str | None = None, work_pool_name: str | None = None, tags: list[str] | None = None, storage_document_id: UUID | None = None, path: str | None = None, entrypoint: str | None = None, infrastructure_document_id: UUID | None = None, parameter_openapi_schema: dict[str, Any] | None = None, paused: bool | None = None, pull_steps: list[dict[str, Any]] | None = None, enforce_parameter_schema: bool | None = None, job_variables: dict[str, Any] | None = None, branch: str | None = None, base: UUID | None = None, root: UUID | None = None) -> UUID ``` Create a deployment. **Args:** * `flow_id`: the flow ID to create a deployment for * `name`: the name of the deployment * `version`: an optional version string for the deployment * `tags`: an optional list of tags to apply to the deployment * `storage_document_id`: an reference to the storage block document used for the deployed flow * `infrastructure_document_id`: an reference to the infrastructure block document to use for this deployment * `job_variables`: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'`. This argument was previously named `infra_overrides`. Both arguments are supported for backwards compatibility. **Raises:** * `RequestError`: if the deployment was not created for any reason **Returns:** * the ID of the deployment in the backend #### `create_deployment_branch` ```python create_deployment_branch(self, deployment_id: UUID, branch: str, options: 'DeploymentBranchingOptions | None' = None, overrides: 'DeploymentUpdate | None' = None) -> UUID ``` #### `create_deployment_schedules` ```python create_deployment_schedules(self, deployment_id: UUID, schedules: list[tuple['SCHEDULE_TYPES', bool]]) -> list['DeploymentSchedule'] ``` Create deployment schedules. **Args:** * `deployment_id`: the deployment ID * `schedules`: a list of tuples containing the schedule to create and whether or not it should be active. 
**Raises:** * `RequestError`: if the schedules were not created for any reason **Returns:** * the list of schedules created in the backend #### `create_flow` ```python create_flow(self, flow: 'FlowObject[Any, Any]') -> 'UUID' ``` Create a flow in the Prefect API. **Args:** * `flow`: a `Flow` object **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_from_name` ```python create_flow_from_name(self, flow_name: str) -> 'UUID' ``` Create a flow in the Prefect API. **Args:** * `flow_name`: the name of the new flow **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_run` ```python create_flow_run(self, flow: 'FlowObject[Any, R]', name: str | None = None, parameters: dict[str, Any] | None = None, context: dict[str, Any] | None = None, tags: 'Iterable[str] | None' = None, parent_task_run_id: 'UUID | None' = None, state: 'State[R] | None' = None, work_pool_name: str | None = None, work_queue_name: str | None = None, job_variables: dict[str, Any] | None = None) -> 'FlowRun' ``` Create a flow run for a flow. **Args:** * `flow`: The flow model to create the flow run for * `name`: An optional name for the flow run * `parameters`: Parameter overrides for this flow run. * `context`: Optional run context data * `tags`: a list of tags to apply to this flow run * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `state`: The initial state for the run. If not provided, defaults to `Pending`. * `work_pool_name`: The name of the work pool to run the flow run in. * `work_queue_name`: The name of the work queue to place the flow run in. * `job_variables`: The job variables to use when setting up flow run infrastructure. **Raises:** * `httpx.RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_from_deployment` ```python create_flow_run_from_deployment(self, deployment_id: UUID) -> 'FlowRun' ``` Create a flow run for a deployment. **Args:** * `deployment_id`: The deployment ID to create the flow run from * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults * `context`: Optional run context data * `state`: The initial state for the run. If not provided, defaults to `Scheduled` for now. Should always be a `Scheduled` type. * `name`: An optional name for the flow run. If not provided, the server will generate a name. * `tags`: An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags. * `idempotency_key`: Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one. * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `work_queue_name`: An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool. * `job_variables`: Optional variables that will be supplied to the flow run job. 
**Raises:** * `RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_input` ```python create_flow_run_input(self, flow_run_id: 'UUID', key: str, value: str, sender: str | None = None) -> None ``` Creates a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. * `value`: The input value. * `sender`: The sender of the input. #### `create_global_concurrency_limit` ```python create_global_concurrency_limit(self, concurrency_limit: 'GlobalConcurrencyLimitCreate') -> 'UUID' ``` #### `create_logs` ```python create_logs(self, logs: Iterable[Union['LogCreate', dict[str, Any]]]) -> None ``` Create logs for a flow or task run. #### `create_task_run` ```python create_task_run(self, task: 'TaskObject[P, R]', flow_run_id: Optional[UUID], dynamic_key: str, id: Optional[UUID] = None, name: Optional[str] = None, extra_tags: Optional[Iterable[str]] = None, state: Optional[prefect.states.State[R]] = None, task_inputs: Optional[dict[str, list[Union[TaskRunResult, FlowRunResult, Parameter, Constant]]]] = None) -> TaskRun ``` Create a task run. **Args:** * `task`: The Task to run * `flow_run_id`: The flow run id with which to associate the task run * `dynamic_key`: A key unique to this particular run of a Task within the flow * `id`: An optional ID for the task run. If not provided, one will be generated server-side. * `name`: An optional name for the task run * `extra_tags`: an optional list of extra tags to apply to the task run in addition to `task.tags` * `state`: The initial state for the run. If not provided, defaults to `Pending` for now. Should always be a `Scheduled` type. * `task_inputs`: the set of inputs passed to the task **Returns:** * The created task run. #### `create_variable` ```python create_variable(self, variable: 'VariableCreate') -> 'Variable' ``` Creates a variable with the provided configuration. **Args:** * `variable`: Desired configuration for the new variable. **Returns:** * Information about the newly created variable. #### `create_work_pool` ```python create_work_pool(self, work_pool: 'WorkPoolCreate', overwrite: bool = False) -> 'WorkPool' ``` Creates a work pool with the provided configuration. **Args:** * `work_pool`: Desired configuration for the new work pool. **Returns:** * Information about the newly created work pool. #### `decrement_v1_concurrency_slots` ```python decrement_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID', occupancy_seconds: float) -> 'Response' ``` Decrement concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names to decrement. * `task_run_id`: The task run ID that incremented the limits. * `occupancy_seconds`: The duration in seconds that the limits were held. **Returns:** * `Response`: The HTTP response from the server. #### `delete_artifact` ```python delete_artifact(self, artifact_id: 'UUID') -> None ``` #### `delete_automation` ```python delete_automation(self, automation_id: 'UUID') -> None ``` #### `delete_block_document` ```python delete_block_document(self, block_document_id: 'UUID') -> None ``` Delete a block document. #### `delete_block_type` ```python delete_block_type(self, block_type_id: 'UUID') -> None ``` Delete a block type. #### `delete_concurrency_limit_by_tag` ```python delete_concurrency_limit_by_tag(self, tag: str) -> None ``` Delete the concurrency limit set on a specific tag.
**Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_deployment` ```python delete_deployment(self, deployment_id: UUID) -> None ``` Delete a deployment by id. **Args:** * `deployment_id`: The deployment id of interest. **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If the request fails #### `delete_deployment_schedule` ```python delete_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID) -> None ``` Delete a deployment schedule. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the ID of the deployment schedule to delete. **Raises:** * `RequestError`: if the schedules were not deleted for any reason #### `delete_flow` ```python delete_flow(self, flow_id: 'UUID') -> None ``` Delete a flow by UUID. **Args:** * `flow_id`: ID of the flow to be deleted **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If the request fails #### `delete_flow_run` ```python delete_flow_run(self, flow_run_id: 'UUID') -> None ``` Delete a flow run by UUID. **Args:** * `flow_run_id`: The flow run UUID of interest. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If the request fails #### `delete_flow_run_input` ```python delete_flow_run_input(self, flow_run_id: 'UUID', key: str) -> None ``` Deletes a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. #### `delete_global_concurrency_limit_by_name` ```python delete_global_concurrency_limit_by_name(self, name: str) -> 'Response' ``` #### `delete_resource_owned_automations` ```python delete_resource_owned_automations(self, resource_id: str) -> None ``` #### `delete_variable_by_name` ```python delete_variable_by_name(self, name: str) -> None ``` Deletes a variable by name. #### `delete_work_pool` ```python delete_work_pool(self, work_pool_name: str) -> None ``` Deletes a work pool. **Args:** * `work_pool_name`: Name of the work pool to delete. #### `filter_flow_run_input` ```python filter_flow_run_input(self, flow_run_id: 'UUID', key_prefix: str, limit: int, exclude_keys: 'set[str]') -> 'list[FlowRunInput]' ``` #### `find_automation` ```python find_automation(self, id_or_name: 'str | UUID') -> 'Automation | None' ``` #### `get_most_recent_block_schema_for_block_type` ```python get_most_recent_block_schema_for_block_type(self, block_type_id: 'UUID') -> 'BlockSchema | None' ``` Fetches the most recent block schema for a specified block type ID. **Args:** * `block_type_id`: The ID of the block type. **Raises:** * `httpx.RequestError`: If the request fails for any reason. **Returns:** * The most recent block schema or None. #### `get_scheduled_flow_runs_for_deployments` ```python get_scheduled_flow_runs_for_deployments(self, deployment_ids: list[UUID], scheduled_before: 'datetime.datetime | None' = None, limit: int | None = None) -> list['FlowRunResponse'] ``` #### `get_scheduled_flow_runs_for_work_pool` ```python get_scheduled_flow_runs_for_work_pool(self, work_pool_name: str, work_queue_names: list[str] | None = None, scheduled_before: datetime | None = None) -> list['WorkerFlowRunResponse'] ``` Retrieves scheduled flow runs for the provided set of work pool queues. **Args:** * `work_pool_name`: The name of the work pool that the work pool queues are associated with. * `work_queue_names`: The names of the work pool queues from which to get scheduled flow runs. * `scheduled_before`: Datetime used to filter returned flow runs.
Flow runs scheduled for after the given datetime will not be returned. **Returns:** * A list of worker flow run responses containing information about the retrieved flow runs. #### `hello` ```python hello(self) -> httpx.Response ``` Send a GET request to /hello for testing purposes. #### `increment_concurrency_slots` ```python increment_concurrency_slots(self, names: list[str], slots: int, mode: str) -> 'Response' ``` Increment concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. #### `increment_concurrency_slots_with_lease` ```python increment_concurrency_slots_with_lease(self, names: list[str], slots: int, mode: Literal['concurrency', 'rate_limit'], lease_duration: float, holder: 'ConcurrencyLeaseHolder | None' = None) -> 'Response' ``` Increment concurrency slots for the specified limits with a lease. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. * `lease_duration`: The duration of the lease in seconds. * `holder`: Optional holder information for tracking who holds the slots. #### `increment_v1_concurrency_slots` ```python increment_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID') -> 'Response' ``` Increment concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names for which to increment limits. * `task_run_id`: The task run ID incrementing the limits. #### `pause_automation` ```python pause_automation(self, automation_id: 'UUID') -> None ``` #### `pause_deployment` ```python pause_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Pause a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `raise_for_api_version_mismatch` ```python raise_for_api_version_mismatch(self) -> None ``` #### `read_artifacts` ```python read_artifacts(self, **kwargs: Unpack['ArtifactReadParams']) -> list['Artifact'] ``` #### `read_automation` ```python read_automation(self, automation_id: 'UUID | str') -> 'Automation | None' ``` #### `read_automations` ```python read_automations(self) -> list['Automation'] ``` #### `read_automations_by_name` ```python read_automations_by_name(self, name: str) -> list['Automation'] ``` Query the Prefect API for an automation by name. Only automations matching the provided name will be returned. **Args:** * `name`: the name of the automation to query **Returns:** * a list of Automation model representations of the automations #### `read_block_document` ```python read_block_document(self, block_document_id: 'UUID', include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified ID. **Args:** * `block_document_id`: the block document id * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None.
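To make the read pattern above concrete, here is a minimal sketch that fetches a single block document with secrets redacted. It assumes `get_client` is imported from `prefect.client.orchestration` and uses a hypothetical placeholder UUID; substitute the ID of a real block document.

```python
from uuid import UUID

from prefect.client.orchestration import get_client

# Hypothetical placeholder; use the ID of a real block document.
block_document_id = UUID("00000000-0000-0000-0000-000000000000")

with get_client(sync_client=True) as client:
    # Fetch the block document with secret values obfuscated.
    doc = client.read_block_document(block_document_id, include_secrets=False)
    print(doc.name, doc.block_type_id)
```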
#### `read_block_document_by_name` ```python read_block_document_by_name(self, name: str, block_type_slug: str, include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified name that corresponds to a specific block type name. **Args:** * `name`: The block document name. * `block_type_slug`: The block type slug. * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. #### `read_block_documents` ```python read_block_documents(self, block_schema_type: str | None = None, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Read block documents. **Args:** * `block_schema_type`: an optional block schema type * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Returns:** * A list of block documents #### `read_block_documents_by_type` ```python read_block_documents_by_type(self, block_type_slug: str, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Retrieve block documents by block type slug. **Args:** * `block_type_slug`: The block type slug. * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values **Returns:** * A list of block documents #### `read_block_schema_by_checksum` ```python read_block_schema_by_checksum(self, checksum: str, version: str | None = None) -> 'BlockSchema' ``` Look up a block schema by checksum. #### `read_block_schemas` ```python read_block_schemas(self) -> 'list[BlockSchema]' ``` Read all block schemas. **Raises:** * `httpx.RequestError`: if a valid block schema was not found **Returns:** * A list of BlockSchemas. #### `read_block_type_by_slug` ```python read_block_type_by_slug(self, slug: str) -> 'BlockType' ``` Read a block type by its slug. #### `read_block_types` ```python read_block_types(self) -> 'list[BlockType]' ``` Read all block types. **Raises:** * `httpx.RequestError`: if the block types were not found **Returns:** * List of BlockTypes. #### `read_concurrency_limit_by_tag` ```python read_concurrency_limit_by_tag(self, tag: str) -> 'ConcurrencyLimit' ``` Read the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: if the concurrency limit could not be read for any reason **Returns:** * the concurrency limit set on a specific tag #### `read_concurrency_limits` ```python read_concurrency_limits(self, limit: int, offset: int) -> list['ConcurrencyLimit'] ``` Lists concurrency limits set on task run tags.
**Args:** * `limit`: the maximum number of concurrency limits returned * `offset`: the concurrency limit query offset **Returns:** * a list of concurrency limits #### `read_deployment` ```python read_deployment(self, deployment_id: Union[UUID, str]) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by id. **Args:** * `deployment_id`: the deployment ID of interest **Returns:** * a Deployment model representation of the deployment #### `read_deployment_by_name` ```python read_deployment_by_name(self, name: str) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by name. **Args:** * `name`: A deployed flow's name in the form `<FLOW_NAME>/<DEPLOYMENT_NAME>` **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails **Returns:** * a Deployment model representation of the deployment #### `read_deployment_schedules` ```python read_deployment_schedules(self, deployment_id: UUID) -> list['DeploymentSchedule'] ``` Query the Prefect API for a deployment's schedules. **Args:** * `deployment_id`: the deployment ID **Returns:** * a list of DeploymentSchedule model representations of the deployment schedules #### `read_deployments` ```python read_deployments(self) -> list['DeploymentResponse'] ``` Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `limit`: a limit for the deployment query * `offset`: an offset for the deployment query **Returns:** * a list of Deployment model representations of the deployments #### `read_flow` ```python read_flow(self, flow_id: 'UUID') -> 'Flow' ``` Query the Prefect API for a flow by id. **Args:** * `flow_id`: the flow ID of interest **Returns:** * a Flow model representation of the flow #### `read_flow_by_name` ```python read_flow_by_name(self, flow_name: str) -> 'Flow' ``` Query the Prefect API for a flow by name. **Args:** * `flow_name`: the name of a flow **Returns:** * a fully hydrated Flow model #### `read_flow_run` ```python read_flow_run(self, flow_run_id: 'UUID') -> 'FlowRun' ``` Query the Prefect API for a flow run by id. **Args:** * `flow_run_id`: the flow run ID of interest **Returns:** * a Flow Run model representation of the flow run #### `read_flow_run_input` ```python read_flow_run_input(self, flow_run_id: 'UUID', key: str) -> str ``` Reads a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. #### `read_flow_run_states` ```python read_flow_run_states(self, flow_run_id: 'UUID') -> 'list[State]' ``` Query for the states of a flow run **Args:** * `flow_run_id`: the id of the flow run **Returns:** * a list of State model representations of the flow run states #### `read_flow_runs` ```python read_flow_runs(self) -> 'list[FlowRun]' ``` Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.
**Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flow runs * `limit`: limit for the flow run query * `offset`: offset for the flow run query **Returns:** * a list of Flow Run model representations of the flow runs #### `read_flows` ```python read_flows(self) -> list['Flow'] ``` Query the Prefect API for flows. Only flows matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flows * `limit`: limit for the flow query * `offset`: offset for the flow query **Returns:** * a list of Flow model representations of the flows #### `read_global_concurrency_limit_by_name` ```python read_global_concurrency_limit_by_name(self, name: str) -> 'GlobalConcurrencyLimitResponse' ``` #### `read_global_concurrency_limits` ```python read_global_concurrency_limits(self, limit: int = 10, offset: int = 0) -> list['GlobalConcurrencyLimitResponse'] ``` #### `read_latest_artifacts` ```python read_latest_artifacts(self, **kwargs: Unpack['ArtifactCollectionReadParams']) -> list['ArtifactCollection'] ``` #### `read_logs` ```python read_logs(self, log_filter: 'LogFilter | None' = None, limit: int | None = None, offset: int | None = None, sort: 'LogSort | None' = None) -> list['Log'] ``` Read flow and task run logs. #### `read_resource_related_automations` ```python read_resource_related_automations(self, resource_id: str) -> list['Automation'] ``` #### `read_task_run` ```python read_task_run(self, task_run_id: UUID) -> TaskRun ``` Query the Prefect API for a task run by id. **Args:** * `task_run_id`: the task run ID of interest **Returns:** * a Task Run model representation of the task run #### `read_task_run_states` ```python read_task_run_states(self, task_run_id: UUID) -> list[prefect.states.State] ``` Query for the states of a task run **Args:** * `task_run_id`: the id of the task run **Returns:** * a list of State model representations of the task run states #### `read_task_runs` ```python read_task_runs(self) -> list[TaskRun] ``` Query the Prefect API for task runs. Only task runs matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `sort`: sort criteria for the task runs * `limit`: a limit for the task run query * `offset`: an offset for the task run query **Returns:** * a list of Task Run model representations of the task runs #### `read_variable_by_name` ```python read_variable_by_name(self, name: str) -> 'Variable | None' ``` Reads a variable by name. Returns None if no variable is found. #### `read_variables` ```python read_variables(self, limit: int | None = None) -> list['Variable'] ``` Reads all variables. 
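The read methods above compose naturally with the create methods documented earlier. Below is a minimal sketch, assuming a deployment named `my-flow/my-deployment` exists on the connected server; the name is a hypothetical example.

```python
from prefect.client.orchestration import get_client

with get_client(sync_client=True) as client:
    # Look up the deployment by "<FLOW_NAME>/<DEPLOYMENT_NAME>".
    deployment = client.read_deployment_by_name("my-flow/my-deployment")

    # Schedule a run of that deployment and inspect the returned model.
    flow_run = client.create_flow_run_from_deployment(deployment_id=deployment.id)
    print(flow_run.id, flow_run.state)
```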
#### `read_work_pool` ```python read_work_pool(self, work_pool_name: str) -> 'WorkPool' ``` Reads information for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get information. **Returns:** * Information about the requested work pool. #### `read_work_pools` ```python read_work_pools(self, limit: int | None = None, offset: int = 0, work_pool_filter: 'WorkPoolFilter | None' = None) -> list['WorkPool'] ``` Reads work pools. **Args:** * `limit`: Limit for the work pool query. * `offset`: Offset for the work pool query. * `work_pool_filter`: Criteria by which to filter work pools. **Returns:** * A list of work pools. #### `read_workers_for_work_pool` ```python read_workers_for_work_pool(self, work_pool_name: str, worker_filter: 'WorkerFilter | None' = None, offset: int | None = None, limit: int | None = None) -> list['Worker'] ``` Reads workers for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get member workers. * `worker_filter`: Criteria by which to filter workers. * `limit`: Limit for the worker query. * `offset`: Offset for the worker query. #### `release_concurrency_slots` ```python release_concurrency_slots(self, names: list[str], slots: int, occupancy_seconds: float) -> 'Response' ``` Release concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to release slots. * `slots`: The number of concurrency slots to release. * `occupancy_seconds`: The duration in seconds that the slots were occupied. **Returns:** * `Response`: The HTTP response from the server. #### `release_concurrency_slots_with_lease` ```python release_concurrency_slots_with_lease(self, lease_id: 'UUID') -> 'Response' ``` Release concurrency slots for the specified lease. **Args:** * `lease_id`: The ID of the lease corresponding to the concurrency limits to release. #### `renew_concurrency_lease` ```python renew_concurrency_lease(self, lease_id: 'UUID', lease_duration: float) -> 'Response' ``` Renew a concurrency lease. **Args:** * `lease_id`: The ID of the lease to renew. * `lease_duration`: The new lease duration in seconds. #### `reset_concurrency_limit_by_tag` ```python reset_concurrency_limit_by_tag(self, tag: str, slot_override: list['UUID | str'] | None = None) -> None ``` Resets the concurrency limit slots set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to * `slot_override`: a list of task run IDs that are currently using a concurrency slot. Check that any task run IDs included in `slot_override` are currently running; otherwise those concurrency slots will never be released. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `resume_automation` ```python resume_automation(self, automation_id: 'UUID') -> None ``` #### `resume_deployment` ```python resume_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Resume (unpause) a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `resume_flow_run` ```python resume_flow_run(self, flow_run_id: 'UUID', run_input: dict[str, Any] | None = None) -> 'OrchestrationResult[Any]' ``` Resumes a paused flow run.
**Args:** * `flow_run_id`: the flow run ID of interest * `run_input`: the input to resume the flow run with **Returns:** * an OrchestrationResult model representation of state orchestration output #### `send_worker_heartbeat` ```python send_worker_heartbeat(self, work_pool_name: str, worker_name: str, heartbeat_interval_seconds: float | None = None, get_worker_id: bool = False, worker_metadata: 'WorkerMetadata | None' = None) -> 'UUID | None' ``` Sends a worker heartbeat for a given work pool. **Args:** * `work_pool_name`: The name of the work pool to heartbeat against. * `worker_name`: The name of the worker sending the heartbeat. * `get_worker_id`: Whether to return the worker ID. Note: will return `None` if the connected server does not support returning worker IDs, even if `get_worker_id` is `True`. * `worker_metadata`: Metadata about the worker to send to the server. #### `set_deployment_paused_state` ```python set_deployment_paused_state(self, deployment_id: UUID, paused: bool) -> None ``` DEPRECATED: Use pause\_deployment or resume\_deployment instead. Set the paused state of a deployment. **Args:** * `deployment_id`: the deployment ID to update * `paused`: whether the deployment should be paused #### `set_flow_run_name` ```python set_flow_run_name(self, flow_run_id: 'UUID', name: str) -> httpx.Response ``` #### `set_flow_run_state` ```python set_flow_run_state(self, flow_run_id: 'UUID | str', state: 'State[T]', force: bool = False) -> 'OrchestrationResult[T]' ``` Set the state of a flow run. **Args:** * `flow_run_id`: the id of the flow run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `set_task_run_name` ```python set_task_run_name(self, task_run_id: UUID, name: str) -> httpx.Response ``` #### `set_task_run_state` ```python set_task_run_state(self, task_run_id: UUID, state: prefect.states.State[Any], force: bool = False) -> OrchestrationResult[Any] ``` Set the state of a task run. **Args:** * `task_run_id`: the id of the task run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `update_artifact` ```python update_artifact(self, artifact_id: 'UUID', artifact: 'ArtifactUpdate') -> None ``` #### `update_automation` ```python update_automation(self, automation_id: 'UUID', automation: 'AutomationCore') -> None ``` Updates an automation in Prefect Cloud. #### `update_block_document` ```python update_block_document(self, block_document_id: 'UUID', block_document: 'BlockDocumentUpdate') -> None ``` Update a block document in the Prefect API. #### `update_block_type` ```python update_block_type(self, block_type_id: 'UUID', block_type: 'BlockTypeUpdate') -> None ``` Update a block type in the Prefect API. #### `update_deployment` ```python update_deployment(self, deployment_id: UUID, deployment: 'DeploymentUpdate') -> None ``` #### `update_deployment_schedule` ```python update_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID, active: bool | None = None, schedule: 'SCHEDULE_TYPES | None' = None) -> None ``` Update a deployment schedule by ID.
**Args:** * `deployment_id`: the deployment ID * `schedule_id`: the deployment schedule ID of interest * `active`: whether or not the schedule should be active * `schedule`: the cron, rrule, or interval schedule this deployment schedule should use #### `update_flow_run` ```python update_flow_run(self, flow_run_id: 'UUID', flow_version: str | None = None, parameters: dict[str, Any] | None = None, name: str | None = None, tags: 'Iterable[str] | None' = None, empirical_policy: 'FlowRunPolicy | None' = None, infrastructure_pid: str | None = None, job_variables: dict[str, Any] | None = None) -> httpx.Response ``` Update a flow run's details. **Args:** * `flow_run_id`: The identifier for the flow run to update. * `flow_version`: A new version string for the flow run. * `parameters`: A dictionary of parameter values for the flow run. This will not be merged with any existing parameters. * `name`: A new name for the flow run. * `empirical_policy`: A new flow run orchestration policy. This will not be merged with any existing policy. * `tags`: An iterable of new tags for the flow run. These will not be merged with any existing tags. * `infrastructure_pid`: The id of the flow run as returned by an infrastructure block. **Returns:** * an `httpx.Response` object from the PATCH request #### `update_flow_run_labels` ```python update_flow_run_labels(self, flow_run_id: 'UUID', labels: 'KeyValueLabelsField') -> None ``` Updates the labels of a flow run. #### `update_global_concurrency_limit` ```python update_global_concurrency_limit(self, name: str, concurrency_limit: 'GlobalConcurrencyLimitUpdate') -> 'Response' ``` #### `update_variable` ```python update_variable(self, variable: 'VariableUpdate') -> None ``` Updates a variable with the provided configuration. **Args:** * `variable`: Desired configuration for the updated variable. **Returns:** * Information about the updated variable. #### `update_work_pool` ```python update_work_pool(self, work_pool_name: str, work_pool: 'WorkPoolUpdate') -> None ``` Updates a work pool. **Args:** * `work_pool_name`: Name of the work pool to update. * `work_pool`: Fields to update in the work pool. #### `upsert_global_concurrency_limit_by_name` ```python upsert_global_concurrency_limit_by_name(self, name: str, limit: int) -> None ``` Creates a global concurrency limit with the given name and limit if one does not already exist. If one already exists with the given name, its limit is updated if it differs. Note: This is not done atomically.
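As a closing example, a setup script might use the client to ensure a global concurrency limit exists before scheduling work. This is a minimal sketch; the limit name is hypothetical, and, as noted above, the upsert is not performed atomically.

```python
from prefect.client.orchestration import get_client

with get_client(sync_client=True) as client:
    # Create the limit if it is missing, or update its value to 10 if it differs.
    client.upsert_global_concurrency_limit_by_name(name="database-connections", limit=10)

    # Read it back to confirm the configured value.
    gcl = client.read_global_concurrency_limit_by_name(name="database-connections")
    print(gcl.name, gcl.limit)
```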
# base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-orchestration-base # `prefect.client.orchestration.base` ## Classes ### `BaseClient` **Methods:** #### `request` ```python request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` ### `BaseAsyncClient` **Methods:** #### `request` ```python request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` # routes Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-orchestration-routes # `prefect.client.orchestration.routes` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-__init__ # `prefect.client.schemas` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-actions # `prefect.client.schemas.actions` ## Classes ### `StateCreate` Data used by the Prefect REST API to create a new state. ### `FlowCreate` Data used by the Prefect REST API to create a flow. ### `FlowUpdate` Data used by the Prefect REST API to update a flow. ### `DeploymentScheduleCreate` **Methods:** #### `from_schedule` ```python from_schedule(cls, schedule: Schedule) -> 'DeploymentScheduleCreate' ``` #### `validate_active` ```python validate_active(cls, v: Any, handler: Callable[[Any], Any]) -> bool ``` #### `validate_max_scheduled_runs` ```python validate_max_scheduled_runs(cls, v: Optional[int]) -> Optional[int] ``` ### `DeploymentScheduleUpdate` **Methods:** #### `validate_max_scheduled_runs` ```python validate_max_scheduled_runs(cls, v: Optional[int]) -> Optional[int] ``` ### `DeploymentCreate` Data used by the Prefect REST API to create a deployment. **Methods:** #### `check_valid_configuration` ```python check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the specified schema. #### `convert_to_strings` ```python convert_to_strings(cls, values: Optional[Union[str, list[str]]]) -> Union[str, list[str]] ``` #### `remove_old_fields` ```python remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `DeploymentUpdate` Data used by the Prefect REST API to update a deployment. **Methods:** #### `check_valid_configuration` ```python check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the specified schema. #### `remove_old_fields` ```python remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `DeploymentBranch` **Methods:** #### `validate_branch_length` ```python validate_branch_length(cls, v: str) -> str ``` ### `FlowRunUpdate` Data used by the Prefect REST API to update a flow run. ### `TaskRunCreate` Data used by the Prefect REST API to create a task run ### `TaskRunUpdate` Data used by the Prefect REST API to update a task run ### `FlowRunCreate` Data used by the Prefect REST API to create a flow run. ### `DeploymentFlowRunCreate` Data used by the Prefect REST API to create a flow run from a deployment. 
**Methods:** #### `convert_parameters_to_plain_data` ```python convert_parameters_to_plain_data(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `SavedSearchCreate` Data used by the Prefect REST API to create a saved search. ### `ConcurrencyLimitCreate` Data used by the Prefect REST API to create a concurrency limit. ### `ConcurrencyLimitV2Create` Data used by the Prefect REST API to create a v2 concurrency limit. ### `ConcurrencyLimitV2Update` Data used by the Prefect REST API to update a v2 concurrency limit. ### `BlockTypeCreate` Data used by the Prefect REST API to create a block type. ### `BlockTypeUpdate` Data used by the Prefect REST API to update a block type. **Methods:** #### `updatable_fields` ```python updatable_fields(cls) -> set[str] ``` ### `BlockSchemaCreate` Data used by the Prefect REST API to create a block schema. ### `BlockDocumentCreate` Data used by the Prefect REST API to create a block document. **Methods:** #### `validate_name_is_present_if_not_anonymous` ```python validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `BlockDocumentUpdate` Data used by the Prefect REST API to update a block document. ### `BlockDocumentReferenceCreate` Data used to create block document reference. ### `LogCreate` Data used by the Prefect REST API to create a log. **Methods:** #### `model_dump` ```python model_dump(self, *args: Any, **kwargs: Any) -> dict[str, Any] ``` The worker\_id field is only included in logs sent to Prefect Cloud. If it's unset, we should not include it in the log payload. ### `WorkPoolCreate` Data used by the Prefect REST API to create a work pool. ### `WorkPoolUpdate` Data used by the Prefect REST API to update a work pool. ### `WorkQueueCreate` Data used by the Prefect REST API to create a work queue. ### `WorkQueueUpdate` Data used by the Prefect REST API to update a work queue. ### `ArtifactCreate` Data used by the Prefect REST API to create an artifact. ### `ArtifactUpdate` Data used by the Prefect REST API to update an artifact. ### `VariableCreate` Data used by the Prefect REST API to create a Variable. ### `VariableUpdate` Data used by the Prefect REST API to update a Variable. ### `GlobalConcurrencyLimitCreate` Data used by the Prefect REST API to create a global concurrency limit. ### `GlobalConcurrencyLimitUpdate` Data used by the Prefect REST API to update a global concurrency limit. # filters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-filters # `prefect.client.schemas.filters` Schemas that define Prefect REST API filtering operations. ## Classes ### `Operator` Operators for combining filter criteria. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `OperatorMixin` Base model for Prefect filters that combines criteria with a user-provided operator ### `FlowFilterId` Filter by `Flow.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowFilterName` Filter by `Flow.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `FlowFilterTags` Filter by `Flow.tags`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowFilter` Filter for flows. Only flows matching all criteria will be returned. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterId` Filter by FlowRun.id. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterName` Filter by `FlowRun.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterTags` Filter by `FlowRun.tags`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterDeploymentId` Filter by `FlowRun.deployment_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterWorkQueueName` Filter by `FlowRun.work_queue_name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterStateType` Filter by `FlowRun.state_type`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterStateName` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `FlowRunFilterState` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterFlowVersion` Filter by `FlowRun.flow_version`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterStartTime` Filter by `FlowRun.start_time`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterExpectedStartTime` Filter by `FlowRun.expected_start_time`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterNextScheduledStartTime` Filter by `FlowRun.next_scheduled_start_time`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterParentFlowRunId` Filter for subflows of the given flow runs **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterParentTaskRunId` Filter by `FlowRun.parent_task_run_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterIdempotencyKey` Filter by FlowRun.idempotency\_key. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilter` Filter flow runs. Only flow runs matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterFlowRunId` Filter by `TaskRun.flow_run_id`. 
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterId` Filter by `TaskRun.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterName` Filter by `TaskRun.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterTags` Filter by `TaskRun.tags`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterStateType` Filter by `TaskRun.state_type`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterStateName` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterState` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterSubFlowRuns` Filter by `TaskRun.subflow_run`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterStartTime` Filter by `TaskRun.start_time`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilter` Filter task runs. Only task runs matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `DeploymentFilterId` Filter by `Deployment.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterName` Filter by `Deployment.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterWorkQueueName` Filter by `Deployment.work_queue_name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterTags` Filter by `Deployment.tags`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterConcurrencyLimit` DEPRECATED: Prefer `Deployment.concurrency_limit_id` over `Deployment.concurrency_limit`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilter` Filter for deployments. Only deployments matching all criteria will be returned. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterName` Filter by `Log.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterLevel` Filter by `Log.level`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterTimestamp` Filter by `Log.timestamp`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterFlowRunId` Filter by `Log.flow_run_id`. 
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterTaskRunId` Filter by `Log.task_run_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterTextSearch` Filter by text search across log content. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilter` Filter logs. Only logs matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FilterSet` A collection of filters for common objects **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilterName` Filter by `BlockType.name` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilterSlug` Filter by `BlockType.slug` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilter` Filter BlockTypes **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterBlockTypeId` Filter by `BlockSchema.block_type_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterId` Filter by BlockSchema.id **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterCapabilities` Filter by `BlockSchema.capabilities` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterVersion` Filter by `BlockSchema.version` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilter` Filter BlockSchemas **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterIsAnonymous` Filter by `BlockDocument.is_anonymous`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterBlockTypeId` Filter by `BlockDocument.block_type_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterId` Filter by `BlockDocument.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterName` Filter by `BlockDocument.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilter` Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueFilterId` Filter by `WorkQueue.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueFilterName` Filter by `WorkQueue.name`.
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueFilter` Filter work queues. Only work queues matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilterId` Filter by `WorkPool.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilterName` Filter by `WorkPool.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilterType` Filter by `WorkPool.type`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilter` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilterWorkPoolId` Filter by `Worker.worker_config_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilterLastHeartbeatTime` Filter by `Worker.last_heartbeat_time`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilterStatus` Filter by `Worker.status`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilter` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
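The filter models on this page are typically passed to the read methods of the Prefect client. Below is a minimal sketch of composing a `DeploymentFilter` and querying deployments; the tag value is illustrative, and the `all_` operator field is assumed from the filter's tag-matching options:

```python
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import DeploymentFilter, DeploymentFilterTags


async def list_prod_deployments() -> None:
    # Only deployments matching all criteria of the filter are returned
    deployment_filter = DeploymentFilter(tags=DeploymentFilterTags(all_=["prod"]))
    async with get_client() as client:
        deployments = await client.read_deployments(deployment_filter=deployment_filter)
        for deployment in deployments:
            print(deployment.name)


if __name__ == "__main__":
    asyncio.run(list_prod_deployments())
```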
### `ArtifactFilterId` Filter by `Artifact.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterKey` Filter by `Artifact.key`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterFlowRunId` Filter by `Artifact.flow_run_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterTaskRunId` Filter by `Artifact.task_run_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterType` Filter by `Artifact.type`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilter` Filter artifacts. Only artifacts matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterLatestId` Filter by `ArtifactCollection.latest_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterKey` Filter by `ArtifactCollection.key`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterFlowRunId` Filter by `ArtifactCollection.flow_run_id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterTaskRunId` Filter by `ArtifactCollection.task_run_id`. 
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterType` Filter by `ArtifactCollection.type`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilter` Filter artifact collections. Only artifact collections matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilterId` Filter by `Variable.id`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilterName` Filter by `Variable.name`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilterTags` Filter by `Variable.tags`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilter` Filter variables. Only variables matching all criteria will be returned **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # objects Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-objects # `prefect.client.schemas.objects` ## Functions ### `data_discriminator` ```python data_discriminator(x: Any) -> str ``` ## Classes ### `RunType` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StateType` Enumeration of state types. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `WorkPoolStatus` Enumeration of work pool statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `display_name` ```python display_name(self) -> str ``` ### `WorkerStatus` Enumeration of worker statuses. 
**Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentStatus` Enumeration of deployment statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `WorkQueueStatus` Enumeration of work queue statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ConcurrencyLimitStrategy` Enumeration of concurrency limit strategies. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ConcurrencyOptions` Class for storing the concurrency config in database. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLimitConfig` Class for storing the concurrency limit config in database. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLeaseHolder` Model for validating concurrency lease holder information. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateDetails` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `to_run_result` ```python to_run_result(self, run_type: RunType) -> Optional[Union[FlowRunResult, TaskRunResult]] ``` ### `State` The state of a run. **Methods:** #### `aresult` ```python aresult(self: 'State[R]', raise_on_failure: Literal[True] = ..., retry_result_failure: bool = ...) -> R ``` #### `aresult` ```python aresult(self: 'State[R]', raise_on_failure: Literal[False] = False, retry_result_failure: bool = ...) -> Union[R, Exception] ``` #### `aresult` ```python aresult(self: 'State[R]', raise_on_failure: bool = ..., retry_result_failure: bool = ...) -> Union[R, Exception] ``` #### `aresult` ```python aresult(self, raise_on_failure: bool = True, retry_result_failure: bool = True) -> Union[R, Exception] ``` Retrieve the result attached to this state. #### `default_name_from_type` ```python default_name_from_type(self) -> Self ``` If a name is not provided, use the type #### `default_scheduled_start_time` ```python default_scheduled_start_time(self) -> Self ``` #### `fresh_copy` ```python fresh_copy(self, **kwargs: Any) -> Self ``` Return a fresh copy of the state with a new ID. 
#### `is_cancelled` ```python is_cancelled(self) -> bool ``` #### `is_cancelling` ```python is_cancelling(self) -> bool ``` #### `is_completed` ```python is_completed(self) -> bool ``` #### `is_crashed` ```python is_crashed(self) -> bool ``` #### `is_failed` ```python is_failed(self) -> bool ``` #### `is_final` ```python is_final(self) -> bool ``` #### `is_paused` ```python is_paused(self) -> bool ``` #### `is_pending` ```python is_pending(self) -> bool ``` #### `is_running` ```python is_running(self) -> bool ``` #### `is_scheduled` ```python is_scheduled(self) -> bool ``` #### `model_copy` ```python model_copy(self) -> Self ``` Copying API models should return an object that could be inserted into the database again. The 'timestamp' is reset using the default factory. #### `result` ```python result(self: 'State[R]', raise_on_failure: Literal[True] = ..., retry_result_failure: bool = ...) -> R ``` #### `result` ```python result(self: 'State[R]', raise_on_failure: Literal[False] = False, retry_result_failure: bool = ...) -> Union[R, Exception] ``` #### `result` ```python result(self: 'State[R]', raise_on_failure: bool = ..., retry_result_failure: bool = ...) -> Union[R, Exception] ``` #### `result` ```python result(self, raise_on_failure: bool = True, retry_result_failure: bool = True) -> Union[R, Exception] ``` Retrieve the result attached to this state. **Args:** * `raise_on_failure`: a boolean specifying whether to raise an exception if the state is of type `FAILED` and the underlying data is an exception. When the flow was run in a different memory space (using `run_deployment`), this will only raise if `fetch` is `True`. * `retry_result_failure`: a boolean specifying whether to retry on failures to load the result from result storage **Raises:** * `TypeError`: If the state is failed but the result is not an exception. **Returns:** * The result of the run **Examples:** Get the result from a flow state ```python @flow def my_flow(): return "hello" my_flow(return_state=True).result() # hello ``` Get the result from a failed state ```python @flow def my_flow(): raise ValueError("oh no!") state = my_flow(return_state=True) # Error is wrapped in FAILED state state.result() # Raises `ValueError` ``` Get the result from a failed state without erroring ```python @flow def my_flow(): raise ValueError("oh no!") state = my_flow(return_state=True) result = state.result(raise_on_failure=False) print(result) # ValueError("oh no!") ``` Get the result from a flow state in an async context ```python @flow async def my_flow(): return "hello" state = await my_flow(return_state=True) await state.result() # hello ``` Get the result with `raise_on_failure` from a flow run in a different memory space ```python @flow async def my_flow(): raise ValueError("oh no!") my_flow.deploy("my_deployment/my_flow") flow_run = run_deployment("my_deployment/my_flow") await flow_run.state.result(raise_on_failure=True) # Raises `ValueError("oh no!")` ``` #### `set_unpersisted_results_to_none` ```python set_unpersisted_results_to_none(self) -> Self ``` ### `FlowRunPolicy` Defines how a flow run should be orchestrated. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python populate_deprecated_fields(cls, values: Any) -> Any ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
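For quick status checks, the `is_*` helpers on `State` shown above avoid comparing against `StateType` values directly. A minimal sketch:

```python
from prefect import flow


@flow
def greet() -> str:
    return "hello"


state = greet(return_state=True)
if state.is_completed():
    print(state.result())  # "hello"
elif state.is_failed() or state.is_crashed():
    print(f"run finished in state {state.type}")
```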
### `FlowRun` **Methods:** #### `set_default_name` ```python set_default_name(cls, name: Optional[str]) -> str ``` ### `TaskRunPolicy` Defines how a task run should retry. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python populate_deprecated_fields(self) ``` If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior. #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_configured_retry_delays` ```python validate_configured_retry_delays(cls, v: Optional[int | float | list[int] | list[float]]) -> Optional[int | float | list[int] | list[float]] ``` #### `validate_jitter_factor` ```python validate_jitter_factor(cls, v: Optional[float]) -> Optional[float] ``` ### `RunInput` Base class for classes that represent inputs to task runs, which could include constants, parameters, or other task runs. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunResult` Represents a task run result input to another task run. ### `FlowRunResult` ### `Parameter` Represents a parameter input to a task run. ### `Constant` Represents a constant input value to a task run. ### `TaskRun` **Methods:** #### `set_default_name` ```python set_default_name(cls, name: Optional[str]) -> Name ``` ### `Workspace` A Prefect Cloud workspace. Expected payload for each workspace returned by the `me/workspaces` route. **Methods:** #### `api_url` ```python api_url(self) -> str ``` Generate the API URL for accessing this workspace #### `handle` ```python handle(self) -> str ``` The full handle of the workspace as `account_handle` / `workspace_handle` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `ui_url` ```python ui_url(self) -> str ``` Generate the UI URL for accessing this workspace ### `IPAllowlistEntry` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `IPAllowlist` A Prefect Cloud IP allowlist. Expected payload for an IP allowlist from the Prefect Cloud API. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `IPAllowlistMyAccessResponse` Expected payload for an IP allowlist access response from the Prefect Cloud API.
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockType` An ORM representation of a block type ### `BlockSchema` A representation of a block schema. ### `BlockDocument` An ORM representation of a block document. **Methods:** #### `serialize_data` ```python serialize_data(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_name_is_present_if_not_anonymous` ```python validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `Flow` An ORM representation of flow data. ### `DeploymentSchedule` ### `VersionInfo` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BranchingScheduleHandling` ### `DeploymentBranchingOptions` ### `Deployment` An ORM representation of deployment data. ### `ConcurrencyLimit` An ORM representation of a concurrency limit. ### `BlockSchemaReference` An ORM representation of a block schema reference. ### `BlockDocumentReference` An ORM representation of a block document reference. **Methods:** #### `validate_parent_and_ref_are_different` ```python validate_parent_and_ref_are_different(cls, values: Any) -> Any ``` ### `Configuration` An ORM representation of account info. ### `SavedSearchFilter` A filter for a saved search model. Intended for use by the Prefect UI. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `SavedSearch` An ORM representation of saved search data. Represents a set of filter criteria. ### `Log` An ORM representation of log data. ### `QueueFilter` Filter criteria definition for a work queue. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueue` An ORM representation of a work queue ### `WorkQueueHealthPolicy` **Methods:** #### `evaluate_health_status` ```python evaluate_health_status(self, late_runs_count: int, last_polled: datetime.datetime | None = None) -> bool ``` Given empirical information about the state of the work queue, evaluate its health status. **Args:** * `late_runs_count`: the count of late runs for the work queue. * `last_polled`: the last time the work queue was polled, if available. **Returns:** * whether or not the work queue is healthy. #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
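As an illustration of `evaluate_health_status`, the sketch below builds a policy and evaluates it against recent poll data. The `maximum_late_runs` and `maximum_seconds_since_last_polled` field names are assumed to be the policy's configurable thresholds:

```python
from datetime import datetime, timedelta, timezone

from prefect.client.schemas.objects import WorkQueueHealthPolicy

# Hypothetical thresholds: no late runs allowed, polled within the last minute
policy = WorkQueueHealthPolicy(maximum_late_runs=0, maximum_seconds_since_last_polled=60)

recent_poll = datetime.now(timezone.utc) - timedelta(seconds=30)
print(policy.evaluate_health_status(late_runs_count=0, last_polled=recent_poll))  # True
print(policy.evaluate_health_status(late_runs_count=3, last_polled=recent_poll))  # False
```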
### `WorkQueueStatusDetail` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Agent` An ORM representation of an agent ### `WorkPoolStorageConfiguration` A work pool storage configuration **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPool` An ORM representation of a work pool **Methods:** #### `helpful_error_for_missing_default_queue_id` ```python helpful_error_for_missing_default_queue_id(cls, v: Optional[UUID]) -> UUID ``` #### `is_managed_pool` ```python is_managed_pool(self) -> bool ``` #### `is_push_pool` ```python is_push_pool(self) -> bool ``` ### `Worker` An ORM representation of a worker ### `Artifact` **Methods:** #### `validate_metadata_length` ```python validate_metadata_length(cls, v: Optional[dict[str, str]]) -> Optional[dict[str, str]] ``` ### `ArtifactCollection` ### `Variable` ### `FlowRunInput` **Methods:** #### `decoded_value` ```python decoded_value(self) -> Any ``` Decode the value of the input. **Returns:** * the decoded value ### `GlobalConcurrencyLimit` An ORM representation of a global concurrency limit ### `CsrfToken` ### `Integration` A representation of an installed Prefect integration. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerMetadata` Worker metadata. We depend on the structure of `integrations`, but otherwise, worker classes should support flexible metadata. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # responses Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-responses # `prefect.client.schemas.responses` ## Classes ### `SetStateStatus` Enumerates return statuses for setting run states. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StateAcceptDetails` Details associated with an ACCEPT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateRejectDetails` Details associated with a REJECT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `StateAbortDetails` Details associated with an ABORT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateWaitDetails` Details associated with a WAIT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponseState` Represents a single state's history over an interval. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponse` Represents a history of aggregation states over an interval **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `OrchestrationResult` A container for the output of state orchestration. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFlowRunResponse` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunResponse` ### `DeploymentResponse` **Methods:** #### `as_related_resource` ```python as_related_resource(self, role: str = 'deployment') -> 'RelatedResource' ``` ### `MinimalConcurrencyLimitResponse` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLimitWithLeaseResponse` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `GlobalConcurrencyLimitResponse` A response object for global concurrency limits. # schedules Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-schedules # `prefect.client.schemas.schedules` Schedule schemas ## Functions ### `is_valid_timezone` ```python is_valid_timezone(v: str) -> bool ``` Validate that the provided timezone is a valid IANA timezone. 
Unfortunately this list is slightly different from the list of valid timezones we use for cron and interval timezone validation. ### `is_schedule_type` ```python is_schedule_type(obj: Any) -> TypeGuard[SCHEDULE_TYPES] ``` ### `construct_schedule` ```python construct_schedule(interval: Optional[Union[int, float, datetime.timedelta]] = None, anchor_date: Optional[Union[datetime.datetime, str]] = None, cron: Optional[str] = None, rrule: Optional[str] = None, timezone: Optional[str] = None) -> SCHEDULE_TYPES ``` Construct a schedule from the provided arguments. **Args:** * `interval`: An interval on which to schedule runs. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `anchor_date`: The start date for an interval schedule. * `cron`: A cron schedule for runs. * `rrule`: An rrule schedule of when to execute runs of this flow. * `timezone`: A timezone to use for the schedule. Defaults to UTC. ## Classes ### `IntervalSchedule` A schedule formed by adding `interval` increments to an `anchor_date`. If no `anchor_date` is supplied, the current UTC time is used. If a timezone-naive datetime is provided for `anchor_date`, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a `timezone` can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date. NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that *appear* to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone. **Args:** * `interval`: an interval to schedule on * `anchor_date`: an anchor date to schedule increments against; if not provided, the current timestamp will be used * `timezone`: a valid timezone string **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_timezone` ```python validate_timezone(self) ``` ### `CronSchedule` Cron schedule NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire *the first time* 1am is reached and *the first time* 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST. 
**Args:** * `cron`: a valid cron string * `timezone`: a valid timezone string in IANA tzdata format (for example, America/New\_York). * `day_or`: Control how croniter handles `day` and `day_of_week` entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `valid_cron_string` ```python valid_cron_string(cls, v: str) -> str ``` #### `valid_timezone` ```python valid_timezone(cls, v: Optional[str]) -> Optional[str] ``` ### `RRuleSchedule` RRule schedule, based on the iCalendar standard ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as implemented in `dateutil.rrule`. RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more. Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time. **Args:** * `rrule`: a valid RRule string * `timezone`: a valid timezone string **Methods:** #### `from_rrule` ```python from_rrule(cls, rrule: Union[dateutil.rrule.rrule, dateutil.rrule.rruleset]) -> 'RRuleSchedule' ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `to_rrule` ```python to_rrule(self) -> Union[dateutil.rrule.rrule, dateutil.rrule.rruleset] ``` Since rrule doesn't properly serialize/deserialize timezones, we localize dates here #### `valid_timezone` ```python valid_timezone(cls, v: Optional[str]) -> str ``` Validate that the provided timezone is a valid IANA timezone. Unfortunately this list is slightly different from the list of valid timezones we use for cron and interval timezone validation. #### `validate_rrule_str` ```python validate_rrule_str(cls, v: str) -> str ``` ### `NoSchedule` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # sorting Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-schemas-sorting # `prefect.client.schemas.sorting` ## Classes ### `FlowRunSort` Defines flow run sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TaskRunSort` Defines task run sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `AutomationSort` Defines automation sorting options.
**Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `LogSort` Defines log sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `FlowSort` Defines flow sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentSort` Defines deployment sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactSort` Defines artifact sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactCollectionSort` Defines artifact collection sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `VariableSort` Defines variable sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `BlockDocumentSort` Defines block document sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # subscriptions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-subscriptions # `prefect.client.subscriptions` ## Classes ### `Subscription` **Methods:** #### `websocket` ```python websocket(self) -> websockets.asyncio.client.ClientConnection ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-types-__init__ # `prefect.client.types` *This module is empty or contains only private/internal implementations.* # flexible_schedule_list Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-types-flexible_schedule_list # `prefect.client.types.flexible_schedule_list` *This module is empty or contains only private/internal implementations.* # utilities Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-client-utilities # `prefect.client.utilities` Utilities for working with clients. ## Functions ### `get_or_create_client` ```python get_or_create_client(client: Optional['PrefectClient'] = None) -> tuple['PrefectClient', bool] ``` Returns the provided client, infers a client from context if available, or creates a new client. **Args:** * `client`: an optional client to use **Returns:** * a tuple of the client and a boolean indicating if the client was inferred from context ### `client_injector` ```python client_injector(func: Callable[Concatenate['PrefectClient', P], Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]] ``` ### `inject_client` ```python inject_client(fn: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]] ``` Simple helper to provide a context managed client to an asynchronous function. The decorated function *must* take a `client` kwarg and if a client is passed when called it will be used instead of creating a new one, but it will not be context managed as it is assumed that the caller is managing the context.
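A minimal sketch of `inject_client`; the coroutine name and query are illustrative. When the caller does not pass a `client`, one is created and context managed for the duration of the call:

```python
from typing import Optional

from prefect.client.orchestration import PrefectClient
from prefect.client.utilities import inject_client


@inject_client
async def count_recent_flow_runs(client: Optional[PrefectClient] = None) -> int:
    # `client` is injected (and context managed) when not supplied by the caller
    flow_runs = await client.read_flow_runs(limit=10)
    return len(flow_runs)
```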
# __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-__init__ # `prefect.concurrency` *This module is empty or contains only private/internal implementations.* # asyncio Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-asyncio # `prefect.concurrency.asyncio` ## Functions ### `concurrency` ```python concurrency(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None, lease_duration: float = 300, strict: bool = False, holder: 'Optional[ConcurrencyLeaseHolder]' = None) -> AsyncGenerator[None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `max_retries`: The maximum number of retries to acquire the concurrency slots. * `lease_duration`: The duration of the lease for the acquired slots in seconds. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. * `holder`: A dictionary containing information about the holder of the concurrency slots. Typically includes 'type' and 'id' keys. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. Example: A simple example of using the async `concurrency` context manager: ```python from prefect.concurrency.asyncio import concurrency async def resource_heavy(): async with concurrency("test", occupy=1): print("Resource heavy task") async def main(): await resource_heavy() ``` ### `rate_limit` ```python rate_limit(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, strict: bool = False) -> None ``` Block execution until an `occupy` number of slots of the concurrency limits given in `names` are acquired. Requires that all given concurrency limits have a slot decay. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. # context Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-context # `prefect.concurrency.context` ## Classes ### `ConcurrencyContext` **Methods:** #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. 
# services Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-services # `prefect.concurrency.services` ## Classes ### `ConcurrencySlotAcquisitionService` **Methods:** #### `acquire` ```python acquire(self, slots: int, mode: Literal['concurrency', 'rate_limit'], timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None) -> httpx.Response ``` # sync Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-sync # `prefect.concurrency.sync` ## Functions ### `concurrency` ```python concurrency(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None, lease_duration: float = 300, strict: bool = False, holder: 'Optional[ConcurrencyLeaseHolder]' = None) -> Generator[None, None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `max_retries`: The maximum number of retries to acquire the concurrency slots. * `lease_duration`: The duration of the lease for the acquired slots in seconds. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. * `holder`: A dictionary containing information about the holder of the concurrency slots. Typically includes 'type' and 'id' keys. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. Example: A simple example of using the sync `concurrency` context manager: ```python from prefect.concurrency.sync import concurrency def resource_heavy(): with concurrency("test", occupy=1): print("Resource heavy task") def main(): resource_heavy() ``` ### `rate_limit` ```python rate_limit(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, strict: bool = False) -> None ``` Block execution until an `occupy` number of slots of the concurrency limits given in `names` are acquired. Requires that all given concurrency limits have a slot decay. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. 
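A minimal sketch of the sync `rate_limit` helper, assuming a concurrency limit named "api-calls" exists and is configured with slot decay; the flow and URLs are illustrative:

```python
from prefect import flow
from prefect.concurrency.sync import rate_limit


@flow
def send_requests(urls: list[str]) -> None:
    for url in urls:
        # Blocks until a slot is available on the "api-calls" limit
        rate_limit("api-calls", occupy=1)
        print(f"calling {url}")
```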
# __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-v1-__init__ # `prefect.concurrency.v1` *This module is empty or contains only private/internal implementations.* # asyncio Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-v1-asyncio # `prefect.concurrency.v1.asyncio` ## Functions ### `concurrency` ```python concurrency(names: Union[str, list[str]], task_run_id: UUID, timeout_seconds: Optional[float] = None) -> AsyncGenerator[None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `task_run_id`: The ID of the task run that is occupying the slots. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. Example: A simple example of using the async `concurrency` context manager: ```python from prefect.concurrency.v1.asyncio import concurrency async def resource_heavy(): async with concurrency("test", task_run_id): print("Resource heavy task") async def main(): await resource_heavy() ``` # context Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-v1-context # `prefect.concurrency.v1.context` ## Classes ### `ConcurrencyContext` **Methods:** #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. # services Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-v1-services # `prefect.concurrency.v1.services` ## Classes ### `ConcurrencySlotAcquisitionServiceError` Raised when an error occurs while acquiring concurrency slots. ### `ConcurrencySlotAcquisitionService` **Methods:** #### `acquire` ```python acquire(self, task_run_id: UUID, timeout_seconds: Optional[float] = None) -> httpx.Response ``` # sync Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-concurrency-v1-sync # `prefect.concurrency.v1.sync` ## Functions ### `concurrency` ```python concurrency(names: Union[str, list[str]], task_run_id: UUID, timeout_seconds: Optional[float] = None) -> Generator[None, None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire. * `task_run_id`: The task run ID acquiring the limits. * `timeout_seconds`: The number of seconds to wait to acquire the limits before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. **Raises:** * `TimeoutError`: If the limits are not acquired within the given timeout. Example: A simple example of using the sync `concurrency` context manager: ```python from prefect.concurrency.v1.sync import concurrency def resource_heavy(): with concurrency("test", task_run_id): print("Resource heavy task") def main(): resource_heavy() ``` # context Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-context # `prefect.context` Async and thread safe models for passing runtime context data.
These contexts should never be directly mutated by the user. For more user-accessible information about the current run, see [`prefect.runtime`](https://docs.prefect.io/v3/api-ref/python/prefect-runtime-flow_run). ## Functions ### `serialize_context` ```python serialize_context(asset_ctx_kwargs: Union[dict[str, Any], None] = None) -> dict[str, Any] ``` Serialize the current context for use in a remote execution environment. Optionally provide asset\_ctx\_kwargs to create a new AssetContext that will be used in the remote execution environment. This is useful for TaskRunners, which rely on creating the task run in the remote environment. ### `hydrated_context` ```python hydrated_context(serialized_context: Optional[dict[str, Any]] = None, client: Union[PrefectClient, SyncPrefectClient, None] = None) -> Generator[None, Any, None] ``` ### `get_run_context` ```python get_run_context() -> Union[FlowRunContext, TaskRunContext] ``` Get the current run context from within a task or flow function. **Returns:** * A `FlowRunContext` or `TaskRunContext` depending on the function type. **Raises:** * `RuntimeError`: If called outside of a flow or task run. ### `get_settings_context` ```python get_settings_context() -> SettingsContext ``` Get the current settings context which contains profile information and the settings that are being used. Generally, the settings that are being used are a combination of values from the profile and environment. See `prefect.context.use_profile` for more details. ### `tags` ```python tags(*new_tags: str) -> Generator[set[str], None, None] ``` Context manager to add tags to flow and task run calls. Tags are always combined with any existing tags. **Examples:** ```python from prefect import tags, task, flow @task def my_task(): pass ``` Run a task with tags ```python @flow def my_flow(): with tags("a", "b"): my_task() # has tags: a, b ``` Run a flow with tags ```python @flow def my_flow(): pass with tags("a", "b"): my_flow() # has tags: a, b ``` Run a task with nested tag contexts ```python @flow def my_flow(): with tags("a", "b"): with tags("c", "d"): my_task() # has tags: a, b, c, d my_task() # has tags: a, b ``` Inspect the current tags ```python @flow def my_flow(): with tags("c", "d"): with tags("e", "f") as current_tags: print(current_tags) with tags("a", "b"): my_flow() # {"a", "b", "c", "d", "e", "f"} ``` ### `use_profile` ```python use_profile(profile: Union[Profile, str], override_environment_variables: bool = False, include_current_context: bool = True) -> Generator[SettingsContext, Any, None] ``` Switch to a profile for the duration of this context. Profile contexts are confined to an async context in a single thread. **Args:** * `profile`: The name of the profile to load or an instance of a Profile. * `override_environment_variables`: If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings. * `include_current_context`: If set, the new settings will be constructed with the current settings context as a base. If not set, settings will be loaded from the environment and defaults alone. ### `root_settings_context` ```python root_settings_context() -> SettingsContext ``` Return the settings context that will exist as the root context for the module.
The profile to use is determined with the following precedence * Command line via 'prefect --profile \' * Environment variable via 'PREFECT\_PROFILE' * Profiles file via the 'active' key ## Classes ### `ContextModel` A base model for context data that forbids mutation and extra data while providing a context manager **Methods:** #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. ### `SyncClientContext` A context for managing the sync Prefect client instances. Clients were formerly tracked on the TaskRunContext and FlowRunContext, but having two separate places and the addition of both sync and async clients made it difficult to manage. This context is intended to be the single source for sync clients. The client creates a sync client, which can either be read directly from the context object OR loaded with get\_client, inject\_client, or other Prefect utilities. with SyncClientContext.get\_or\_create() as ctx: c1 = get\_client(sync\_client=True) c2 = get\_client(sync\_client=True) assert c1 is c2 assert c1 is ctx.client **Methods:** #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `get_or_create` ```python get_or_create(cls) -> Generator[Self, None, None] ``` #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. ### `AsyncClientContext` A context for managing the async Prefect client instances. Clients were formerly tracked on the TaskRunContext and FlowRunContext, but having two separate places and the addition of both sync and async clients made it difficult to manage. This context is intended to be the single source for async clients. The client creates an async client, which can either be read directly from the context object OR loaded with get\_client, inject\_client, or other Prefect utilities. with AsyncClientContext.get\_or\_create() as ctx: c1 = get\_client(sync\_client=False) c2 = get\_client(sync\_client=False) assert c1 is c2 assert c1 is ctx.client **Methods:** #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `get_or_create` ```python get_or_create(cls) -> AsyncGenerator[Self, None] ``` #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. ### `RunContext` The base context for a flow or task run. Data in this context will always be available when `get_run_context` is called. 
**Methods:** #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. ### `EngineContext` The context for a flow run. Data in this context is only available from within a flow run function. **Methods:** #### `serialize` ```python serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` #### `serialize` ```python serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` ### `TaskRunContext` The context for a task run. Data in this context is only available from within a task run function. **Methods:** #### `serialize` ```python serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` #### `serialize` ```python serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` ### `AssetContext` The asset context for a materializing task run. Contains all asset-related information needed for asset event emission and downstream asset dependency propagation. **Methods:** #### `add_asset_metadata` ```python add_asset_metadata(self, asset_key: str, metadata: dict[str, Any]) -> None ``` Add metadata for a materialized asset. **Args:** * `asset_key`: The asset key * `metadata`: Metadata dictionary to add **Raises:** * `ValueError`: If asset\_key is not in downstream\_assets #### `asset_as_related` ```python asset_as_related(asset: Asset) -> dict[str, str] ``` Convert Asset to event related format. #### `asset_as_resource` ```python asset_as_resource(asset: Asset) -> dict[str, str] ``` Convert Asset to event resource format. #### `emit_events` ```python emit_events(self, state: State) -> None ``` Emit asset events #### `from_task_and_inputs` ```python from_task_and_inputs(cls, task: 'Task[Any, Any]', task_run_id: UUID, task_inputs: Optional[dict[str, set[Any]]] = None, copy_to_child_ctx: bool = False) -> 'AssetContext' ``` Create an AssetContext from a task and its resolved inputs. **Args:** * `task`: The task instance * `task_run_id`: The task run ID * `task_inputs`: The resolved task inputs (TaskRunResult objects) * `copy_to_child_ctx`: Whether this context should be copied on a child AssetContext **Returns:** * Configured AssetContext #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `related_materialized_by` ```python related_materialized_by(by: str) -> dict[str, str] ``` Create a related resource for the tool that performed the materialization #### `serialize` ```python serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the AssetContext for distributed execution. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. 
#### `update_tracked_assets` ```python update_tracked_assets(self) -> None ``` Update the flow run context with assets that should be propagated downstream. ### `TagsContext` The context for `prefect.tags` management. **Methods:** #### `get` ```python get(cls) -> Self ``` #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. ### `SettingsContext` The context for a Prefect settings. This allows for safe concurrent access and modification of settings. **Methods:** #### `get` ```python get(cls) -> Optional['SettingsContext'] ``` #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-__init__ # `prefect.deployments` *This module is empty or contains only private/internal implementations.* # base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-base # `prefect.deployments.base` Core primitives for managing Prefect deployments via `prefect deploy`, providing a minimally opinionated build system for managing flows and deployments. To get started, follow along with [the deployments tutorial](https://docs.prefect.io/v3/how-to-guides/deployments/create-deployments). ## Functions ### `create_default_prefect_yaml` ```python create_default_prefect_yaml(path: str, name: Optional[str] = None, contents: Optional[Dict[str, Any]] = None) -> bool ``` Creates default `prefect.yaml` file in the provided path if one does not already exist; returns boolean specifying whether a file was created. **Args:** * `name`: the name of the project; if not provided, the current directory name will be used * `contents`: a dictionary of contents to write to the file; if not provided, defaults will be used ### `configure_project_by_recipe` ```python configure_project_by_recipe(recipe: str, **formatting_kwargs: Any) -> dict[str, Any] | type[NotSet] ``` Given a recipe name, returns a dictionary representing base configuration options. **Args:** * `recipe`: the name of the recipe to use * `formatting_kwargs`: additional keyword arguments to format the recipe **Raises:** * `ValueError`: if provided recipe name does not exist. ### `initialize_project` ```python initialize_project(name: Optional[str] = None, recipe: Optional[str] = None, inputs: Optional[Dict[str, Any]] = None) -> List[str] ``` Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred. 
**Args:** * `name`: the name of the project; if not provided, the current directory name * `recipe`: the name of the recipe to use; if not provided, one is inferred * `inputs`: a dictionary of inputs to use when formatting the recipe **Returns:** * List\[str]: a list of files / directories that were created # deployments Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-deployments # `prefect.deployments.deployments` *This module is empty or contains only private/internal implementations.* # flow_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-flow_runs # `prefect.deployments.flow_runs` ## Functions ### `run_deployment` ```python run_deployment(name: Union[str, UUID], client: Optional['PrefectClient'] = None, parameters: Optional[dict[str, Any]] = None, scheduled_time: Optional[datetime] = None, flow_run_name: Optional[str] = None, timeout: Optional[float] = None, poll_interval: Optional[float] = 5, tags: Optional[Iterable[str]] = None, idempotency_key: Optional[str] = None, work_queue_name: Optional[str] = None, as_subflow: Optional[bool] = True, job_variables: Optional[dict[str, Any]] = None) -> 'FlowRun' ``` Create a flow run for a deployment and return it after completion or a timeout. By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set `timeout=0`. Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing. If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing `as_subflow=False`. **Args:** * `name`: The deployment id or deployment name in the form: `"flow name/deployment name"` * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults. * `scheduled_time`: The time to schedule the flow run for, defaults to scheduling the flow run to start now. * `flow_run_name`: A name for the created flow run * `timeout`: The amount of time to wait (in seconds) for the flow run to complete before returning. Setting `timeout` to 0 will return the flow run metadata immediately. Setting `timeout` to None will allow this function to poll indefinitely. Defaults to None. * `poll_interval`: The number of seconds between polls * `tags`: A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes. * `idempotency_key`: A unique value to recognize retries of the same run, and prevent creating multiple flow runs. * `work_queue_name`: The name of a work queue to use for this run. Defaults to the default work queue for the deployment. * `as_subflow`: Whether to link the flow run as a subflow of the current flow or task run. * `job_variables`: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'` # runner Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-runner # `prefect.deployments.runner` Objects for creating and configuring deployments for flows using `serve` functionality. Example: ```python import time from prefect import flow, serve @flow def slow_flow(sleep: int = 60): "Sleepy flow - sleeps the provided amount of time (in seconds)." 
time.sleep(sleep) @flow def fast_flow(): "Fastest flow this side of the Mississippi." return if __name__ == "__main__": # to_deployment creates RunnerDeployment instances slow_deploy = slow_flow.to_deployment(name="sleeper", interval=45) fast_deploy = fast_flow.to_deployment(name="fast") serve(slow_deploy, fast_deploy) ``` ## Functions ### `deploy` ```python deploy(*deployments: RunnerDeployment) -> List[UUID] ``` Deploy the provided list of deployments to dynamic infrastructure via a work pool. By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule. If you want to use an existing image, you can pass `build=False` to skip building and pushing an image. **Args:** * `*deployments`: A list of deployments to deploy. * `work_pool_name`: The name of the work pool to use for these deployments. Defaults to the value of `PREFECT_DEFAULT_WORK_POOL_NAME`. * `image`: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments. * `build`: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime. * `push`: Whether or not to skip pushing the built image to a registry. * `print_next_steps_message`: Whether or not to print a message with next steps after deploying the deployments. **Returns:** * A list of deployment IDs for the created/updated deployments. **Examples:** Deploy a group of flows to a work pool: ```python from prefect import deploy, flow @flow(log_prints=True) def local_flow(): print("I'm a locally defined flow!") if __name__ == "__main__": deploy( local_flow.to_deployment(name="example-deploy-local-flow"), flow.from_source( source="https://github.com/org/repo.git", entrypoint="flows.py:my_flow", ).to_deployment( name="example-deploy-remote-flow", ), work_pool_name="my-work-pool", image="my-registry/my-image:dev", ) ``` ## Classes ### `DeploymentApplyError` Raised when an error occurs while applying a deployment. ### `RunnerDeployment` A Prefect RunnerDeployment definition, used for specifying and building deployments. **Methods:** #### `afrom_storage` ```python afrom_storage(cls, storage: RunnerStorage, entrypoint: str, name: str, flow_name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location. 
**Args:** * `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`. * `name`: A name for the deployment * `flow_name`: The name of the flow to deploy * `storage`: A storage object to use for retrieving flow code. If not provided, a URL must be provided. * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not the deployment is paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `triggers`: A list of triggers that should kick of a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `version_type`: The type of version information to use for the deployment. The version type will be inferred if not provided. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment. * `work_pool_name`: The name of the work pool to use for this deployment. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. * `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. #### `apply` ```python apply(self, schedules: Optional[List[dict[str, Any]]] = None, work_pool_name: Optional[str] = None, image: Optional[str] = None, version_info: Optional[VersionInfo] = None) -> UUID ``` Registers this deployment with the API and returns the deployment's ID. **Args:** * `work_pool_name`: The name of the work pool to use for this deployment. * `image`: The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool. * `version_info`: The version information to use for the deployment. Returns: The ID of the created deployment. 
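For illustration, here is a minimal sketch of registering a deployment with `apply`. It assumes a locally defined flow and an existing work pool named `my-work-pool`; both names are placeholders, and `to_deployment` is used only as a convenient way to build a `RunnerDeployment`:

```python
from prefect import flow


@flow(log_prints=True)
def hello():
    print("Hello from a deployed flow!")


if __name__ == "__main__":
    # to_deployment builds a RunnerDeployment; apply() registers it with the
    # Prefect API and returns the new deployment's ID.
    deployment = hello.to_deployment(name="example-apply")
    deployment_id = deployment.apply(work_pool_name="my-work-pool")
    print(f"Created deployment {deployment_id}")
```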
#### `entrypoint_type` ```python entrypoint_type(self) -> EntrypointType ``` #### `from_entrypoint` ```python from_entrypoint(cls, entrypoint: str, name: str, flow_name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Configure a deployment for a given flow located at a given entrypoint. **Args:** * `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`. * `name`: A name for the deployment * `flow_name`: The name of the flow to deploy * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not to set this deployment as paused. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`. * `triggers`: A list of triggers that should kick of a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment. * `work_pool_name`: The name of the work pool to use for this deployment. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. * `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. 
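As a hedged sketch of `from_entrypoint`, assuming a local `flows.py` file that defines a flow function named `my_flow` and a work pool named `my-work-pool` (all placeholders):

```python
from prefect.deployments.runner import RunnerDeployment

# Assumes ./flows.py defines a flow named `my_flow`.
deployment = RunnerDeployment.from_entrypoint(
    entrypoint="flows.py:my_flow",
    name="nightly-run",
    cron="0 2 * * *",  # every day at 2 AM
    tags=["example"],
    work_pool_name="my-work-pool",
)
deployment.apply()
```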
#### `from_flow` ```python from_flow(cls, flow: 'Flow[..., Any]', name: str, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Configure a deployment for a given flow. **Args:** * `flow`: A flow function to deploy * `name`: A name for the deployment * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not to set this deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`. * `concurrency_limit`: The maximum number of concurrent runs this deployment will allow. * `triggers`: A list of triggers that should kick of a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `version_type`: The type of version information to use for the deployment. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment. * `work_pool_name`: The name of the work pool to use for this deployment. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. * `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. 
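A possible use of `from_flow`, shown as a sketch; the flow, schedule, parameters, and work pool name are illustrative placeholders:

```python
from datetime import timedelta

from prefect import flow
from prefect.deployments.runner import RunnerDeployment


@flow
def etl(source: str = "s3://example-bucket/raw"):
    print(f"Extracting from {source}")


# Build a deployment that runs the flow every hour with default parameters.
deployment = RunnerDeployment.from_flow(
    flow=etl,
    name="hourly-etl",
    interval=timedelta(hours=1),
    parameters={"source": "s3://example-bucket/raw"},
    work_pool_name="my-work-pool",
)
deployment.apply()
```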
#### `from_storage` ```python from_storage(cls, storage: RunnerStorage, entrypoint: str, name: str, flow_name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location. **Args:** * `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`. * `name`: A name for the deployment * `flow_name`: The name of the flow to deploy * `storage`: A storage object to use for retrieving flow code. If not provided, a URL must be provided. * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not the deployment is paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `triggers`: A list of triggers that should kick of a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `version_type`: The type of version information to use for the deployment. The version type will be inferred if not provided. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment. * `work_pool_name`: The name of the work pool to use for this deployment. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. * `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. 
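A sketch of `from_storage` using a Git-based storage object. The `GitRepository` class from `prefect.runner.storage`, the repository URL, and the entrypoint are assumptions for illustration:

```python
from prefect.deployments.runner import RunnerDeployment
from prefect.runner.storage import GitRepository

# Placeholder repository and entrypoint; the flow code is pulled from Git at runtime.
storage = GitRepository(url="https://github.com/org/repo.git")

deployment = RunnerDeployment.from_storage(
    storage=storage,
    entrypoint="flows.py:my_flow",
    name="from-git",
    work_pool_name="my-work-pool",
)
deployment.apply()
```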
#### `full_name` ```python full_name(self) -> str ``` #### `reconcile_paused` ```python reconcile_paused(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `reconcile_schedules` ```python reconcile_schedules(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `validate_automation_names` ```python validate_automation_names(self) ``` Ensure that each trigger has a name for its automation if none is provided. #### `validate_deployment_parameters` ```python validate_deployment_parameters(self) -> Self ``` Update the parameter schema to mark frozen parameters as readonly. #### `validate_name` ```python validate_name(cls, value: str) -> str ``` # schedules Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-schedules # `prefect.deployments.schedules` ## Functions ### `create_deployment_schedule_create` ```python create_deployment_schedule_create(schedule: Union['SCHEDULE_TYPES', 'Schedule'], active: Optional[bool] = True) -> DeploymentScheduleCreate ``` Create a DeploymentScheduleCreate object from common schedule parameters. ### `normalize_to_deployment_schedule` ```python normalize_to_deployment_schedule(schedules: Optional['FlexibleScheduleList']) -> List[Union[DeploymentScheduleCreate, DeploymentScheduleUpdate]] ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-steps-__init__ # `prefect.deployments.steps` *This module is empty or contains only private/internal implementations.* # core Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-steps-core # `prefect.deployments.steps.core` Core primitives for running Prefect deployment steps. Deployment steps are YAML representations of Python functions along with their inputs. Whenever a step is run, the following actions are taken: * The step's inputs and block / variable references are resolved (see [the `prefect deploy` documentation](https://docs.prefect.io/v3/how-to-guides/deployments/prefect-yaml#templating-options) for more details) * The step's function is imported; if it cannot be found, the `requires` keyword is used to install the necessary packages * The step's function is called with the resolved inputs * The step's output is returned and used to resolve inputs for subsequent steps ## Functions ### `run_step` ```python run_step(step: dict[str, Any], upstream_outputs: Optional[dict[str, Any]] = None) -> dict[str, Any] ``` Runs a step and returns the step's output. Steps are assumed to be in the format `{"importable.func.name": {"kwarg1": "value1", ...}}`. The `id` and `requires` keywords are reserved for specific purposes and are removed from the inputs before they are passed to the step function: `id` assigns an identifier that later steps can use to reference this step's outputs, and `requires` specifies packages that should be installed before the step is run. ### `run_steps` ```python run_steps(steps: List[Dict[str, Any]], upstream_outputs: Optional[Dict[str, Any]] = None, print_function: Any = print) -> dict[str, Any] ``` ## Classes ### `StepExecutionError` Raised when a step fails to execute. # pull Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-steps-pull # `prefect.deployments.steps.pull` Core set of steps for specifying a Prefect project pull step. ## Functions ### `set_working_directory` ```python set_working_directory(directory: str) -> dict[str, str] ``` Sets the working directory; works with both absolute and relative paths.
**Args:** * `directory`: the directory to set as the working directory **Returns:** * a dictionary containing a `directory` key of the absolute path of the directory that was set ### `agit_clone` ```python agit_clone(repository: str, branch: Optional[str] = None, commit_sha: Optional[str] = None, include_submodules: bool = False, access_token: Optional[str] = None, credentials: Optional['Block'] = None, directories: Optional[list[str]] = None) -> dict[str, str] ``` Asynchronously clones a git repository into the current working directory. **Args:** * `repository`: the URL of the repository to clone * `branch`: the branch to clone; if not provided, the default branch will be used * `commit_sha`: the commit SHA to clone; if not provided, the default branch will be used * `include_submodules`: whether to include git submodules when cloning the repository * `access_token`: an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials * `credentials`: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository. **Returns:** * a dictionary containing a `directory` key of the new directory that was created **Raises:** * `subprocess.CalledProcessError`: if the git clone command fails for any reason ### `git_clone` ```python git_clone(repository: str, branch: Optional[str] = None, commit_sha: Optional[str] = None, include_submodules: bool = False, access_token: Optional[str] = None, credentials: Optional['Block'] = None, directories: Optional[list[str]] = None) -> dict[str, str] ``` Clones a git repository into the current working directory. **Args:** * `repository`: the URL of the repository to clone * `branch`: the branch to clone; if not provided, the default branch will be used * `commit_sha`: the commit SHA to clone; if not provided, the default branch will be used * `include_submodules`: whether to include git submodules when cloning the repository * `access_token`: an access token to use for cloning the repository; if not provided the repository will be cloned using the default git credentials * `credentials`: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository. 
* `directories`: Specify directories you want to be included (uses git sparse-checkout) **Returns:** * a dictionary containing a `directory` key of the new directory that was created **Raises:** * `subprocess.CalledProcessError`: if the git clone command fails for any reason **Examples:** Clone a public repository: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/PrefectHQ/prefect.git ``` Clone a branch of a public repository: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/PrefectHQ/prefect.git branch: my-branch ``` Clone a private repository using a GitHubCredentials block: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/org/repo.git credentials: "{{ prefect.blocks.github-credentials.my-github-credentials-block }}" ``` Clone a private repository using an access token: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/org/repo.git access_token: "{{ prefect.blocks.secret.github-access-token }}" # Requires creation of a Secret block ``` Note that you will need to [create a Secret block](https://docs.prefect.io/v3/concepts/blocks/#pre-registered-blocks) to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`) in your secret block. Refer to your git providers documentation for the correct authentication schema. Clone a repository with submodules: ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/org/repo.git include_submodules: true ``` Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows): ```yaml pull: - prefect.deployments.steps.git_clone: repository: git@github.com:org/repo.git ``` Clone a repository using sparse-checkout (allows specific folders of the repository to be checked out) ```yaml pull: - prefect.deployments.steps.git_clone: repository: https://github.com/org/repo.git directories: ["dir_1", "dir_2", "prefect"] ``` ### `pull_from_remote_storage` ```python pull_from_remote_storage(url: str, **settings: Any) -> dict[str, Any] ``` Pulls code from a remote storage location into the current working directory. Works with protocols supported by `fsspec`. **Args:** * `url`: the URL of the remote storage location. Should be a valid `fsspec` URL. Some protocols may require an additional `fsspec` dependency to be installed. Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations) for more details. * `**settings`: any additional settings to pass the `fsspec` filesystem class. **Returns:** * a dictionary containing a `directory` key of the new directory that was created **Examples:** Pull code from a remote storage location: ```yaml pull: - prefect.deployments.steps.pull_from_remote_storage: url: s3://my-bucket/my-folder ``` Pull code from a remote storage location with additional settings: ```yaml pull: - prefect.deployments.steps.pull_from_remote_storage: url: s3://my-bucket/my-folder key: {{ prefect.blocks.secret.my-aws-access-key }}} secret: {{ prefect.blocks.secret.my-aws-secret-key }}} ``` ### `pull_with_block` ```python pull_with_block(block_document_name: str, block_type_slug: str) -> dict[str, Any] ``` Pulls code using a block. 
**Args:** * `block_document_name`: The name of the block document to use * `block_type_slug`: The slug of the type of block to use # utility Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-deployments-steps-utility # `prefect.deployments.steps.utility` Utility project steps that are useful for managing a project's deployment lifecycle. Steps within this module can be used within a `build`, `push`, or `pull` deployment action. Example: Use the `run_shell_script` setp to retrieve the short Git commit hash of the current repository and use it as a Docker image tag: ```yaml build: - prefect.deployments.steps.run_shell_script: id: get-commit-hash script: git rev-parse --short HEAD stream_output: false - prefect_docker.deployments.steps.build_docker_image: requires: prefect-docker image_name: my-image image_tag: "{{ get-commit-hash.stdout }}" dockerfile: auto ``` ## Functions ### `run_shell_script` ```python run_shell_script(script: str, directory: Optional[str] = None, env: Optional[Dict[str, str]] = None, stream_output: bool = True, expand_env_vars: bool = False) -> RunShellScriptResult ``` Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script. **Args:** * `script`: The script to run * `directory`: The directory to run the script in. Defaults to the current working directory. * `env`: A dictionary of environment variables to set for the script * `stream_output`: Whether to stream the output of the script to stdout/stderr * `expand_env_vars`: Whether to expand environment variables in the script before running it **Returns:** * A dictionary with the keys `stdout` and `stderr` containing the output of the script **Examples:** Retrieve the short Git commit hash of the current repository to use as a Docker image tag: ```yaml build: - prefect.deployments.steps.run_shell_script: id: get-commit-hash script: git rev-parse --short HEAD stream_output: false - prefect_docker.deployments.steps.build_docker_image: requires: prefect-docker image_name: my-image image_tag: "{{ get-commit-hash.stdout }}" dockerfile: auto ``` Run a multi-line shell script: ```yaml build: - prefect.deployments.steps.run_shell_script: script: | echo "Hello" echo "World" ``` Run a shell script with environment variables: ```yaml build: - prefect.deployments.steps.run_shell_script: script: echo "Hello $NAME" env: NAME: World ``` Run a shell script with environment variables expanded from the current environment: ```yaml pull: - prefect.deployments.steps.run_shell_script: script: | echo "User: $USER" echo "Home Directory: $HOME" stream_output: true expand_env_vars: true ``` Run a shell script in a specific directory: ```yaml build: - prefect.deployments.steps.run_shell_script: script: echo "Hello" directory: /path/to/directory ``` Run a script stored in a file: ```yaml build: - prefect.deployments.steps.run_shell_script: script: "bash path/to/script.sh" ``` ### `pip_install_requirements` ```python pip_install_requirements(directory: Optional[str] = None, requirements_file: str = 'requirements.txt', stream_output: bool = True) -> dict[str, Any] ``` Installs dependencies from a requirements.txt file. **Args:** * `requirements_file`: The requirements.txt to use for installation. * `directory`: The directory the requirements.txt file is in. Defaults to the current working directory. 
* `stream_output`: Whether the output from the `pip install` command should be streamed to the console **Returns:** * A dictionary with the keys `stdout` and `stderr` containing the output of the `pip install` command **Raises:** * `subprocess.CalledProcessError`: if the pip install command fails for any reason ## Classes ### `RunShellScriptResult` The result of a `run_shell_script` step. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-docker-__init__ # `prefect.docker` *This module is empty or contains only private/internal implementations.* # docker_image Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-docker-docker_image # `prefect.docker.docker_image` ## Classes ### `DockerImage` Configuration used to build and push a Docker image for a deployment. **Methods:** #### `build` ```python build(self) -> None ``` #### `push` ```python push(self) -> None ``` #### `reference` ```python reference(self) -> str ``` # engine Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-engine # `prefect.engine` ## Functions ### `handle_engine_signals` ```python handle_engine_signals(flow_run_id: UUID | None = None) ``` Handle signals from the orchestrator to abort or pause the flow run or otherwise handle unexpected exceptions. This context manager will handle exiting the process depending on the signal received. **Args:** * `flow_run_id`: The ID of the flow run to handle signals for. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-__init__ # `prefect.events` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-actions # `prefect.events.actions` ## Classes ### `Action` An Action that may be performed when an Automation is triggered **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
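The concrete action classes below are typically not instantiated on their own; they are attached to an automation's `actions` list. The following sketch assumes the `Automation` model from `prefect.automations` and the `EventTrigger` schema documented later under `prefect.events.schemas.automations`; the event names, match pattern, and thresholds are placeholders:

```python
from datetime import timedelta

from prefect.automations import Automation
from prefect.events.actions import CancelFlowRun
from prefect.events.schemas.automations import EventTrigger

# Proactive automation: if a flow run enters Running but emits no terminal
# state event within an hour, cancel it.
Automation(
    name="cancel-long-running-flows",
    trigger=EventTrigger(
        match={"prefect.resource.id": "prefect.flow-run.*"},
        after={"prefect.flow-run.Running"},
        expect={"prefect.flow-run.Completed", "prefect.flow-run.Failed"},
        for_each={"prefect.resource.id"},
        posture="Proactive",
        threshold=1,
        within=timedelta(hours=1),
    ),
    actions=[CancelFlowRun()],
).create()
```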
### `DoNothing` Do nothing when an Automation is triggered **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `DeploymentAction` Base class for Actions that operate on Deployments and need to infer them from events **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_deployment_requires_id` ```python selected_deployment_requires_id(self) ``` ### `RunDeployment` Runs the given deployment with the given parameters **Methods:** #### `selected_deployment_requires_id` ```python selected_deployment_requires_id(self) ``` ### `PauseDeployment` Pauses the given Deployment **Methods:** #### `selected_deployment_requires_id` ```python selected_deployment_requires_id(self) ``` ### `ResumeDeployment` Resumes the given Deployment **Methods:** #### `selected_deployment_requires_id` ```python selected_deployment_requires_id(self) ``` ### `ChangeFlowRunState` Changes the state of a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `CancelFlowRun` Cancels a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `ResumeFlowRun` Resumes a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `SuspendFlowRun` Suspends a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `CallWebhook` Call a webhook when an Automation is triggered. 
**Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `SendNotification` Send a notification when an Automation is triggered **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `WorkPoolAction` Base class for Actions that operate on Work Pools and need to infer them from events **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `PauseWorkPool` Pauses a Work Pool ### `ResumeWorkPool` Resumes a Work Pool ### `WorkQueueAction` Base class for Actions that operate on Work Queues and need to infer them from events **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_work_queue_requires_id` ```python selected_work_queue_requires_id(self) -> Self ``` ### `PauseWorkQueue` Pauses a Work Queue **Methods:** #### `selected_work_queue_requires_id` ```python selected_work_queue_requires_id(self) -> Self ``` ### `ResumeWorkQueue` Resumes a Work Queue **Methods:** #### `selected_work_queue_requires_id` ```python selected_work_queue_requires_id(self) -> Self ``` ### `AutomationAction` Base class for Actions that operate on Automations and need to infer them from events **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_automation_requires_id` ```python selected_automation_requires_id(self) -> Self ``` ### `PauseAutomation` Pauses an Automation **Methods:** #### `selected_automation_requires_id` ```python selected_automation_requires_id(self) -> Self ``` ### `ResumeAutomation` Resumes an Automation **Methods:** #### `selected_automation_requires_id` ```python selected_automation_requires_id(self) -> Self ``` ### `DeclareIncident` Declares an incident for the triggering event. Only available on Prefect Cloud **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-cli-__init__ # `prefect.events.cli` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-cli-automations # `prefect.events.cli.automations` Command line interface for working with automations. ## Functions ### `requires_automations` ```python requires_automations(func: Callable[..., Any]) -> Callable[..., Any] ``` ### `ls` ```python ls() ``` List all automations. ### `inspect` ```python inspect(name: Optional[str] = typer.Argument(None, help="An automation's name"), id: Optional[str] = typer.Option(None, '--id', help="An automation's id"), yaml: bool = typer.Option(False, '--yaml', help='Output as YAML'), json: bool = typer.Option(False, '--json', help='Output as JSON'), output: Optional[str] = typer.Option(None, '--output', '-o', help='Specify an output format. Currently supports: json, yaml')) ``` Inspect an automation.
**Args:** * `name`: the name of the automation to inspect * `id`: the id of the automation to inspect * `yaml`: output as YAML * `json`: output as JSON **Examples:** `$ prefect automation inspect "my-automation"` `$ prefect automation inspect --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` `$ prefect automation inspect "my-automation" --yaml` `$ prefect automation inspect "my-automation" --output json` `$ prefect automation inspect "my-automation" --output yaml` ### `resume` ```python resume(name: Optional[str] = typer.Argument(None, help="An automation's name"), id: Optional[str] = typer.Option(None, '--id', help="An automation's id")) ``` Resume an automation. **Args:** * `name`: the name of the automation to resume * `id`: the id of the automation to resume **Examples:** `$ prefect automation resume "my-automation"` `$ prefect automation resume --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` ### `pause` ```python pause(name: Optional[str] = typer.Argument(None, help="An automation's name"), id: Optional[str] = typer.Option(None, '--id', help="An automation's id")) ``` Pause an automation. **Args:** * `name`: the name of the automation to pause * `id`: the id of the automation to pause **Examples:** `$ prefect automation pause "my-automation"` `$ prefect automation pause --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` ### `delete` ```python delete(name: Optional[str] = typer.Argument(None, help="An automation's name"), id: Optional[str] = typer.Option(None, '--id', help="An automation's id")) ``` Delete an automation. **Args:** * `name`: the name of the automation to delete * `id`: the id of the automation to delete **Examples:** `$ prefect automation delete "my-automation"` `$ prefect automation delete --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` ### `create` ```python create(from_file: Optional[str] = typer.Option(None, '--from-file', '-f', help='Path to YAML or JSON file containing automation(s)'), from_json: Optional[str] = typer.Option(None, '--from-json', '-j', help='JSON string containing automation(s)')) ``` Create one or more automations from a file or JSON string.
**Examples:** `$ prefect automation create --from-file automation.yaml` `$ prefect automation create -f automation.json` `$ prefect automation create --from-json '{"name": "my-automation", "trigger": {...}, "actions": [...]}'` `$ prefect automation create -j '[{"name": "auto1", ...}, {"name": "auto2", ...}]'` # clients Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-clients # `prefect.events.clients` ## Functions ### `http_to_ws` ```python http_to_ws(url: str) -> str ``` ### `events_in_socket_from_api_url` ```python events_in_socket_from_api_url(url: str) -> str ``` ### `events_out_socket_from_api_url` ```python events_out_socket_from_api_url(url: str) -> str ``` ### `get_events_client` ```python get_events_client(reconnection_attempts: int = 10, checkpoint_every: int = 700) -> 'EventsClient' ``` ### `get_events_subscriber` ```python get_events_subscriber(filter: Optional['EventFilter'] = None, reconnection_attempts: int = 10) -> 'PrefectEventSubscriber' ``` ## Classes ### `EventsClient` The abstract interface for all Prefect Events clients **Methods:** #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event ### `NullEventsClient` A Prefect Events client implementation that does nothing **Methods:** #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event ### `AssertingEventsClient` A Prefect Events client that records all events sent to it for inspection during tests. **Methods:** #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event #### `pop_events` ```python pop_events(self) -> List[Event] ``` #### `reset` ```python reset(cls) -> None ``` Reset all captured instances and their events. For use between tests ### `PrefectEventsClient` A Prefect Events client that streams events to a Prefect server **Methods:** #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event ### `AssertingPassthroughEventsClient` A Prefect Events client that BOTH records all events sent to it for inspection during tests AND sends them to a Prefect server. **Methods:** #### `pop_events` ```python pop_events(self) -> list[Event] ``` #### `reset` ```python reset(cls) -> None ``` ### `PrefectCloudEventsClient` A Prefect Events client that streams events to a Prefect Cloud Workspace ### `PrefectEventSubscriber` Subscribes to a Prefect event stream, yielding events as they occur. Example: from prefect.events.clients import PrefectEventSubscriber from prefect.events.filters import EventFilter, EventNameFilter filter = EventFilter(event=EventNameFilter(prefix=\["prefect.flow-run."])) async with PrefectEventSubscriber(filter=filter) as subscriber: async for event in subscriber: print(event.occurred, event.resource.id, event.event) **Methods:** #### `client_name` ```python client_name(self) -> str ``` ### `PrefectCloudEventSubscriber` **Methods:** #### `client_name` ```python client_name(self) -> str ``` ### `PrefectCloudAccountEventSubscriber` # filters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-filters # `prefect.events.filters` ## Classes ### `AutomationFilterCreated` Filter by `Automation.created`. 
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `AutomationFilterName` Filter by `Automation.created`. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `AutomationFilter` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventDataFilter` A base class for filtering event data. **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventOccurredFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventNameFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventResourceFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventRelatedFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventAnyResourceFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? 
#### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventIDFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventTextFilter` Filter by text search across event content. **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventOrder` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `EventFilter` **Methods:** #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? # related Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-related # `prefect.events.related` ## Functions ### `tags_as_related_resources` ```python tags_as_related_resources(tags: Iterable[str]) -> List[RelatedResource] ``` ### `object_as_related_resource` ```python object_as_related_resource(kind: str, role: str, object: Any) -> RelatedResource ``` ### `related_resources_from_run_context` ```python related_resources_from_run_context(client: 'PrefectClient', exclude: Optional[Set[str]] = None) -> List[RelatedResource] ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-schemas-__init__ # `prefect.events.schemas` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-schemas-automations # `prefect.events.schemas.automations` ## Functions ### `trigger_discriminator` ```python trigger_discriminator(value: Any) -> str ``` Discriminator for triggers that defaults to 'event' if no type is specified. ## Classes ### `Posture` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `Trigger` Base class describing a set of criteria that must be satisfied in order to trigger an automation. **Methods:** #### `actions` ```python actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python as_automation(self) -> 'AutomationCore' ``` #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `owner_resource` ```python owner_resource(self) -> Optional[str] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
#### `set_deployment_id` ```python set_deployment_id(self, deployment_id: UUID) -> None ``` ### `ResourceTrigger` Base class for triggers that may filter by the labels of resources. **Methods:** #### `actions` ```python actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python as_automation(self) -> 'AutomationCore' ``` #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `owner_resource` ```python owner_resource(self) -> Optional[str] ``` #### `set_deployment_id` ```python set_deployment_id(self, deployment_id: UUID) -> None ``` ### `EventTrigger` A trigger that fires based on the presence or absence of events within a given period of time. **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `enforce_minimum_within_for_proactive_triggers` ```python enforce_minimum_within_for_proactive_triggers(cls, data: Dict[str, Any]) -> Dict[str, Any] ``` ### `MetricTriggerOperator` ### `PrefectMetric` ### `MetricTriggerQuery` Defines a subset of the Trigger subclass, specific to Metric automations, that specifies the query configurations and breaching conditions for the Automation **Methods:** #### `enforce_minimum_range` ```python enforce_minimum_range(cls, value: timedelta) -> timedelta ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `MetricTrigger` A trigger that fires based on the results of a metric query. **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI ### `CompositeTrigger` Requires some number of triggers to have fired within the given time period.
**Methods:** #### `actions` ```python actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python as_automation(self) -> 'AutomationCore' ``` #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `owner_resource` ```python owner_resource(self) -> Optional[str] ``` #### `set_deployment_id` ```python set_deployment_id(self, deployment_id: UUID) -> None ``` ### `CompoundTrigger` A composite trigger that requires some number of triggers to have fired within the given time period **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `validate_require` ```python validate_require(self) -> Self ``` ### `SequenceTrigger` A composite trigger that requires some number of triggers to have fired within the given time period in a specific order **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI ### `AutomationCore` Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Automation` # deployment_triggers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-schemas-deployment_triggers # `prefect.events.schemas.deployment_triggers` Schemas for defining triggers within a Prefect deployment YAML. This is a separate parallel hierarchy for representing triggers so that they can also include the information necessary to create an automation. These triggers should follow the validation rules of the main Trigger class hierarchy as closely as possible (because otherwise users will get validation errors creating triggers), but we can be more liberal with the defaults here to make it simpler to create them from YAML. ## Functions ### `deployment_trigger_discriminator` ```python deployment_trigger_discriminator(value: Any) -> str ``` Custom discriminator for deployment triggers that defaults to 'event' if no type is specified. ## Classes ### `BaseDeploymentTrigger` Base class describing a set of criteria that must be satisfied in order to trigger an automation. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentEventTrigger` A trigger that fires based on the presence or absence of events within a given period of time. **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `enforce_minimum_within_for_proactive_triggers` ```python enforce_minimum_within_for_proactive_triggers(cls, data: Dict[str, Any]) -> Dict[str, Any] ``` ### `DeploymentMetricTrigger` A trigger that fires based on the results of a metric query. 
**Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI ### `DeploymentCompoundTrigger` A composite trigger that requires some number of triggers to have fired within the given time period **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `validate_require` ```python validate_require(self) -> Self ``` ### `DeploymentSequenceTrigger` A composite trigger that requires some number of triggers to have fired within the given time period in a specific order **Methods:** #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI # events Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-schemas-events # `prefect.events.schemas.events` ## Functions ### `matches` ```python matches(expected: str, value: Optional[str]) -> bool ``` Returns true if the given value matches the expected string, which may include a negation prefix ("!this-value") or a wildcard suffix ("any-value-starting-with*") ## Classes ### `Resource` An observable business object of interest to the user **Methods:** #### `as_label_value_array` ```python as_label_value_array(self) -> List[Dict[str, str]] ``` #### `enforce_maximum_labels` ```python enforce_maximum_labels(self) -> Self ``` #### `get` ```python get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `id` ```python id(self) -> str ``` #### `items` ```python items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python keys(self) -> Iterable[str] ``` #### `labels` ```python labels(self) -> LabelDiver ``` #### `name` ```python name(self) -> Optional[str] ``` #### `prefect_object_id` ```python prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python requires_resource_id(self) -> Self ``` ### `RelatedResource` A Resource with a specific role in an Event **Methods:** #### `enforce_maximum_labels` ```python enforce_maximum_labels(self) -> Self ``` #### `id` ```python id(self) -> str ``` #### `name` ```python name(self) -> Optional[str] ``` #### `prefect_object_id` ```python prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python requires_resource_id(self) -> Self ``` #### `requires_resource_role` ```python requires_resource_role(self) -> Self ``` #### `role` ```python role(self) -> str ``` ### `Event` The client-side view of an event that has happened to a Resource **Methods:** #### `find_resource_label` ```python find_resource_label(self, label: str) -> Optional[str] ``` Finds the value of the given label in this event's resource or one of its related resources. If the label starts with `related::`, searches for the first matching label in a related resource with that role. #### `involved_resources` ```python involved_resources(self) -> Sequence[Resource] ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set.
**Returns:** * A new instance of the model with the reset fields. #### `resource_in_role` ```python resource_in_role(self) -> Mapping[str, RelatedResource] ``` Returns a mapping of roles to the first related resource in that role #### `resources_in_role` ```python resources_in_role(self) -> Mapping[str, Sequence[RelatedResource]] ``` Returns a mapping of roles to related resources in that role ### `ReceivedEvent` The server-side view of an event that has happened to a Resource after it has been received by the server **Methods:** #### `is_set` ```python is_set(self) ``` #### `set` ```python set(self) -> None ``` Set the flag, notifying all waiters. Unlike `asyncio.Event`, waiters may not be notified immediately when this is called; instead, notification will be placed on the owning loop of each waiter for thread safety. #### `wait` ```python wait(self) -> Literal[True] ``` Block until the internal flag is true. If the internal flag is true on entry, return True immediately. Otherwise, block until another `set()` is called, then return True. ### `ResourceSpecification` **Methods:** #### `deepcopy` ```python deepcopy(self) -> 'ResourceSpecification' ``` #### `get` ```python get(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` #### `includes` ```python includes(self, candidates: Iterable[Resource]) -> bool ``` #### `items` ```python items(self) -> Iterable[Tuple[str, List[str]]] ``` #### `matches` ```python matches(self, resource: Resource) -> bool ``` #### `matches_every_resource` ```python matches_every_resource(self) -> bool ``` #### `matches_every_resource_of_kind` ```python matches_every_resource_of_kind(self, prefix: str) -> bool ``` #### `pop` ```python pop(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` # labelling Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-schemas-labelling # `prefect.events.schemas.labelling` ## Classes ### `LabelDiver` The LabelDiver supports templating use cases for any Labelled object, by presenting the labels as a graph of objects that may be accessed by attribute. For example: ```python diver = LabelDiver({ 'hello.world': 'foo', 'hello.world.again': 'bar' }) assert str(diver.hello.world) == 'foo' assert str(diver.hello.world.again) == 'bar' ``` ### `Labelled` **Methods:** #### `as_label_value_array` ```python as_label_value_array(self) -> List[Dict[str, str]] ``` #### `get` ```python get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `items` ```python items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python keys(self) -> Iterable[str] ``` #### `labels` ```python labels(self) -> LabelDiver ``` # utilities Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-utilities # `prefect.events.utilities` ## Functions ### `emit_event` ```python emit_event(event: str, resource: dict[str, str], occurred: datetime.datetime | None = None, related: list[dict[str, str]] | list[RelatedResource] | None = None, payload: dict[str, Any] | None = None, id: UUID | None = None, follows: Event | None = None, **kwargs: dict[str, Any] | None) -> Event | None ``` Send an event to Prefect. **Args:** * `event`: The name of the event that happened. * `resource`: The primary Resource this event concerns. * `occurred`: When the event happened from the sender's perspective. Defaults to the current datetime. 
* `related`: A list of additional Resources involved in this event. * `payload`: An open-ended set of data describing what happened. * `id`: The sender-provided identifier for this event. Defaults to a random UUID. * `follows`: The event that preceded this one. If the preceding event happened more than 5 minutes prior to this event, the follows relationship will not be set. **Returns:** * The event that was emitted if the worker is using a client that emits events, otherwise None # worker Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-events-worker # `prefect.events.worker` ## Functions ### `should_emit_events` ```python should_emit_events() -> bool ``` ### `emit_events_to_cloud` ```python emit_events_to_cloud() -> bool ``` ### `should_emit_events_to_running_server` ```python should_emit_events_to_running_server() -> bool ``` ### `should_emit_events_to_ephemeral_server` ```python should_emit_events_to_ephemeral_server() -> bool ``` ## Classes ### `EventsWorker` **Methods:** #### `attach_related_resources_from_context` ```python attach_related_resources_from_context(self, event: Event) -> None ``` #### `instance` ```python instance(cls: Type[Self], client_type: Optional[Type[EventsClient]] = None) -> Self ``` # exceptions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-exceptions # `prefect.exceptions` Prefect-specific exceptions. ## Functions ### `exception_traceback` ```python exception_traceback(exc: Exception) -> str ``` Convert an exception to a printable string with a traceback ## Classes ### `PrefectException` Base exception type for Prefect errors. ### `CrashedRun` Raised when the result from a crashed run is retrieved. This occurs when a string is attached to the state instead of an exception or if the state's data is null. ### `FailedRun` Raised when the result from a failed run is retrieved and an exception is not attached. This occurs when a string is attached to the state instead of an exception or if the state's data is null. ### `CancelledRun` Raised when the result from a cancelled run is retrieved and an exception is not attached. This occurs when a string is attached to the state instead of an exception or if the state's data is null. ### `PausedRun` Raised when the result from a paused run is retrieved. ### `UnfinishedRun` Raised when the result from a run that is not finished is retrieved. For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state. ### `MissingFlowError` Raised when a given flow name is not found in the expected script. ### `UnspecifiedFlowError` Raised when multiple flows are found in the expected script and no name is given. ### `MissingResult` Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API. ### `ScriptError` Raised when a script errors during evaluation while attempting to load data ### `ParameterTypeError` Raised when a parameter does not pass Pydantic type validation. **Methods:** #### `from_validation_error` ```python from_validation_error(cls, exc: ValidationError) -> Self ``` ### `ParameterBindError` Raised when args and kwargs cannot be converted to parameters. **Methods:** #### `from_bind_failure` ```python from_bind_failure(cls, fn: Callable[..., Any], exc: TypeError, call_args: tuple[Any, ...], call_kwargs: dict[str, Any]) -> Self ``` ### `SignatureMismatchError` Raised when parameters passed to a function do not match its signature.
**Methods:** #### `from_bad_params` ```python from_bad_params(cls, expected_params: list[str], provided_params: list[str]) -> Self ``` ### `ObjectNotFound` Raised when the client receives a 404 (not found) from the API. ### `ObjectAlreadyExists` Raised when the client receives a 409 (conflict) from the API. ### `ObjectLimitReached` Raised when the client receives a 403 (forbidden) from the API due to reaching an object limit (e.g. maximum number of deployments). ### `ObjectUnsupported` Raised when the client receives a 403 (forbidden) from the API due to an unsupported object (i.e. requires a specific Prefect Cloud tier). ### `UpstreamTaskError` Raised when a task relies on the result of another task but that task is not 'COMPLETE' ### `MissingContextError` Raised when a method is called that requires a task or flow run context to be active but one cannot be found. ### `MissingProfileError` Raised when a profile name does not exist. ### `ReservedArgumentError` Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature ### `InvalidNameError` Raised when a name contains characters that are not permitted. ### `PrefectSignal` Base type for signal-like exceptions that should never be caught by users. ### `Abort` Raised when the API sends an 'ABORT' instruction during state proposal. Indicates that the run should exit immediately. ### `Pause` Raised when a flow run is PAUSED and needs to exit for resubmission. ### `ExternalSignal` Base type for external signal-like exceptions that should never be caught by users. ### `TerminationSignal` Raised when a flow run receives a termination signal. ### `PrefectHTTPStatusError` Raised when client receives a `Response` that contains an HTTPStatusError. Used to include API error details in the error messages that the client provides users. **Methods:** #### `from_httpx_error` ```python from_httpx_error(cls: type[Self], httpx_error: HTTPStatusError) -> Self ``` Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`. ### `MappingLengthMismatch` Raised when attempting to call Task.map with arguments of different lengths. ### `MappingMissingIterable` Raised when attempting to call Task.map with all static arguments ### `BlockMissingCapabilities` Raised when a block does not have required capabilities for a given operation. ### `ProtectedBlockError` Raised when an operation is prevented due to block protection. ### `InvalidRepositoryURLError` Raised when an incorrect URL is provided to a GitHub filesystem block. ### `InfrastructureError` A base class for exceptions related to infrastructure blocks ### `InfrastructureNotFound` Raised when infrastructure is missing, likely because it has exited or been deleted. ### `InfrastructureNotAvailable` Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine it cannot be managed. ### `NotPausedError` Raised when attempting to unpause a run that isn't paused. ### `FlowPauseTimeout` Raised when a flow pause times out ### `FlowRunWaitTimeout` Raised when a flow run takes longer than a given timeout ### `PrefectImportError` An error raised when a Prefect object cannot be imported due to a move or removal. ### `SerializationError` Raised when an object cannot be serialized. ### `ConfigurationError` Raised when a configuration is invalid. ### `ProfileSettingsValidationError` Raised when a profile settings are invalid. 
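Most of the exception types in this module surface through Prefect client calls rather than being raised directly by user code. The snippet below is a minimal sketch (not taken from the Prefect docs) of handling `ObjectNotFound` and `PrefectHTTPStatusError` around a flow-run lookup; the flow run ID shown is a made-up placeholder.

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.exceptions import ObjectNotFound, PrefectHTTPStatusError


async def describe_flow_run(flow_run_id: UUID) -> None:
    # get_client() yields an async PrefectClient bound to the active profile
    async with get_client() as client:
        try:
            flow_run = await client.read_flow_run(flow_run_id)
            print(flow_run.name, flow_run.state_type)
        except ObjectNotFound:
            # The API returned a 404 for this flow run ID
            print(f"No flow run with id {flow_run_id}")
        except PrefectHTTPStatusError as exc:
            # Any other non-2xx response; the message includes the API's error details
            print(f"API error: {exc}")


if __name__ == "__main__":
    # Hypothetical ID used only for illustration
    asyncio.run(describe_flow_run(UUID("00000000-0000-0000-0000-000000000000")))
```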
### `HashError` Raised when hashing objects fails # filesystems Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-filesystems # `prefect.filesystems` ## Classes ### `ReadableFileSystem` **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. 
#### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. 
Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `WritableFileSystem` **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. 
* `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> None ``` ### `ReadableDeploymentStorage` **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": }` * `{"block_document_slug": }` If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. 
If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `WritableDeploymentStorage` **Methods:** #### `aload` ```python aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `/` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `annotation_refers_to_block_class` ```python annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `block_initialization` ```python block_initialization(self) -> None ``` #### `delete` ```python delete(cls, name: str, client: Optional['PrefectClient'] = None) ``` #### `get_block_capabilities` ```python get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `is_block_class` ```python is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.

If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

**Examples:**

Load from a Block subclass with a block document name:

```python
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Custom.load("my-custom-message")
```

Load from Block with a block document slug:

```python
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Block.load("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = Custom.load("my-custom-message", validate=False)

# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `load_from_ref`

```python
load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

Provided reference can be a block document ID, or a reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised.

If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.

If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document.
If not provided, the default client will be injected. **Raises:** * `ValueError`: If invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the * block document with the specified name. #### `model_dump` ```python model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `register_type_and_schema` ```python register_type_and_schema(cls, client: Optional['PrefectClient'] = None) ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `save` ```python save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) ``` Saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` ### `LocalFileSystem` Store data as a file on a local file system. **Methods:** #### `aget_directory` ```python aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the block's basepath to the current working directory. #### `aput_directory` ```python aput_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the current working directory to the block's basepath. An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore. #### `aread_path` ```python aread_path(self, path: str) -> bytes ``` #### `awrite_path` ```python awrite_path(self, path: str, content: bytes) -> str ``` #### `cast_pathlib` ```python cast_pathlib(cls, value: str | Path | None) -> str | None ``` #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the block's basepath to the current working directory. 
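Before the remaining method listings, here is a minimal sketch of how the `LocalFileSystem` methods documented in this section fit together; the basepath, block document name, and file paths are illustrative assumptions only, and in an async context the `a`-prefixed variants (`aread_path`, `awrite_path`, `aget_directory`, `aput_directory`) would be awaited instead:

```python
from prefect.filesystems import LocalFileSystem

# Hypothetical basepath and block document name, for illustration only
fs = LocalFileSystem(basepath="/tmp/prefect-demo")
fs.save("demo-local-fs", overwrite=True)

# Write bytes to a path relative to the block's basepath, then read them back
fs.write_path("results/output.txt", b"hello from a flow")
assert fs.read_path("results/output.txt") == b"hello from a flow"

# Copy the block's basepath contents into a local directory
fs.get_directory(local_path="./downloaded")
```

Saving the block as a block document means the same configuration can later be retrieved elsewhere with `LocalFileSystem.load("demo-local-fs")`.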
#### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the current working directory to the block's basepath. An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore. #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> str ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> None ``` ### `RemoteFileSystem` Store data as a file on a remote file system. Supports any remote file system supported by `fsspec`. The file system is specified using a protocol. For example, "s3://my-bucket/my-folder/" will use S3. **Methods:** #### `check_basepath` ```python check_basepath(cls, value: str) -> str ``` #### `filesystem` ```python filesystem(self) -> fsspec.AbstractFileSystem ``` #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory. #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None, overwrite: bool = True) -> int ``` Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath. #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> str ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> None ``` ### `SMB` Store data as a file on a SMB share. **Methods:** #### `basepath` ```python basepath(self) -> str ``` #### `filesystem` ```python filesystem(self) -> RemoteFileSystem ``` #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> bytes ``` Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory. #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Uploads a directory from a given local path to a remote directory. 
Defaults to uploading the entire contents of the current working directory to the block's basepath. #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `read_path` ```python read_path(self, path: str) -> bytes ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> str ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> None ``` ### `NullFileSystem` A file system that does not store any data. **Methods:** #### `get_directory` ```python get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python read_path(self, path: str) -> None ``` #### `write_path` ```python write_path(self, path: str, content: bytes) -> None ``` # flow_engine Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-flow_engine # `prefect.flow_engine` ## Functions ### `load_flow_run` ```python load_flow_run(flow_run_id: UUID) -> FlowRun ``` ### `load_flow` ```python load_flow(flow_run: FlowRun) -> Flow[..., Any] ``` ### `load_flow_and_flow_run` ```python load_flow_and_flow_run(flow_run_id: UUID) -> tuple[FlowRun, Flow[..., Any]] ``` ### `run_flow_sync` ```python run_flow_sync(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[Any]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_flow_async` ```python run_flow_async(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[Any]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_generator_flow_sync` ```python run_generator_flow_sync(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[Any]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> Generator[R, None, None] ``` ### `run_generator_flow_async` ```python run_generator_flow_async(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[R]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> AsyncGenerator[R, None] ``` ### `run_flow` ```python run_flow(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[R]]] = None, return_type: Literal['state', 'result'] = 'result', error_logger: Optional[logging.Logger] = None, context: Optional[dict[str, Any]] = None) -> R | State | None | Coroutine[Any, Any, R | State | None] | Generator[R, None, None] | AsyncGenerator[R, None] ``` ### `run_flow_in_subprocess` ```python run_flow_in_subprocess(flow: 'Flow[..., Any]', flow_run: 'FlowRun | None' = None, parameters: dict[str, Any] | None = None, wait_for: Iterable[PrefectFuture[Any]] | None = None, context: dict[str, Any] | None = None) -> 
multiprocessing.context.SpawnProcess ``` Run a flow in a subprocess. Note the result of the flow will only be accessible if the flow is configured to persist its result. **Args:** * `flow`: The flow to run. * `flow_run`: The flow run object containing run metadata. * `parameters`: The parameters to use when invoking the flow. * `wait_for`: The futures to wait for before starting the flow. * `context`: A serialized context to hydrate before running the flow. If not provided, the current context will be used. A serialized context should be provided if this function is called in a separate memory space from the parent run (e.g. in a subprocess or on another machine). **Returns:** * A multiprocessing.context.SpawnProcess representing the process that is running the flow. ## Classes ### `FlowRunTimeoutError` Raised when a flow run exceeds its defined timeout. ### `BaseFlowRunEngine` **Methods:** #### `cancel_all_tasks` ```python cancel_all_tasks(self) -> None ``` #### `is_pending` ```python is_pending(self) -> bool ``` #### `is_running` ```python is_running(self) -> bool ``` #### `state` ```python state(self) -> State ``` ### `FlowRunEngine` **Methods:** #### `begin_run` ```python begin_run(self) -> State ``` #### `call_flow_fn` ```python call_flow_fn(self) -> Union[R, Coroutine[Any, Any, R]] ``` Convenience method to call the flow function. Returns a coroutine if the flow is async. #### `call_hooks` ```python call_hooks(self, state: Optional[State] = None) -> None ``` #### `client` ```python client(self) -> SyncPrefectClient ``` #### `create_flow_run` ```python create_flow_run(self, client: SyncPrefectClient) -> FlowRun ``` #### `handle_crash` ```python handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python handle_exception(self, exc: Exception, msg: Optional[str] = None, result_store: Optional[ResultStore] = None) -> State ``` #### `handle_success` ```python handle_success(self, result: R) -> R ``` #### `handle_timeout` ```python handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python initialize_run(self) ``` Enters a client context and creates a flow run if needed. #### `load_subflow_run` ```python load_subflow_run(self, parent_task_run: TaskRun, client: SyncPrefectClient, context: FlowRunContext) -> Union[FlowRun, None] ``` This method attempts to load an existing flow run for a subflow task run, if appropriate. If the parent task run is in a final but not COMPLETED state, and not being rerun, then we attempt to load an existing flow run instead of creating a new one. This will prevent the engine from running the subflow again. If no existing flow run is found, or if the subflow should be rerun, then no flow run is returned. #### `result` ```python result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python run_context(self) ``` #### `set_state` ```python set_state(self, state: State, force: bool = False) -> State ``` #### `setup_run_context` ```python setup_run_context(self, client: Optional[SyncPrefectClient] = None) ``` #### `start` ```python start(self) -> Generator[None, None, None] ``` ### `AsyncFlowRunEngine` Async version of the flow run engine. NOTE: This has not been fully asyncified yet which may lead to async flows not being fully asyncified. **Methods:** #### `begin_run` ```python begin_run(self) -> State ``` #### `call_flow_fn` ```python call_flow_fn(self) -> Coroutine[Any, Any, R] ``` Convenience method to call the flow function. Returns a coroutine if the flow is async. 
#### `call_hooks` ```python call_hooks(self, state: Optional[State] = None) -> None ``` #### `client` ```python client(self) -> PrefectClient ``` #### `create_flow_run` ```python create_flow_run(self, client: PrefectClient) -> FlowRun ``` #### `handle_crash` ```python handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python handle_exception(self, exc: Exception, msg: Optional[str] = None, result_store: Optional[ResultStore] = None) -> State ``` #### `handle_success` ```python handle_success(self, result: R) -> R ``` #### `handle_timeout` ```python handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python initialize_run(self) ``` Enters a client context and creates a flow run if needed. #### `load_subflow_run` ```python load_subflow_run(self, parent_task_run: TaskRun, client: PrefectClient, context: FlowRunContext) -> Union[FlowRun, None] ``` This method attempts to load an existing flow run for a subflow task run, if appropriate. If the parent task run is in a final but not COMPLETED state, and not being rerun, then we attempt to load an existing flow run instead of creating a new one. This will prevent the engine from running the subflow again. If no existing flow run is found, or if the subflow should be rerun, then no flow run is returned. #### `result` ```python result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python run_context(self) ``` #### `set_state` ```python set_state(self, state: State, force: bool = False) -> State ``` #### `setup_run_context` ```python setup_run_context(self, client: Optional[PrefectClient] = None) ``` #### `start` ```python start(self) -> AsyncGenerator[None, None] ``` # flow_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-flow_runs # `prefect.flow_runs` ## Functions ### `wait_for_flow_run` ```python wait_for_flow_run(flow_run_id: UUID, timeout: int | None = 10800, poll_interval: int | None = None, client: 'PrefectClient | None' = None, log_states: bool = False) -> FlowRun ``` Waits for the prefect flow run to finish and returns the FlowRun **Args:** * `flow_run_id`: The flow run ID for the flow run to wait for. * `timeout`: The wait timeout in seconds. Defaults to 10800 (3 hours). * `poll_interval`: Deprecated; polling is no longer used to wait for flow runs. * `client`: Optional Prefect client. If not provided, one will be injected. * `log_states`: If True, log state changes. Defaults to False. **Returns:** * The finished flow run. **Raises:** * `prefect.exceptions.FlowWaitTimeout`: If flow run goes over the timeout. 
**Examples:** Create a flow run for a deployment and wait for it to finish: ```python import asyncio from prefect.client.orchestration import get_client from prefect.flow_runs import wait_for_flow_run async def main(): async with get_client() as client: flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id") flow_run = await wait_for_flow_run(flow_run_id=flow_run.id) print(flow_run.state) if __name__ == "__main__": asyncio.run(main()) ``` Trigger multiple flow runs and wait for them to finish: ```python import asyncio from prefect.client.orchestration import get_client from prefect.flow_runs import wait_for_flow_run async def main(num_runs: int): async with get_client() as client: flow_runs = [ await client.create_flow_run_from_deployment(deployment_id="my-deployment-id") for _ in range(num_runs) ] coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs] finished_flow_runs = await asyncio.gather(*coros) print([flow_run.state for flow_run in finished_flow_runs]) if __name__ == "__main__": asyncio.run(main(num_runs=10)) ``` ### `pause_flow_run` ```python pause_flow_run(wait_for_input: Type[T] | None = None, timeout: int = 3600, poll_interval: int = 10, key: str | None = None) -> T | None ``` Pauses the current flow run by blocking execution until resumed. When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time. **Args:** * `timeout`: the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. * `poll_interval`: The number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds. * `key`: An optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior. * `wait_for_input`: a subclass of `RunInput` or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function. Example: ```python @task def task_one(): for i in range(3): sleep(1) @flow def my_flow(): terminal_state = task_one.submit(return_state=True) if terminal_state.type == StateType.COMPLETED: print("Task one succeeded! Pausing flow run..") pause_flow_run(timeout=2) else: print("Task one failed. Skipping pause flow run..") ``` ### `suspend_flow_run` ```python suspend_flow_run(wait_for_input: Type[T] | None = None, flow_run_id: UUID | None = None, timeout: int | None = 3600, key: str | None = None, client: 'PrefectClient | None' = None) -> T | None ``` Suspends a flow run by stopping code execution until resumed. When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. 
In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the `persist_result` option.

**Args:**

* `flow_run_id`: a flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied, will attempt to suspend the current flow run.
* `timeout`: the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming.
* `key`: An optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior.
* `wait_for_input`: a subclass of `RunInput` or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function.

### `resume_flow_run`

```python
resume_flow_run(flow_run_id: UUID, run_input: dict[str, Any] | None = None) -> None
```

Resumes a paused flow.

**Args:**

* `flow_run_id`: the flow_run_id to resume
* `run_input`: a dictionary of inputs to provide to the flow run.

# flows

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-flows

# `prefect.flows`

Module containing the base workflow class and decorator - for most use cases, using the `@flow` decorator is preferred.

## Functions

### `bind_flow_to_infrastructure`

```python
bind_flow_to_infrastructure(flow: Flow[P, R], work_pool: str, worker_cls: type['BaseWorker[Any, Any, Any]'], job_variables: dict[str, Any] | None = None) -> InfrastructureBoundFlow[P, R]
```

### `select_flow`

```python
select_flow(flows: Iterable[Flow[P, R]], flow_name: Optional[str] = None, from_message: Optional[str] = None) -> Flow[P, R]
```

Select the only flow in an iterable or a flow specified by name.

**Returns:**

* A single flow object

**Raises:**

* `MissingFlowError`: If no flows exist in the iterable
* `MissingFlowError`: If a flow name is provided and that flow does not exist
* `UnspecifiedFlowError`: If multiple flows exist but no flow name was provided

### `load_flow_from_entrypoint`

```python
load_flow_from_entrypoint(entrypoint: str, use_placeholder_flow: bool = True) -> Flow[P, Any]
```

Extract a flow object from a script at an entrypoint by running all of the code in the file.

**Args:**

* `entrypoint`: a string in the format `<path_to_script>:<flow_func_name>` or a string in the format `<path_to_script>:<class_name>.<flow_method_name>` or a module path to a flow function
* `use_placeholder_flow`: if True, use a placeholder Flow object if the actual flow object cannot be loaded from the entrypoint (e.g. dependencies are missing)

**Returns:**

* The flow object from the script

**Raises:**

* `ScriptError`: If an exception is encountered while running the script
* `MissingFlowError`: If the flow function specified in the entrypoint does not exist

### `load_function_and_convert_to_flow`

```python
load_function_and_convert_to_flow(entrypoint: str) -> Flow[P, Any]
```

Loads a function from an entrypoint and converts it to a flow if it is not already a flow.

### `serve`

```python
serve(*args: 'RunnerDeployment', **kwargs: Any) -> None
```

Serve the provided list of deployments.

**Args:**

* `*args`: A list of deployments to serve.
* `pause_on_shutdown`: A boolean for whether or not to automatically pause deployment schedules on shutdown.
* `print_starting_message`: Whether or not to print a message to the console on startup.
* `limit`: The maximum number of runs that can be executed concurrently.
* `**kwargs`: Additional keyword arguments to pass to the runner.

**Examples:**

Prepare two deployments and serve them:

```python
import datetime
from prefect import flow, serve

@flow
def my_flow(name):
    print(f"hello {name}")

@flow
def my_other_flow(name):
    print(f"goodbye {name}")

if __name__ == "__main__":
    # Run once a day
    hello_deploy = my_flow.to_deployment(
        "hello", tags=["dev"], interval=datetime.timedelta(days=1)
    )

    # Run every Sunday at 4:00 AM
    bye_deploy = my_other_flow.to_deployment(
        "goodbye", tags=["dev"], cron="0 4 * * sun"
    )

    serve(hello_deploy, bye_deploy)
```

### `aserve`

```python
aserve(*args: 'RunnerDeployment', **kwargs: Any) -> None
```

Asynchronously serve the provided list of deployments.

Use `serve` instead if calling from a synchronous context.

**Args:**

* `*args`: A list of deployments to serve.
* `pause_on_shutdown`: A boolean for whether or not to automatically pause deployment schedules on shutdown.
* `print_starting_message`: Whether or not to print a message to the console on startup.
* `limit`: The maximum number of runs that can be executed concurrently.
* `**kwargs`: Additional keyword arguments to pass to the runner.

**Examples:**

Prepare a deployment and an asynchronous initialization function and serve them:

```python
import asyncio
import datetime

from prefect import flow, aserve, get_client


async def init():
    await set_concurrency_limit()


async def set_concurrency_limit():
    async with get_client() as client:
        await client.create_concurrency_limit(tag='dev', concurrency_limit=3)


@flow
async def my_flow(name):
    print(f"hello {name}")


async def main():
    # Initialization function
    await init()

    # Run once a day
    hello_deploy = await my_flow.to_deployment(
        "hello", tags=["dev"], interval=datetime.timedelta(days=1)
    )

    await aserve(hello_deploy)


if __name__ == "__main__":
    asyncio.run(main())
```

### `load_flow_from_flow_run`

```python
load_flow_from_flow_run(client: 'PrefectClient', flow_run: 'FlowRun', ignore_storage: bool = False, storage_base_path: Optional[str] = None, use_placeholder_flow: bool = True) -> Flow[..., Any]
```

Load a flow from the location/script provided in a deployment's storage document.

If `ignore_storage=True` is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally.

### `load_placeholder_flow`

```python
load_placeholder_flow(entrypoint: str, raises: Exception) -> Flow[P, Any]
```

Load a placeholder flow that is initialized with the same arguments as the flow specified in the entrypoint. If called the flow will raise `raises`.

This is useful when a flow can't be loaded due to missing dependencies or other issues but the base metadata defining the flow is still needed.

**Args:**

* `entrypoint`: a string in the format `<path_to_script>:<flow_func_name>` or a module path to a flow function
* `raises`: an exception to raise when the flow is called

### `safe_load_flow_from_entrypoint`

```python
safe_load_flow_from_entrypoint(entrypoint: str) -> Optional[Flow[P, Any]]
```

Safely load a Prefect flow from an entrypoint string. Returns None if loading fails.

**Args:**

* `entrypoint`: A string identifying the flow to load.
Can be in one of the following formats:

* `<path_to_script>:<flow_func_name>`
* `<path_to_script>:<class_name>.<flow_method_name>`
* `<module_path>.<flow_func_name>`

**Returns:**

* `Optional[Flow]`: The loaded Prefect flow object, or None if loading fails due to errors (e.g. unresolved dependencies, syntax errors, or missing objects).

### `load_flow_arguments_from_entrypoint`

```python
load_flow_arguments_from_entrypoint(entrypoint: str, arguments: Optional[Union[list[str], set[str]]] = None) -> dict[str, Any]
```

Extract flow arguments from an entrypoint string.

Loads the source code of the entrypoint and extracts the flow arguments from the `flow` decorator.

**Args:**

* `entrypoint`: a string in the format `<path_to_script>:<flow_func_name>` or a module path to a flow function

### `is_entrypoint_async`

```python
is_entrypoint_async(entrypoint: str) -> bool
```

Determine if the function specified in the entrypoint is asynchronous.

**Args:**

* `entrypoint`: A string in the format `<path_to_script>:<func_name>` or a module path to a function.

**Returns:**

* True if the function is asynchronous, False otherwise.

## Classes

### `FlowStateHook`

A callable that is invoked when a flow enters a given state.

### `Flow`

A Prefect workflow definition.

Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables `P` and `R` for "Parameters" and "Returns" respectively.

**Args:**

* `fn`: The function defining the workflow.
* `name`: An optional name for the flow; if not provided, the name will be inferred from the given function.
* `version`: An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
* `flow_run_name`: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
* `task_runner`: An optional task runner to use for task execution within the flow; if not provided, a `ThreadPoolTaskRunner` will be used.
* `description`: An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
* `timeout_seconds`: An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
* `validate_parameters`: By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as `x: int` and "5" is passed, it will be resolved to `5`. If set to `False`, no validation will be performed on flow parameters.
* `retries`: An optional number of times to retry on flow run failure.
* `retry_delay_seconds`: An optional number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero.
* `persist_result`: An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to `None`, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
* `result_storage`: An optional block to use to persist the result of this flow. This value will be used as the default for any tasks in this flow.
If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow. * `result_serializer`: An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER` will be used unless called as a subflow, at which point the default will be loaded from the parent flow. * `on_failure`: An optional list of callables to run when the flow enters a failed state. * `on_completion`: An optional list of callables to run when the flow enters a completed state. * `on_cancellation`: An optional list of callables to run when the flow enters a cancelling state. * `on_crashed`: An optional list of callables to run when the flow enters a crashed state. * `on_running`: An optional list of callables to run when the flow enters a running state. **Methods:** #### `afrom_source` ```python afrom_source(cls, source: Union[str, Path, 'RunnerStorage', ReadableDeploymentStorage], entrypoint: str) -> 'Flow[..., Any]' ``` Loads a flow from a remote source asynchronously. **Args:** * `source`: Either a URL to a git repository or a storage object. * `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`. **Returns:** * A new `Flow` instance. **Examples:** Load a flow from a public git repository: ```python from prefect import flow from prefect.runner.storage import GitRepository from prefect.blocks.system import Secret my_flow = flow.from_source( source="https://github.com/org/repo.git", entrypoint="flows.py:my_flow", ) my_flow() ``` Load a flow from a private git repository using an access token stored in a `Secret` block: ```python from prefect import flow from prefect.runner.storage import GitRepository from prefect.blocks.system import Secret my_flow = flow.from_source( source=GitRepository( url="https://github.com/org/repo.git", credentials={"access_token": Secret.load("github-access-token")} ), entrypoint="flows.py:my_flow", ) my_flow() ``` Load a flow from a local directory: ```python # from_local_source.py from pathlib import Path from prefect import flow @flow(log_prints=True) def my_flow(name: str = "world"): print(f"Hello {name}! 
I'm a flow from a Python script!") if __name__ == "__main__": my_flow.from_source( source=str(Path(__file__).parent), entrypoint="from_local_source.py:my_flow", ).deploy( name="my-deployment", parameters=dict(name="Marvin"), work_pool_name="local", ) ``` #### `ato_deployment` ```python ato_deployment(self, name: str, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Asynchronously creates a runner deployment object for this flow. **Args:** * `name`: The name to give the created deployment. * `interval`: An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this deployment. * `rrule`: An rrule schedule of when to execute runs of this deployment. * `paused`: Whether or not to set this deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as `timezone`. * `concurrency_limit`: The maximum number of runs of this deployment that can run at the same time. * `parameters`: A dictionary of default parameter values to pass to runs of this deployment. * `triggers`: A list of triggers that will kick off runs of this deployment. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment. * `work_pool_name`: The name of the work pool to use for this deployment. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for * `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. 
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. **Examples:** Prepare two deployments and serve them: ```python from prefect import flow, serve @flow def my_flow(name): print(f"hello {name}") @flow def my_other_flow(name): print(f"goodbye {name}") if __name__ == "__main__": hello_deploy = my_flow.to_deployment("hello", tags=["dev"]) bye_deploy = my_other_flow.to_deployment("goodbye", tags=["dev"]) serve(hello_deploy, bye_deploy) ``` #### `deploy` ```python deploy(self, name: str, work_pool_name: Optional[str] = None, image: Optional[Union[str, 'DockerImage']] = None, build: bool = True, push: bool = True, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, interval: Optional[Union[int, float, datetime.timedelta]] = None, cron: Optional[str] = None, rrule: Optional[str] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional[list[Schedule]] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, parameters: Optional[dict[str, Any]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, print_next_steps: bool = True, ignore_warnings: bool = False, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> UUID ``` Deploys a flow to run on dynamic infrastructure via a work pool. By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule. If you want to use an existing image, you can pass `build=False` to skip building and pushing an image. **Args:** * `name`: The name to give the created deployment. * `work_pool_name`: The name of the work pool to use for this deployment. Defaults to the value of `PREFECT_DEFAULT_WORK_POOL_NAME`. * `image`: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments. * `build`: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime. * `push`: Whether or not to skip pushing the built image to a registry. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings. * `interval`: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules. * `cron`: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules. * `rrule`: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules. * `triggers`: A list of triggers that will kick off runs of this deployment. 
* `paused`: Whether or not to set this deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`. * `concurrency_limit`: The maximum number of runs that can be executed concurrently. * `parameters`: A dictionary of default parameter values to pass to runs of this deployment. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment. * `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. * `print_next_steps_message`: Whether or not to print a message with next steps after deploying the deployments. * `ignore_warnings`: Whether or not to ignore warnings about the work pool type. * `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. Returns: The ID of the created/updated deployment. **Examples:** Deploy a local flow to a work pool: ```python from prefect import flow @flow def my_flow(name): print(f"hello {name}") if __name__ == "__main__": my_flow.deploy( "example-deployment", work_pool_name="my-work-pool", image="my-repository/my-image:dev", ) ``` Deploy a remotely stored flow to a work pool: ```python from prefect import flow if __name__ == "__main__": flow.from_source( source="https://github.com/org/repo.git", entrypoint="flows.py:my_flow", ).deploy( "example-deployment", work_pool_name="my-work-pool", image="my-repository/my-image:dev", ) ``` #### `from_source` ```python from_source(cls, source: Union[str, Path, 'RunnerStorage', ReadableDeploymentStorage], entrypoint: str) -> 'Flow[..., Any]' ``` Loads a flow from a remote source. **Args:** * `source`: Either a URL to a git repository or a storage object. * `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`. **Returns:** * A new `Flow` instance. 
**Examples:** Load a flow from a public git repository: ```python from prefect import flow from prefect.runner.storage import GitRepository from prefect.blocks.system import Secret my_flow = flow.from_source( source="https://github.com/org/repo.git", entrypoint="flows.py:my_flow", ) my_flow() ``` Load a flow from a private git repository using an access token stored in a `Secret` block: ```python from prefect import flow from prefect.runner.storage import GitRepository from prefect.blocks.system import Secret my_flow = flow.from_source( source=GitRepository( url="https://github.com/org/repo.git", credentials={"access_token": Secret.load("github-access-token")} ), entrypoint="flows.py:my_flow", ) my_flow() ``` Load a flow from a local directory: ```python # from_local_source.py from pathlib import Path from prefect import flow @flow(log_prints=True) def my_flow(name: str = "world"): print(f"Hello {name}! I'm a flow from a Python script!") if __name__ == "__main__": my_flow.from_source( source=str(Path(__file__).parent), entrypoint="from_local_source.py:my_flow", ).deploy( name="my-deployment", parameters=dict(name="Marvin"), work_pool_name="local", ) ``` #### `isclassmethod` ```python isclassmethod(self) -> bool ``` #### `ismethod` ```python ismethod(self) -> bool ``` #### `isstaticmethod` ```python isstaticmethod(self) -> bool ``` #### `on_cancellation` ```python on_cancellation(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_completion` ```python on_completion(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_crashed` ```python on_crashed(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_failure` ```python on_failure(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_running` ```python on_running(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `serialize_parameters` ```python serialize_parameters(self, parameters: dict[str, Any | PrefectFuture[Any] | State]) -> dict[str, Any] ``` Convert parameters to a serializable form. Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips. #### `serve` ```python serve(self, name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, global_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, parameters: Optional[dict[str, Any]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, pause_on_shutdown: bool = True, print_starting_message: bool = True, limit: Optional[int] = None, webserver: bool = False, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH) -> None ``` Creates a deployment for this flow and starts a runner to monitor for scheduled work. **Args:** * `name`: The name to give the created deployment. Defaults to the name of the flow. * `interval`: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. 
Also accepts an iterable of numbers or timedelta to create multiple schedules. * `cron`: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules. * `rrule`: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules. * `triggers`: A list of triggers that will kick off runs of this deployment. * `paused`: Whether or not to set this deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`. * `global_limit`: The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment. * `parameters`: A dictionary of default parameter values to pass to runs of this deployment. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment. * `pause_on_shutdown`: If True, provided schedule will be paused when the serve function is stopped. If False, the schedules will continue running. * `print_starting_message`: Whether or not to print the starting message when flow is served. * `limit`: The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use `global_limit`. * `webserver`: Whether or not to start a monitoring webserver for this flow. * `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. 
**Examples:** Serve a flow: ```python from prefect import flow @flow def my_flow(name): print(f"hello {name}") if __name__ == "__main__": my_flow.serve("example-deployment") ``` Serve a flow and run it every hour: ```python from prefect import flow @flow def my_flow(name): print(f"hello {name}") if __name__ == "__main__": my_flow.serve("example-deployment", interval=3600) ``` #### `to_deployment` ```python to_deployment(self, name: str, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Creates a runner deployment object for this flow. **Args:** * `name`: The name to give the created deployment. * `interval`: An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this deployment. * `rrule`: An rrule schedule of when to execute runs of this deployment. * `paused`: Whether or not to set this deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as `timezone`. * `concurrency_limit`: The maximum number of runs of this deployment that can run at the same time. * `parameters`: A dictionary of default parameter values to pass to runs of this deployment. * `triggers`: A list of triggers that will kick off runs of this deployment. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided. * `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment. * `work_pool_name`: The name of the work pool to use for this deployment. * `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided the default work queue for the work pool will be used. * `job_variables`: Settings used to override the values specified default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for * `entrypoint_type`: Type of entrypoint to use for the deployment. 
When using a module path entrypoint, ensure that the module will be importable in the execution environment. * `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud. **Examples:** Prepare two deployments and serve them: ```python from prefect import flow, serve @flow def my_flow(name): print(f"hello {name}") @flow def my_other_flow(name): print(f"goodbye {name}") if __name__ == "__main__": hello_deploy = my_flow.to_deployment("hello", tags=["dev"]) bye_deploy = my_other_flow.to_deployment("goodbye", tags=["dev"]) serve(hello_deploy, bye_deploy) ``` #### `validate_parameters` ```python validate_parameters(self, parameters: dict[str, Any]) -> dict[str, Any] ``` Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations. **Returns:** * A new dict of parameters that have been cast to the appropriate types **Raises:** * `ParameterTypeError`: if the provided parameters are not valid #### `visualize` ```python visualize(self, *args: 'P.args', **kwargs: 'P.kwargs') ``` Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG. **Raises:** * `- ImportError`: If `graphviz` isn't installed. * `- GraphvizExecutableNotFoundError`: If the `dot` executable isn't found. * `- FlowVisualizationError`: If the flow can't be visualized for any other reason. #### `with_options` ```python with_options(self) -> 'Flow[P, R]' ``` Create a new flow from the current object, updating provided options. **Args:** * `name`: A new name for the flow. * `version`: A new version for the flow. * `description`: A new description for the flow. * `flow_run_name`: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string. * `task_runner`: A new task runner for the flow. * `timeout_seconds`: A new number of seconds to fail the flow after if still running. * `validate_parameters`: A new value indicating if flow calls should validate given parameters. * `retries`: A new number of times to retry on flow run failure. * `retry_delay_seconds`: A new number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero. * `persist_result`: A new option for enabling or disabling result persistence. * `result_storage`: A new storage type to use for results. * `result_serializer`: A new serializer to use for results. * `cache_result_in_memory`: A new value indicating if the flow's result should be cached in memory. * `on_failure`: A new list of callables to run when the flow enters a failed state. * `on_completion`: A new list of callables to run when the flow enters a completed state. * `on_cancellation`: A new list of callables to run when the flow enters a cancelling state. * `on_crashed`: A new list of callables to run when the flow enters a crashed state. * `on_running`: A new list of callables to run when the flow enters a running state. **Returns:** * A new `Flow` instance. 
Examples: Create a new flow from an existing flow and update the name: ```python from prefect import flow @flow(name="My flow") def my_flow(): return 1 new_flow = my_flow.with_options(name="My new flow") ``` Create a new flow from an existing flow, update the task runner, and call it without an intermediate variable: ```python from prefect.task_runners import ThreadPoolTaskRunner @flow def my_flow(x, y): return x + y state = my_flow.with_options(task_runner=ThreadPoolTaskRunner)(1, 3) assert state.result() == 4 ``` ### `FlowDecorator` ### `InfrastructureBoundFlow` A flow that is bound to running on a specific infrastructure. **Methods:** #### `submit` ```python submit(self, *args: P.args, **kwargs: P.kwargs) -> PrefectFlowRunFuture[R] ``` Submit the flow to run on remote infrastructure. This method will spin up a local worker to submit the flow to remote infrastructure. To submit the flow to remote infrastructure without spinning up a local worker, use `submit_to_work_pool` instead. **Args:** * `*args`: Positional arguments to pass to the flow. * `**kwargs`: Keyword arguments to pass to the flow. **Returns:** * A `PrefectFlowRunFuture` that can be used to retrieve the result of the flow run. **Examples:** Submit a flow to run on Kubernetes: ```python from prefect import flow from prefect_kubernetes.experimental import kubernetes @kubernetes(work_pool="my-kubernetes-work-pool") @flow def my_flow(x: int, y: int): return x + y future = my_flow.submit(x=1, y=2) result = future.result() print(result) ``` #### `submit_to_work_pool` ```python submit_to_work_pool(self, *args: P.args, **kwargs: P.kwargs) -> PrefectFlowRunFuture[R] ``` Submits the flow to run on remote infrastructure. This method will create a flow run for an existing worker to submit to remote infrastructure. If you don't have a worker available, use `submit` instead. **Args:** * `*args`: Positional arguments to pass to the flow. * `**kwargs`: Keyword arguments to pass to the flow. **Returns:** * A `PrefectFlowRunFuture` that can be used to retrieve the result of the flow run. **Examples:** Dispatch a flow to run on Kubernetes: ```python from prefect import flow from prefect_kubernetes.experimental import kubernetes @kubernetes(work_pool="my-kubernetes-work-pool") @flow def my_flow(x: int, y: int): return x + y future = my_flow.submit_to_work_pool(x=1, y=2) result = future.result() print(result) ``` #### `with_options` ```python with_options(self) -> 'InfrastructureBoundFlow[P, R]' ``` # futures Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-futures # `prefect.futures` ## Functions ### `as_completed` ```python as_completed(futures: list[PrefectFuture[R]], timeout: float | None = None) -> Generator[PrefectFuture[R], None] ``` ### `wait` ```python wait(futures: list[PrefectFuture[R]], timeout: float | None = None) -> DoneAndNotDoneFutures[R] ``` Wait for the futures in the given sequence to complete. **Args:** * `futures`: The sequence of Futures to wait upon. * `timeout`: The maximum number of seconds to wait. If None, then there is no limit on the wait time. **Returns:** * A named 2-tuple of sets. The first set, named 'done', contains the * futures that completed (is finished or cancelled) before the wait * completed. The second set, named 'not\_done', contains uncompleted * futures. Duplicate futures given to *futures* are removed and will be * returned only once. 
**Examples:**

```python
from time import sleep

from prefect import flow, task
from prefect.futures import wait

@task
def sleep_task(seconds):
    sleep(seconds)
    return 42

@flow
def my_flow():
    futures = sleep_task.map(range(10))
    done, not_done = wait(futures, timeout=5)
    print(f"Done: {len(done)}")
    print(f"Not Done: {len(not_done)}")
```

### `resolve_futures_to_states`

```python
resolve_futures_to_states(expr: PrefectFuture[R] | Any) -> PrefectFuture[R] | Any
```

Given a Python built-in collection, recursively find `PrefectFutures` and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete.

Unsupported object types will be returned without modification.

### `resolve_futures_to_results`

```python
resolve_futures_to_results(expr: PrefectFuture[R] | Any) -> Any
```

Given a Python built-in collection, recursively find `PrefectFutures` and build a new collection with the same structure with futures resolved to their final results. Resolving futures to their final result may wait for execution to complete.

Unsupported object types will be returned without modification.

## Classes

### `PrefectFuture`

Abstract base class for Prefect futures. A Prefect future is a handle to the asynchronous execution of a run. It provides methods to wait for the run to complete and to retrieve the result of the run.

**Methods:**

#### `add_done_callback`

```python
add_done_callback(self, fn: Callable[['PrefectFuture[R]'], None]) -> None
```

Add a callback to be run when the future completes or is cancelled.

**Args:**

* `fn`: A callable that will be called with this future as its only argument when the future completes or is cancelled.

#### `result`

```python
result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R
```

#### `state`

```python
state(self) -> State
```

The current state of the task run associated with this future

#### `task_run_id`

```python
task_run_id(self) -> uuid.UUID
```

The ID of the task run associated with this future

#### `wait`

```python
wait(self, timeout: float | None = None) -> None
```

### `PrefectTaskRunFuture`

A Prefect future that represents the eventual execution of a task run.

**Methods:**

#### `state`

```python
state(self) -> State
```

The current state of the task run associated with this future

#### `task_run_id`

```python
task_run_id(self) -> uuid.UUID
```

The ID of the task run associated with this future

### `PrefectWrappedFuture`

A Prefect future that wraps another future object.

**Methods:**

#### `add_done_callback`

```python
add_done_callback(self, fn: Callable[[PrefectFuture[R]], None]) -> None
```

Add a callback to be executed when the future completes.

#### `wrapped_future`

```python
wrapped_future(self) -> F
```

The underlying future object wrapped by this Prefect future

### `PrefectConcurrentFuture`

A Prefect future that wraps a concurrent.futures.Future. This future is used when the task run is submitted to a ThreadPoolExecutor.

**Methods:**

#### `result`

```python
result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R
```

#### `wait`

```python
wait(self, timeout: float | None = None) -> None
```

### `PrefectDistributedFuture`

Represents the result of a computation happening anywhere.

This class is typically used to interact with the result of a task run scheduled to run in a Prefect task worker but can be used to interact with any task run scheduled in Prefect's API.
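For context on how a `PrefectDistributedFuture` typically comes into existence, here is a minimal sketch. It assumes a Prefect 3 installation where `Task.delay` schedules a background task run and a task worker (for example, one started elsewhere with `prefect.task_worker.serve(add)`) executes it; neither of those helpers is documented in this section.

```python
from prefect import task

@task
def add(x: int, y: int) -> int:
    return x + y

if __name__ == "__main__":
    # `.delay()` schedules the task run through the Prefect API and returns a
    # PrefectDistributedFuture; a task worker running elsewhere picks the run
    # up, so the future is a handle to a computation happening "anywhere".
    future = add.delay(1, 2)
    future.wait(timeout=60)           # block until the run reaches a final state
    print(future.result(timeout=60))  # fetch the return value (may raise on failure)
```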
**Methods:** #### `add_done_callback` ```python add_done_callback(self, fn: Callable[[PrefectFuture[R]], None]) -> None ``` #### `result` ```python result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `result_async` ```python result_async(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `wait` ```python wait(self, timeout: float | None = None) -> None ``` #### `wait_async` ```python wait_async(self, timeout: float | None = None) -> None ``` ### `PrefectFlowRunFuture` A Prefect future that represents the eventual execution of a flow run. **Methods:** #### `add_done_callback` ```python add_done_callback(self, fn: Callable[[PrefectFuture[R]], None]) -> None ``` #### `aresult` ```python aresult(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `flow_run_id` ```python flow_run_id(self) -> uuid.UUID ``` The ID of the flow run associated with this future #### `result` ```python result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `state` ```python state(self) -> State ``` The current state of the flow run associated with this future #### `wait` ```python wait(self, timeout: float | None = None) -> None ``` #### `wait_async` ```python wait_async(self, timeout: float | None = None) -> None ``` ### `PrefectFutureList` A list of Prefect futures. This class provides methods to wait for all futures in the list to complete and to retrieve the results of all task runs. **Methods:** #### `result` ```python result(self: Self, timeout: float | None = None, raise_on_failure: bool = True) -> list[R] ``` Get the results of all task runs associated with the futures in the list. **Args:** * `timeout`: The maximum number of seconds to wait for all futures to complete. * `raise_on_failure`: If `True`, an exception will be raised if any task run fails. **Returns:** * A list of results of the task runs. **Raises:** * `TimeoutError`: If the timeout is reached before all futures complete. #### `wait` ```python wait(self, timeout: float | None = None) -> None ``` Wait for all futures in the list to complete. **Args:** * `timeout`: The maximum number of seconds to wait for all futures to complete. This method will not raise if the timeout is reached. ### `DoneAndNotDoneFutures` A named 2-tuple of sets. multiple inheritance supported in 3.11+, use typing\_extensions.NamedTuple # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-__init__ # `prefect.infrastructure` 2024-06-27: This surfaces an actionable error message for moved or removed objects in Prefect 3.0 upgrade. # base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-base # `prefect.infrastructure.base` 2024-06-27: This surfaces an actionable error message for moved or removed objects in Prefect 3.0 upgrade. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-__init__ # `prefect.infrastructure.provisioners` ## Functions ### `get_infrastructure_provisioner_for_work_pool_type` ```python get_infrastructure_provisioner_for_work_pool_type(work_pool_type: str) -> Type[Provisioner] ``` Retrieve an instance of the infrastructure provisioner for the given work pool type. 
**Args:** * `work_pool_type`: the work pool type **Returns:** * an instance of the infrastructure provisioner for the given work pool type **Raises:** * `ValueError`: if the work pool type is not supported ## Classes ### `Provisioner` **Methods:** #### `console` ```python console(self) -> rich.console.Console ``` #### `console` ```python console(self, value: rich.console.Console) -> None ``` #### `provision` ```python provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` # cloud_run Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-cloud_run # `prefect.infrastructure.provisioners.cloud_run` ## Classes ### `CloudRunPushProvisioner` **Methods:** #### `console` ```python console(self) -> Console ``` #### `console` ```python console(self, value: Console) -> None ``` #### `provision` ```python provision(self, work_pool_name: str, base_job_template: dict, client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` # coiled Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-coiled # `prefect.infrastructure.provisioners.coiled` ## Classes ### `CoiledPushProvisioner` A infrastructure provisioner for Coiled push work pools. **Methods:** #### `console` ```python console(self) -> Console ``` #### `console` ```python console(self, value: Console) -> None ``` #### `provision` ```python provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` Provisions resources necessary for a Coiled push work pool. **Args:** * `work_pool_name`: The name of the work pool to provision resources for * `base_job_template`: The base job template to update **Returns:** * A copy of the provided base job template with the provisioned resources # container_instance Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-container_instance # `prefect.infrastructure.provisioners.container_instance` This module defines the ContainerInstancePushProvisioner class, which is responsible for provisioning infrastructure using Azure Container Instances for Prefect work pools. The ContainerInstancePushProvisioner class provides methods for provisioning infrastructure and interacting with Azure Container Instances. Classes: AzureCLI: A class to handle Azure CLI commands. ContainerInstancePushProvisioner: A class for provisioning infrastructure using Azure Container Instances. ## Classes ### `AzureCLI` A class for executing Azure CLI commands and handling their output. **Args:** * `console`: A Rich console object for displaying messages. **Methods:** #### `run_command` ```python run_command(self, command: str, success_message: Optional[str] = None, failure_message: Optional[str] = None, ignore_if_exists: bool = False, return_json: bool = False) -> str | dict[str, Any] | None ``` Runs an Azure CLI command and processes the output. **Args:** * `command`: The Azure CLI command to execute. * `success_message`: Message to print on success. * `failure_message`: Message to print on failure. * `ignore_if_exists`: Whether to ignore errors if a resource already exists. * `return_json`: Whether to return the output as JSON. **Returns:** * A tuple with two elements: * str: Status, either 'created', 'exists', or 'error'. * str or dict or None: The command output or None if an error occurs (depends on return\_json). 
**Raises:** * `subprocess.CalledProcessError`: If the command execution fails. * `json.JSONDecodeError`: If output cannot be decoded as JSON when return\_json is True. ### `ContainerInstancePushProvisioner` A class responsible for provisioning Azure resources and setting up a push work pool. **Methods:** #### `console` ```python console(self) -> Console ``` #### `console` ```python console(self, value: Console) -> None ``` #### `provision` ```python provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` Orchestrates the provisioning of Azure resources and setup for the push work pool. **Args:** * `work_pool_name`: The name of the work pool. * `base_job_template`: The base template for job creation. * `client`: An instance of PrefectClient. If None, it will be injected. **Returns:** * Dict\[str, Any]: The updated job template with necessary references and configurations. **Raises:** * `RuntimeError`: If client injection fails or the Azure CLI command execution fails. #### `set_location` ```python set_location(self) -> None ``` Set the Azure resource deployment location to the default or 'eastus' on failure. **Raises:** * `RuntimeError`: If unable to execute the Azure CLI command. # ecs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-ecs # `prefect.infrastructure.provisioners.ecs` ## Functions ### `console_context` ```python console_context(value: Console) -> Generator[None, None, None] ``` ## Classes ### `IamPolicyResource` Represents an IAM policy resource for managing ECS tasks. **Args:** * `policy_name`: The name of the IAM policy. Defaults to "prefect-ecs-policy". **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, policy_document: dict[str, Any], advance: Callable[[], None]) -> str ``` Provisions an IAM policy. **Args:** * `advance`: A callback function to indicate progress. **Returns:** * The ARN (Amazon Resource Name) of the created IAM policy. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `IamUserResource` Represents an IAM user resource for managing ECS tasks. **Args:** * `user_name`: The desired name of the IAM user. **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. 
#### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, advance: Callable[[], None]) -> None ``` Provisions an IAM user. **Args:** * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `CredentialsBlockResource` **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, base_job_template: Dict[str, Any], advance: Callable[[], None], client: Optional['PrefectClient'] = None) ``` Provisions an AWS credentials block. Will generate new credentials if the block does not already exist. Updates the `aws_credentials` variable in the job template to reference the block. **Args:** * `base_job_template`: The base job template. * `advance`: A callback function to indicate progress. * `client`: A Prefect client to use for interacting with the Prefect API. #### `requires_provisioning` ```python requires_provisioning(self, client: Optional['PrefectClient'] = None) -> bool ``` ### `AuthenticationResource` **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions the authentication resources. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. #### `resources` ```python resources(self) -> list['ExecutionRoleResource | IamUserResource | IamPolicyResource | CredentialsBlockResource'] ``` ### `ClusterResource` **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. 
#### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions an ECS cluster. Will update the `cluster` variable in the job template to reference the cluster. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `VpcResource` **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions a VPC. Chooses a CIDR block to avoid conflicting with any existing VPCs. Will update the `vpc_id` variable in the job template to reference the VPC. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `ContainerRepositoryResource` **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python next_steps(self) -> list[str | Panel] ``` #### `provision` ```python provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions an ECR repository. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `ExecutionRoleResource` **Methods:** #### `get_planned_actions` ```python get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. 
#### `next_steps` ```python next_steps(self) -> list[str] ``` #### `provision` ```python provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> str ``` Provisions an IAM role. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `ElasticContainerServicePushProvisioner` An infrastructure provisioner for ECS push work pools. **Methods:** #### `console` ```python console(self) -> Console ``` #### `console` ```python console(self, value: Console) -> None ``` #### `is_boto3_installed` ```python is_boto3_installed() -> bool ``` Check if boto3 is installed. #### `provision` ```python provision(self, work_pool_name: str, base_job_template: dict[str, Any]) -> dict[str, Any] ``` Provisions the infrastructure for an ECS push work pool. **Args:** * `work_pool_name`: The name of the work pool to provision infrastructure for. * `base_job_template`: The base job template of the work pool to provision infrastructure for. **Returns:** * An updated copy base job template. # modal Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-modal # `prefect.infrastructure.provisioners.modal` ## Classes ### `ModalPushProvisioner` A infrastructure provisioner for Modal push work pools. **Methods:** #### `console` ```python console(self) -> Console ``` #### `console` ```python console(self, value: Console) -> None ``` #### `provision` ```python provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` Provisions resources necessary for a Modal push work pool. **Args:** * `work_pool_name`: The name of the work pool to provision resources for * `base_job_template`: The base job template to update **Returns:** * A copy of the provided base job template with the provisioned resources # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-input-__init__ # `prefect.input` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-input-actions # `prefect.input.actions` ## Functions ### `ensure_flow_run_id` ```python ensure_flow_run_id(flow_run_id: Optional[UUID] = None) -> UUID ``` ### `create_flow_run_input_from_model` ```python create_flow_run_input_from_model(key: str, model_instance: pydantic.BaseModel, flow_run_id: Optional[UUID] = None, sender: Optional[str] = None) ``` ### `create_flow_run_input` ```python create_flow_run_input(client: 'PrefectClient', key: str, value: Any, flow_run_id: Optional[UUID] = None, sender: Optional[str] = None) ``` Create a new flow run input. The given `value` will be serialized to JSON and stored as a flow run input value. **Args:** * `- key`: the flow run input key * `- value`: the flow run input value * `- flow_run_id`: the, optional, flow run ID. If not given will default to pulling the flow run ID from the current context. 
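To make the relationship between `create_flow_run_input` and the read/delete helpers documented just below concrete, here is a minimal sketch of a key/value round trip. It assumes these helpers can be awaited with an explicit `client` obtained from `prefect.client.orchestration.get_client` and the ID of an existing flow run; the exact calling convention is not spelled out in this reference, so treat this as an illustration.

```python
from uuid import UUID

from prefect.client.orchestration import get_client
from prefect.input.actions import (
    create_flow_run_input,
    delete_flow_run_input,
    read_flow_run_input,
)


async def round_trip(flow_run_id: UUID) -> None:
    async with get_client() as client:
        # Store a JSON-serializable value under a key on the flow run...
        await create_flow_run_input(
            client=client, key="approval", value={"approved": True}, flow_run_id=flow_run_id
        )
        # ...read it back...
        value = await read_flow_run_input(client=client, key="approval", flow_run_id=flow_run_id)
        print(value)
        # ...and clean it up.
        await delete_flow_run_input(client=client, key="approval", flow_run_id=flow_run_id)

# Call `await round_trip(...)` from an async context, passing an existing flow run's ID.
```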
### `filter_flow_run_input`

```python
filter_flow_run_input(client: 'PrefectClient', key_prefix: str, limit: int = 1, exclude_keys: Optional[Set[str]] = None, flow_run_id: Optional[UUID] = None)
```

### `read_flow_run_input`

```python
read_flow_run_input(client: 'PrefectClient', key: str, flow_run_id: Optional[UUID] = None) -> Any
```

Read a flow run input.

**Args:**

* `- key`: the flow run input key
* `- flow_run_id`: the flow run ID

### `delete_flow_run_input`

```python
delete_flow_run_input(client: 'PrefectClient', key: str, flow_run_id: Optional[UUID] = None)
```

Delete a flow run input.

**Args:**

* `- flow_run_id`: the flow run ID
* `- key`: the flow run input key

# run_input

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-input-run_input

# `prefect.input.run_input`

This module contains functions that allow sending type-checked `RunInput` data to flows at runtime. Flows can send back responses, establishing two-way channels with senders. These functions are particularly useful for systems that require ongoing data transfer, real-time interaction, and efficient data handling. The module is designed to facilitate dynamic communication within distributed or microservices-oriented systems, making it ideal for scenarios requiring continuous data synchronization and processing.

The following is an example of two flows. One sends a random number to the other and waits for a response. The other receives the number, squares it, and sends the result back. The sender flow then prints the result.

Sender flow:

```python
import random
from uuid import UUID

from prefect import flow
from prefect.logging import get_run_logger
from prefect.input import RunInput


class NumberData(RunInput):
    number: int


@flow
async def sender_flow(receiver_flow_run_id: UUID):
    logger = get_run_logger()

    the_number = random.randint(1, 100)

    await NumberData(number=the_number).send_to(receiver_flow_run_id)

    receiver = NumberData.receive(flow_run_id=receiver_flow_run_id)
    squared = await receiver.next()

    logger.info(f"{the_number} squared is {squared.number}")
```

Receiver flow:

```python
import random
from uuid import UUID

from prefect import flow
from prefect.logging import get_run_logger
from prefect.input import RunInput


class NumberData(RunInput):
    number: int


@flow
async def receiver_flow():
    async for data in NumberData.receive():
        squared = data.number ** 2
        data.respond(NumberData(number=squared))
```

## Functions

### `keyset_from_paused_state`

```python
keyset_from_paused_state(state: 'State') -> Keyset
```

Get the keyset for the given Paused state.

**Args:**

* `- state`: the state to get the keyset for

### `keyset_from_base_key`

```python
keyset_from_base_key(base_key: str) -> Keyset
```

Get the keyset for the given base key.

**Args:**

* `- base_key`: the base key to get the keyset for

**Returns:**

* Dict\[str, str]: the keyset

### `run_input_subclass_from_type`

```python
run_input_subclass_from_type(_type: Union[Type[R], Type[T], pydantic.BaseModel]) -> Union[Type[AutomaticRunInput[T]], Type[R]]
```

Create a new `RunInput` subclass from the given type.
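The behavior of `run_input_subclass_from_type` can be illustrated with a short sketch. The names of the generated subclasses are an implementation detail, so they are only printed here rather than asserted, and the model name is illustrative.

```python
import pydantic

from prefect.input.run_input import (
    AutomaticRunInput,
    RunInput,
    run_input_subclass_from_type,
)


class ShirtOrder(pydantic.BaseModel):
    size: str
    color: str


# A plain builtin type is wrapped in an AutomaticRunInput subclass...
AutoInt = run_input_subclass_from_type(int)
print(AutoInt.__name__, issubclass(AutoInt, AutomaticRunInput))

# ...while a pydantic model is converted into a RunInput subclass with the
# same fields, so it can be sent, received, and validated like any RunInput.
OrderInput = run_input_subclass_from_type(ShirtOrder)
print(OrderInput.__name__, issubclass(OrderInput, RunInput))
```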
### `send_input` ```python send_input(run_input: Any, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` ### `receive_input` ```python receive_input(input_type: Union[Type[R], Type[T], pydantic.BaseModel], timeout: Optional[float] = 3600, poll_interval: float = 10, raise_timeout_error: bool = False, exclude_keys: Optional[Set[str]] = None, key_prefix: Optional[str] = None, flow_run_id: Optional[UUID] = None, with_metadata: bool = False) -> Union[GetAutomaticInputHandler[T], GetInputHandler[R]] ``` ## Classes ### `RunInputMetadata` ### `BaseRunInput` **Methods:** #### `keyset_from_type` ```python keyset_from_type(cls) -> Keyset ``` #### `load` ```python load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self ``` Load the run input response from the given key. **Args:** * `- keyset`: the keyset to load the input for * `- flow_run_id`: the flow run ID to load the input for #### `load_from_flow_run_input` ```python load_from_flow_run_input(cls, flow_run_input: 'FlowRunInput') -> Self ``` Load the run input from a FlowRunInput object. **Args:** * `- flow_run_input`: the flow run input to load the input for #### `metadata` ```python metadata(self) -> RunInputMetadata ``` #### `respond` ```python respond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` #### `save` ```python save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) ``` Save the run input response to the given key. **Args:** * `- keyset`: the keyset to save the input for * `- flow_run_id`: the flow run ID to save the input for #### `send_to` ```python send_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` #### `with_initial_data` ```python with_initial_data(cls: Type[R], description: Optional[str] = None, **kwargs: Any) -> Type[R] ``` Create a new `RunInput` subclass with the given initial data as field defaults. **Args:** * `- description`: a description to show when resuming a flow run that requires input * `- kwargs`: the initial data to populate the subclass ### `RunInput` **Methods:** #### `keyset_from_type` ```python keyset_from_type(cls) -> Keyset ``` #### `load` ```python load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self ``` Load the run input response from the given key. **Args:** * `- keyset`: the keyset to load the input for * `- flow_run_id`: the flow run ID to load the input for #### `load_from_flow_run_input` ```python load_from_flow_run_input(cls, flow_run_input: 'FlowRunInput') -> Self ``` Load the run input from a FlowRunInput object. **Args:** * `- flow_run_input`: the flow run input to load the input for #### `metadata` ```python metadata(self) -> RunInputMetadata ``` #### `receive` ```python receive(cls, timeout: Optional[float] = 3600, poll_interval: float = 10, raise_timeout_error: bool = False, exclude_keys: Optional[Set[str]] = None, key_prefix: Optional[str] = None, flow_run_id: Optional[UUID] = None) -> GetInputHandler[Self] ``` #### `respond` ```python respond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` #### `save` ```python save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) ``` Save the run input response to the given key. 
**Args:** * `- keyset`: the keyset to save the input for * `- flow_run_id`: the flow run ID to save the input for #### `send_to` ```python send_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` #### `subclass_from_base_model_type` ```python subclass_from_base_model_type(cls, model_cls: Type[pydantic.BaseModel]) -> Type['RunInput'] ``` Create a new `RunInput` subclass from the given `pydantic.BaseModel` subclass. **Args:** * `- model_cls`: the class from which to create the new `RunInput` subclass #### `with_initial_data` ```python with_initial_data(cls: Type[R], description: Optional[str] = None, **kwargs: Any) -> Type[R] ``` Create a new `RunInput` subclass with the given initial data as field defaults. **Args:** * `- description`: a description to show when resuming a flow run that requires input * `- kwargs`: the initial data to populate the subclass ### `AutomaticRunInput` **Methods:** #### `keyset_from_type` ```python keyset_from_type(cls) -> Keyset ``` #### `load` ```python load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self ``` Load the run input response from the given key. **Args:** * `- keyset`: the keyset to load the input for * `- flow_run_id`: the flow run ID to load the input for #### `load` ```python load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self ``` Load the run input response from the given key. **Args:** * `- keyset`: the keyset to load the input for * `- flow_run_id`: the flow run ID to load the input for #### `load_from_flow_run_input` ```python load_from_flow_run_input(cls, flow_run_input: 'FlowRunInput') -> Self ``` Load the run input from a FlowRunInput object. **Args:** * `- flow_run_input`: the flow run input to load the input for #### `metadata` ```python metadata(self) -> RunInputMetadata ``` #### `receive` ```python receive(cls, timeout: Optional[float] = 3600, poll_interval: float = 10, raise_timeout_error: bool = False, exclude_keys: Optional[Set[str]] = None, key_prefix: Optional[str] = None, flow_run_id: Optional[UUID] = None, with_metadata: bool = False) -> GetAutomaticInputHandler[T] ``` #### `respond` ```python respond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` #### `save` ```python save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) ``` Save the run input response to the given key. **Args:** * `- keyset`: the keyset to save the input for * `- flow_run_id`: the flow run ID to save the input for #### `send_to` ```python send_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) ``` #### `subclass_from_type` ```python subclass_from_type(cls, _type: Type[T]) -> Type['AutomaticRunInput[T]'] ``` Create a new `AutomaticRunInput` subclass from the given type. This method uses the type's name as a key prefix to identify related flow run inputs. This helps in ensuring that values saved under a type (like List\[int]) are retrievable under the generic type name (like "list"). #### `with_initial_data` ```python with_initial_data(cls: Type[R], description: Optional[str] = None, **kwargs: Any) -> Type[R] ``` Create a new `RunInput` subclass with the given initial data as field defaults. 
**Args:** * `- description`: a description to show when resuming a flow run that requires input * `- kwargs`: the initial data to populate the subclass ### `GetInputHandler` **Methods:** #### `filter_for_inputs` ```python filter_for_inputs(self) -> list['FlowRunInput'] ``` #### `next` ```python next(self) -> R ``` #### `to_instance` ```python to_instance(self, flow_run_input: 'FlowRunInput') -> R ``` ### `GetAutomaticInputHandler` **Methods:** #### `filter_for_inputs` ```python filter_for_inputs(self) -> list['FlowRunInput'] ``` #### `next` ```python next(self) -> Union[T, AutomaticRunInput[T]] ``` #### `to_instance` ```python to_instance(self, flow_run_input: 'FlowRunInput') -> Union[T, AutomaticRunInput[T]] ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-locking-__init__ # `prefect.locking` *This module is empty or contains only private/internal implementations.* # filesystem Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-locking-filesystem # `prefect.locking.filesystem` ## Classes ### `FileSystemLockManager` A lock manager that implements locking using local files. **Methods:** #### `aacquire_lock` ```python aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` #### `aacquire_lock` ```python aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` #### `acquire_lock` ```python acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `await_for_lock` ```python await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` #### `await_for_lock` ```python await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. 
**Returns:** * True if the lock becomes free within the timeout; False otherwise. #### `is_lock_holder` ```python is_lock_holder(self, key: str, holder: str) -> bool ``` #### `is_lock_holder` ```python is_lock_holder(self, key: str, holder: str) -> bool ``` Check if the current holder is the lock holder for the transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. **Returns:** * True if the current holder is the lock holder; False otherwise. #### `is_locked` ```python is_locked(self, key: str, use_cache: bool = False) -> bool ``` #### `is_locked` ```python is_locked(self, key: str) -> bool ``` Simple check to see if the corresponding record is currently locked. **Args:** * `key`: Unique identifier for the transaction record. **Returns:** * True is the record is locked; False otherwise. #### `release_lock` ```python release_lock(self, key: str, holder: str) -> None ``` #### `release_lock` ```python release_lock(self, key: str, holder: str) -> None ``` Releases the lock on the corresponding transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock. #### `wait_for_lock` ```python wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` #### `wait_for_lock` ```python wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. # memory Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-locking-memory # `prefect.locking.memory` ## Classes ### `MemoryLockManager` A lock manager that stores lock information in memory. Note: because this lock manager stores data in memory, it is not suitable for use in a distributed environment or across different processes. **Methods:** #### `aacquire_lock` ```python aacquire_lock(self, key: str, holder: str, acquire_timeout: float | None = None, hold_timeout: float | None = None) -> bool ``` #### `aacquire_lock` ```python aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python acquire_lock(self, key: str, holder: str, acquire_timeout: float | None = None, hold_timeout: float | None = None) -> bool ``` #### `acquire_lock` ```python acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. 
Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `await_for_lock` ```python await_for_lock(self, key: str, timeout: float | None = None) -> bool ``` #### `await_for_lock` ```python await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. #### `is_lock_holder` ```python is_lock_holder(self, key: str, holder: str) -> bool ``` #### `is_lock_holder` ```python is_lock_holder(self, key: str, holder: str) -> bool ``` Check if the current holder is the lock holder for the transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. **Returns:** * True if the current holder is the lock holder; False otherwise. #### `is_locked` ```python is_locked(self, key: str) -> bool ``` #### `is_locked` ```python is_locked(self, key: str) -> bool ``` Simple check to see if the corresponding record is currently locked. **Args:** * `key`: Unique identifier for the transaction record. **Returns:** * True is the record is locked; False otherwise. #### `release_lock` ```python release_lock(self, key: str, holder: str) -> None ``` #### `release_lock` ```python release_lock(self, key: str, holder: str) -> None ``` Releases the lock on the corresponding transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock. #### `wait_for_lock` ```python wait_for_lock(self, key: str, timeout: float | None = None) -> bool ``` #### `wait_for_lock` ```python wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. # protocol Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-locking-protocol # `prefect.locking.protocol` ## Classes ### `LockManager` **Methods:** #### `aacquire_lock` ```python aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. 
Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `await_for_lock` ```python await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. #### `is_lock_holder` ```python is_lock_holder(self, key: str, holder: str) -> bool ``` Check if the current holder is the lock holder for the transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. **Returns:** * True if the current holder is the lock holder; False otherwise. #### `is_locked` ```python is_locked(self, key: str) -> bool ``` Simple check to see if the corresponding record is currently locked. **Args:** * `key`: Unique identifier for the transaction record. **Returns:** * True is the record is locked; False otherwise. #### `release_lock` ```python release_lock(self, key: str, holder: str) -> None ``` Releases the lock on the corresponding transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock. #### `wait_for_lock` ```python wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-__init__ # `prefect.logging` *This module is empty or contains only private/internal implementations.* # clients Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-clients # `prefect.logging.clients` ## Functions ### `http_to_ws` ```python http_to_ws(url: str) -> str ``` ### `logs_out_socket_from_api_url` ```python logs_out_socket_from_api_url(url: str) -> str ``` ### `get_logs_subscriber` ```python get_logs_subscriber(filter: Optional['LogFilter'] = None, reconnection_attempts: int = 10) -> 'PrefectLogsSubscriber' ``` Get a logs subscriber based on the current Prefect configuration. 
Similar to get\_events\_subscriber, this automatically detects whether you're using Prefect Cloud or OSS and returns the appropriate subscriber. ## Classes ### `PrefectLogsSubscriber` Subscribes to a Prefect logs stream, yielding logs as they occur. Example: from prefect.logging.clients import PrefectLogsSubscriber from prefect.client.schemas.filters import LogFilter, LogFilterLevel import logging filter = LogFilter(level=LogFilterLevel(ge\_=logging.INFO)) async with PrefectLogsSubscriber(filter=filter) as subscriber: async for log in subscriber: print(log.timestamp, log.level, log.message) **Methods:** #### `client_name` ```python client_name(self) -> str ``` ### `PrefectCloudLogsSubscriber` Logs subscriber for Prefect Cloud **Methods:** #### `client_name` ```python client_name(self) -> str ``` # configuration Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-configuration # `prefect.logging.configuration` ## Functions ### `load_logging_config` ```python load_logging_config(path: Path) -> dict[str, Any] ``` Loads logging configuration from a path allowing override from the environment ### `setup_logging` ```python setup_logging(incremental: bool | None = None) -> dict[str, Any] ``` Sets up logging. Returns the config used. # filters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-filters # `prefect.logging.filters` ## Functions ### `redact_substr` ```python redact_substr(obj: Any, substr: str) -> Any ``` Redact a string from a potentially nested object. **Args:** * `obj`: The object to redact the string from * `substr`: The string to redact. **Returns:** * The object with the API key redacted. ## Classes ### `ObfuscateApiKeyFilter` A logging filter that obfuscates any string that matches the obfuscate\_string function. **Methods:** #### `filter` ```python filter(self, record: logging.LogRecord) -> bool ``` # formatters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-formatters # `prefect.logging.formatters` ## Functions ### `format_exception_info` ```python format_exception_info(exc_info: ExceptionInfoType) -> dict[str, Any] ``` ## Classes ### `JsonFormatter` Formats log records as a JSON string. The format may be specified as "pretty" to format the JSON with indents and newlines. **Methods:** #### `format` ```python format(self, record: logging.LogRecord) -> str ``` ### `PrefectFormatter` **Methods:** #### `formatMessage` ```python formatMessage(self, record: logging.LogRecord) -> str ``` # handlers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-handlers # `prefect.logging.handlers` ## Classes ### `APILogWorker` **Methods:** #### `instance` ```python instance(cls: Type[Self], *args: Any) -> Self ``` #### `max_batch_size` ```python max_batch_size(self) -> int ``` #### `min_interval` ```python min_interval(self) -> float | None ``` ### `APILogHandler` A logging handler that sends logs to the Prefect API. Sends log records to the `APILogWorker` which manages sending batches of logs in the background. **Methods:** #### `aflush` ```python aflush(cls) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. #### `emit` ```python emit(self, record: logging.LogRecord) -> None ``` Send a log to the `APILogWorker` #### `flush` ```python flush(self) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. Use `aflush` from async contexts instead. 
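The flush behavior described above matters most in short-lived scripts, where the process can exit before the background `APILogWorker` has drained its queue. A minimal sketch, assuming `aflush` can be awaited as the classmethod shown in its signature; this is an illustration rather than a documented recipe.

```python
import asyncio

from prefect import flow
from prefect.logging import get_run_logger
from prefect.logging.handlers import APILogHandler


@flow
async def noisy_flow():
    get_run_logger().info("hello from the API log handler")


async def main():
    await noisy_flow()
    # Logs are sent in batches by the background APILogWorker; in a short-lived
    # script, drain the queue before the process exits so nothing is dropped.
    await APILogHandler.aflush()


if __name__ == "__main__":
    asyncio.run(main())
```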
#### `handleError` ```python handleError(self, record: logging.LogRecord) -> None ``` #### `prepare` ```python prepare(self, record: logging.LogRecord) -> Dict[str, Any] ``` Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize. This infers the linked flow or task run from the log record or the current run context. If a flow run id cannot be found, the log will be dropped. Logs exceeding the maximum size will be dropped. ### `WorkerAPILogHandler` **Methods:** #### `aflush` ```python aflush(cls) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. #### `emit` ```python emit(self, record: logging.LogRecord) -> None ``` #### `emit` ```python emit(self, record: logging.LogRecord) -> None ``` Send a log to the `APILogWorker` #### `flush` ```python flush(self) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. Use `aflush` from async contexts instead. #### `handleError` ```python handleError(self, record: logging.LogRecord) -> None ``` #### `prepare` ```python prepare(self, record: logging.LogRecord) -> Dict[str, Any] ``` Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize. This will add in the worker id to the log. Logs exceeding the maximum size will be dropped. #### `prepare` ```python prepare(self, record: logging.LogRecord) -> Dict[str, Any] ``` Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize. This infers the linked flow or task run from the log record or the current run context. If a flow run id cannot be found, the log will be dropped. Logs exceeding the maximum size will be dropped. ### `PrefectConsoleHandler` **Methods:** #### `emit` ```python emit(self, record: logging.LogRecord) -> None ``` # highlighters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-highlighters # `prefect.logging.highlighters` ## Classes ### `LevelHighlighter` Apply style to log levels. ### `UrlHighlighter` Apply style to urls. ### `NameHighlighter` Apply style to names. ### `StateHighlighter` Apply style to states. ### `PrefectConsoleHighlighter` Applies style from multiple highlighters. # loggers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-logging-loggers # `prefect.logging.loggers` ## Functions ### `get_logger` ```python get_logger(name: str | None = None) -> logging.Logger ``` Get a `prefect` logger. These loggers are intended for internal use within the `prefect` package. See `get_run_logger` for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the `APILogHandler`. ### `get_run_logger` ```python get_run_logger(context: Optional['RunContext'] = None, **kwargs: Any) -> Union[logging.Logger, LoggingAdapter] ``` Get a Prefect logger for the current task run or flow run. The logger will be named either `prefect.task_runs` or `prefect.flow_runs`. Contextual data about the run will be attached to the log records. These loggers are connected to the `APILogHandler` by default to send log records to the API. **Args:** * `context`: A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. 
* `**kwargs`: Additional keyword arguments will be attached to the log records in addition to the run metadata **Raises:** * `MissingContextError`: If no context can be found ### `flow_run_logger` ```python flow_run_logger(flow_run: 'FlowRun', flow: Optional['Flow[Any, Any]'] = None, **kwargs: str) -> PrefectLogAdapter ``` Create a flow run logger with the run's metadata attached. Additional keyword arguments can be provided to attach custom data to the log records. If the flow run context is available, see `get_run_logger` instead. ### `task_run_logger` ```python task_run_logger(task_run: 'TaskRun', task: Optional['Task[Any, Any]'] = None, flow_run: Optional['FlowRun'] = None, flow: Optional['Flow[Any, Any]'] = None, **kwargs: Any) -> LoggingAdapter ``` Create a task run logger with the run's metadata attached. Additional keyword arguments can be provided to attach custom data to the log records. If the task run context is available, see `get_run_logger` instead. If only the flow run context is available, it will be used for default values of `flow_run` and `flow`. ### `get_worker_logger` ```python get_worker_logger(worker: 'BaseWorker[Any, Any, Any]', name: Optional[str] = None) -> logging.Logger | LoggingAdapter ``` Create a worker logger with the worker's metadata attached. If the worker has a backend\_id, it will be attached to the log records. If the worker does not have a backend\_id attribute, a basic logger will be returned. ### `disable_logger` ```python disable_logger(name: str) ``` Gets a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state. ### `disable_run_logger` ```python disable_run_logger() ``` Gets both `prefect.flow_run` and `prefect.task_run` and disables them within the context manager. Upon exiting the context manager, both loggers are returned to their original state. ### `print_as_log` ```python print_as_log(*args: Any, **kwargs: Any) -> None ``` A patch for `print` to send printed messages to the Prefect run logger. If no run is active, `print` will behave as if it were not patched. If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will not be forwarded to the Prefect logger either. ### `patch_print` ```python patch_print() ``` Patches the Python builtin `print` method to use `print_as_log`. ## Classes ### `PrefectLogAdapter` Adapter that ensures extra kwargs are passed through correctly; without this the `extra` fields set on the adapter would overshadow any provided on a log-by-log basis. See [https://bugs.python.org/issue32732](https://bugs.python.org/issue32732); the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround. **Methods:** #### `getChild` ```python getChild(self, suffix: str, extra: dict[str, Any] | None = None) -> 'PrefectLogAdapter' ``` #### `process` ```python process(self, msg: str, kwargs: MutableMapping[str, Any]) -> tuple[str, MutableMapping[str, Any]] ``` ### `LogEavesdropper` A context manager that collects logs for the duration of the context. Example:

```python
import logging
from prefect.logging import LogEavesdropper

with LogEavesdropper("my_logger") as eavesdropper:
    logging.getLogger("my_logger").info("Hello, world!")
    logging.getLogger("my_logger.child_module").info("Another one!")

print(eavesdropper.text())
# Outputs: "Hello, world! Another one!"
```
**Methods:** #### `emit` ```python emit(self, record: LogRecord) -> None ``` The logging.Handler implementation, not intended to be called directly. #### `text` ```python text(self) -> str ``` Return the collected logs as a single newline-delimited string. # main Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-main # `prefect.main` *This module is empty or contains only private/internal implementations.* # plugins Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-plugins # `prefect.plugins` Utilities for loading plugins that extend Prefect's functionality. Plugins are detected by entry point definitions in package setup files. Currently supported entrypoints: * prefect.collections: Identifies this package as a Prefect collection that should be imported when Prefect is imported. ## Functions ### `safe_load_entrypoints` ```python safe_load_entrypoints(entrypoints: EntryPoints) -> dict[str, Union[Exception, Any]] ``` Load entry points for a group capturing any exceptions that occur. ### `load_prefect_collections` ```python load_prefect_collections() -> dict[str, Union[ModuleType, Exception]] ``` Load all Prefect collections that define an entrypoint in the group `prefect.collections`. # results Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-results # `prefect.results` ## Functions ### `DEFAULT_STORAGE_KEY_FN` ```python DEFAULT_STORAGE_KEY_FN() -> str ``` ### `aget_default_result_storage` ```python aget_default_result_storage() -> WritableFileSystem ``` Generate a default file system for result storage. ### `get_default_result_storage` ```python get_default_result_storage() -> WritableFileSystem ``` Generate a default file system for result storage. ### `aresolve_result_storage` ```python aresolve_result_storage(result_storage: ResultStorage | UUID | Path) -> WritableFileSystem ``` Resolve one of the valid `ResultStorage` input types into a saved block document id and an instance of the block. ### `resolve_result_storage` ```python resolve_result_storage(result_storage: ResultStorage | UUID | Path) -> WritableFileSystem ``` Resolve one of the valid `ResultStorage` input types into a saved block document id and an instance of the block. ### `resolve_serializer` ```python resolve_serializer(serializer: ResultSerializer) -> Serializer ``` Resolve one of the valid `ResultSerializer` input types into a serializer instance. ### `get_or_create_default_task_scheduling_storage` ```python get_or_create_default_task_scheduling_storage() -> ResultStorage ``` Generate a default file system for background task parameter/result storage. ### `get_default_result_serializer` ```python get_default_result_serializer() -> Serializer ``` Generate a default serializer for result storage. ### `get_default_persist_setting` ```python get_default_persist_setting() -> bool ``` Return the default option for result persistence. ### `get_default_persist_setting_for_tasks` ```python get_default_persist_setting_for_tasks() -> bool ``` Return the default option for result persistence for tasks. ### `should_persist_result` ```python should_persist_result() -> bool ``` Return the default option for result persistence determined by the current run context. If there is no current run context, the value of `results.persist_by_default` on the current settings will be returned.
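As a quick illustration of these helpers, the sketch below resolves a serializer from its type string, checks the effective persistence default, and inspects the default storage; it assumes an environment with default settings:

```python
from prefect.results import (
    get_default_result_storage,
    resolve_serializer,
    should_persist_result,
)

# Resolve the "json" type string into a Serializer instance
serializer = resolve_serializer("json")

# Outside of a run context this falls back to the
# `results.persist_by_default` setting
print(should_persist_result())

# The default writable filesystem used for result storage
storage = get_default_result_storage()
print(type(storage).__name__)
```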
### `default_cache` ```python default_cache() -> LRUCache[str, 'ResultRecord[Any]'] ``` ### `result_storage_discriminator` ```python result_storage_discriminator(x: Any) -> str ``` ### `get_result_store` ```python get_result_store() -> ResultStore ``` Get the current result store. ## Classes ### `ResultStore` Manages the storage and retrieval of results. **Methods:** #### `aacquire_lock` ```python aacquire_lock(self, key: str, holder: str | None = None, timeout: float | None = None) -> bool ``` Acquire a lock for a result record. **Args:** * `key`: The key to acquire the lock for. * `holder`: The holder of the lock. If not provided, a default holder based on the current host, process, and thread will be used. * `timeout`: The timeout for the lock. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python acquire_lock(self, key: str, holder: str | None = None, timeout: float | None = None) -> bool ``` Acquire a lock for a result record. **Args:** * `key`: The key to acquire the lock for. * `holder`: The holder of the lock. If not provided, a default holder based on the current host, process, and thread will be used. * `timeout`: The timeout for the lock. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `aexists` ```python aexists(self, key: str) -> bool ``` Check if a result record exists in storage. **Args:** * `key`: The key to check for the existence of a result record. **Returns:** * True if the result record exists, False otherwise. #### `apersist_result_record` ```python apersist_result_record(self, result_record: 'ResultRecord[Any]', holder: str | None = None) -> None ``` Persist a result record to storage. **Args:** * `result_record`: The result record to persist. #### `aread` ```python aread(self, key: str, holder: str | None = None) -> 'ResultRecord[Any]' ``` Read a result record from storage. **Args:** * `key`: The key to read the result record from. * `holder`: The holder of the lock if a lock was set on the record. **Returns:** * A result record. #### `await_for_lock` ```python await_for_lock(self, key: str, timeout: float | None = None) -> bool ``` Wait for the corresponding transaction record to become free. #### `awrite` ```python awrite(self, obj: Any, key: str | None = None, expiration: DateTime | None = None, holder: str | None = None) -> None ``` Write a result to storage. **Args:** * `key`: The key to write the result record to. * `obj`: The object to write to storage. * `expiration`: The expiration time for the result record. * `holder`: The holder of the lock if a lock was set on the record. #### `create_result_record` ```python create_result_record(self, obj: Any, key: str | None = None, expiration: DateTime | None = None) -> 'ResultRecord[Any]' ``` Create a result record. **Args:** * `key`: The key to create the result record for. * `obj`: The object to create the result record for. * `expiration`: The expiration time for the result record. #### `exists` ```python exists(self, key: str) -> bool ``` Check if a result record exists in storage. **Args:** * `key`: The key to check for the existence of a result record. **Returns:** * True if the result record exists, False otherwise. #### `generate_default_holder` ```python generate_default_holder() -> str ``` Generate a default holder string using hostname, PID, and thread ID. **Returns:** * A unique identifier string. 
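To make the read/write surface of `ResultStore` concrete, here is a small sketch of storing and retrieving an object by key with the current store; it assumes default local result storage is available, and the `.result` attribute access on the returned record is illustrative:

```python
from prefect.results import get_result_store

store = get_result_store()

# Write an object under an explicit key, then read it back
store.write({"answer": 42}, key="example-key")

if store.exists("example-key"):
    record = store.read("example-key")
    print(record.result)  # assumed to hold the deserialized object
```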
#### `is_lock_holder` ```python is_lock_holder(self, key: str, holder: str | None = None) -> bool ``` Check if the current holder is the lock holder for the result record. **Args:** * `key`: The key to check the lock for. * `holder`: The holder of the lock. If not provided, a default holder based on the current host, process, and thread will be used. **Returns:** * True if the current holder is the lock holder; False otherwise. #### `is_locked` ```python is_locked(self, key: str) -> bool ``` Check if a result record is locked. #### `persist_result_record` ```python persist_result_record(self, result_record: 'ResultRecord[Any]', holder: str | None = None) -> None ``` Persist a result record to storage. **Args:** * `result_record`: The result record to persist. #### `read` ```python read(self, key: str, holder: str | None = None) -> 'ResultRecord[Any]' ``` Read a result record from storage. **Args:** * `key`: The key to read the result record from. * `holder`: The holder of the lock if a lock was set on the record. **Returns:** * A result record. #### `read_parameters` ```python read_parameters(self, identifier: UUID) -> dict[str, Any] ``` #### `release_lock` ```python release_lock(self, key: str, holder: str | None = None) -> None ``` Release a lock for a result record. **Args:** * `key`: The key to release the lock for. * `holder`: The holder of the lock. Must match the holder that acquired the lock. If not provided, a default holder based on the current host, process, and thread will be used. #### `result_storage_block_id` ```python result_storage_block_id(self) -> UUID | None ``` #### `store_parameters` ```python store_parameters(self, identifier: UUID, parameters: dict[str, Any]) ``` #### `supports_isolation_level` ```python supports_isolation_level(self, level: 'IsolationLevel') -> bool ``` Check if the result store supports a given isolation level. **Args:** * `level`: The isolation level to check. **Returns:** * True if the isolation level is supported, False otherwise. #### `update_for_flow` ```python update_for_flow(self, flow: 'Flow[..., Any]') -> Self ``` Create a new result store for a flow with updated settings. **Args:** * `flow`: The flow to update the result store for. **Returns:** * An updated result store. #### `update_for_task` ```python update_for_task(self: Self, task: 'Task[P, R]') -> Self ``` Create a new result store for a task. **Args:** * `task`: The task to update the result store for. **Returns:** * An updated result store. #### `wait_for_lock` ```python wait_for_lock(self, key: str, timeout: float | None = None) -> bool ``` Wait for the corresponding transaction record to become free. #### `write` ```python write(self, obj: Any, key: str | None = None, expiration: DateTime | None = None, holder: str | None = None) -> None ``` Write a result to storage. Handles the creation of a `ResultRecord` and its serialization to storage. **Args:** * `key`: The key to write the result record to. * `obj`: The object to write to storage. * `expiration`: The expiration time for the result record. * `holder`: The holder of the lock if a lock was set on the record. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runner-__init__ # `prefect.runner` *This module is empty or contains only private/internal implementations.* # runner Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runner-runner # `prefect.runner.runner` Runners are responsible for managing the execution of all deployments. 
When a deployment is created with either `flow.serve` or the `serve` utility, the runner will also poll for scheduled runs. Example:

```python
import time
from prefect import flow, serve


@flow
def slow_flow(sleep: int = 60):
    "Sleepy flow - sleeps the provided amount of time (in seconds)."
    time.sleep(sleep)


@flow
def fast_flow():
    "Fastest flow this side of the Mississippi."
    return


if __name__ == "__main__":
    slow_deploy = slow_flow.to_deployment(name="sleeper", interval=45)
    fast_deploy = fast_flow.to_deployment(name="fast")
    # serve generates a Runner instance
    serve(slow_deploy, fast_deploy)
```

## Classes ### `ProcessMapEntry` ### `Runner` **Methods:** #### `add_deployment` ```python add_deployment(self, deployment: 'RunnerDeployment') -> UUID ``` Registers the deployment with the Prefect API and will monitor for work once the runner is started. **Args:** * `deployment`: A deployment for the runner to register. #### `add_flow` ```python add_flow(self, flow: Flow[Any, Any], name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH) -> UUID ``` Provides a flow to the runner to be run based on the provided configuration. Will create a deployment for the provided flow and register the deployment with the runner. **Args:** * `flow`: A flow for the runner to run. * `name`: The name to give the created deployment. Will default to the name of the runner. * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not to set the created deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this flow. Used to define multiple schedules or additional scheduling options like `timezone`. * `concurrency_limit`: The maximum number of concurrent runs of this flow to allow. * `triggers`: A list of triggers that should kick off a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
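As a compact illustration of `add_flow`, the sketch below registers a flow on an interval schedule with a per-deployment concurrency limit; the flow, deployment, and runner names are only placeholders:

```python
import asyncio
from datetime import timedelta

from prefect import flow, Runner


@flow
def say_hello(name: str = "world"):
    print(f"hello {name}")


if __name__ == "__main__":
    runner = Runner(name="example-runner")
    # Create and register a deployment for the flow with the runner
    runner.add_flow(
        say_hello,
        name="hello-every-five-minutes",
        interval=timedelta(minutes=5),
        concurrency_limit=1,
        tags=["example"],
    )
    asyncio.run(runner.start())
```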
#### `cancel_all` ```python cancel_all(self) -> None ``` #### `execute_bundle` ```python execute_bundle(self, bundle: SerializedBundle, cwd: Path | str | None = None, env: dict[str, str | None] | None = None) -> None ``` Executes a bundle in a subprocess. #### `execute_flow_run` ```python execute_flow_run(self, flow_run_id: UUID, entrypoint: str | None = None, command: str | None = None, cwd: Path | str | None = None, env: dict[str, str | None] | None = None, task_status: anyio.abc.TaskStatus[int] = anyio.TASK_STATUS_IGNORED, stream_output: bool = True) -> anyio.abc.Process | None ``` Executes a single flow run with the given ID. Execution will wait to monitor for cancellation requests. Exits once the flow run process has exited. **Returns:** * The flow run process. #### `execute_in_background` ```python execute_in_background(self, func: Callable[..., Any], *args: Any, **kwargs: Any) -> 'concurrent.futures.Future[Any]' ``` Executes a function in the background. #### `handle_sigterm` ```python handle_sigterm(self, *args: Any, **kwargs: Any) -> None ``` Gracefully shuts down the runner when a SIGTERM is received. #### `has_slots_available` ```python has_slots_available(self) -> bool ``` Determine if the flow run limit has been reached. **Returns:** * True if the limit has not been reached, False otherwise. #### `reschedule_current_flow_runs` ```python reschedule_current_flow_runs(self) -> None ``` Reschedules all flow runs that are currently running. This should only be called when the runner is shutting down because it kills all child processes and short-circuits the crash detection logic. #### `start` ```python start(self, run_once: bool = False, webserver: Optional[bool] = None) -> None ``` Starts a runner. The runner will begin monitoring for and executing any scheduled work for all added flows. **Args:** * `run_once`: If True, the runner will run through one query loop and then exit. * `webserver`: a boolean for whether to start a webserver for this runner. If provided, overrides the default on the runner. **Examples:** Initialize a Runner, add two flows, and serve them by starting the Runner:

```python
import asyncio
from prefect import flow, Runner


@flow
def hello_flow(name):
    print(f"hello {name}")


@flow
def goodbye_flow(name):
    print(f"goodbye {name}")


if __name__ == "__main__":
    runner = Runner(name="my-runner")
    # Will be runnable via the API
    runner.add_flow(hello_flow)
    # Run on a cron schedule
    runner.add_flow(goodbye_flow, schedule={"cron": "0 * * * *"})
    asyncio.run(runner.start())
```

#### `stop` ```python stop(self) ``` Stops the runner's polling cycle. # server Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runner-server # `prefect.runner.server` ## Functions ### `perform_health_check` ```python perform_health_check(runner: 'Runner', delay_threshold: int | None = None) -> Callable[..., JSONResponse] ``` ### `run_count` ```python run_count(runner: 'Runner') -> Callable[..., int] ``` ### `shutdown` ```python shutdown(runner: 'Runner') -> Callable[..., JSONResponse] ``` ### `get_deployment_router` ```python get_deployment_router(runner: 'Runner') -> tuple[APIRouter, dict[Hashable, Any]] ``` ### `get_subflow_schemas` ```python get_subflow_schemas(runner: 'Runner') -> dict[str, dict[str, Any]] ``` Load available subflow schemas by filtering for only those subflows in the deployment entrypoint's import space. ### `build_server` ```python build_server(runner: 'Runner') -> FastAPI ``` Build a FastAPI server for a runner.
**Args:** * `runner`: the runner this server interacts with and monitors * `log_level`: the log level to use for the server ### `start_webserver` ```python start_webserver(runner: 'Runner', log_level: str | None = None) -> None ``` Run a FastAPI server for a runner. **Args:** * `runner`: the runner this server interacts with and monitors * `log_level`: the log level to use for the server ## Classes ### `RunnerGenericFlowRunRequest` # storage Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runner-storage # `prefect.runner.storage` ## Functions ### `create_storage_from_source` ```python create_storage_from_source(source: str, pull_interval: Optional[int] = 60) -> RunnerStorage ``` Creates a storage object from a URL. **Args:** * `url`: The URL to create a storage object from. Supports git and `fsspec` URLs. * `pull_interval`: The interval at which to pull contents from remote storage to local storage **Returns:** * A runner storage compatible object ## Classes ### `RunnerStorage` A storage interface for a runner to use to retrieve remotely stored flow code. **Methods:** #### `destination` ```python destination(self) -> Path ``` The local file path to pull contents from remote storage to. #### `pull_code` ```python pull_code(self) -> None ``` Pulls contents from remote storage to the local filesystem. #### `pull_interval` ```python pull_interval(self) -> Optional[int] ``` The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync. #### `set_base_path` ```python set_base_path(self, path: Path) -> None ``` Sets the base path to use when pulling contents from remote storage to local storage. #### `to_pull_step` ```python to_pull_step(self) -> dict[str, Any] | list[dict[str, Any]] ``` Returns a dictionary representation of the storage object that can be used as a deployment pull step. ### `GitCredentials` ### `GitRepository` Pulls the contents of a git repository to the local filesystem. **Args:** * `url`: The URL of the git repository to pull from * `credentials`: A dictionary of credentials to use when pulling from the repository. If a username is provided, an access token must also be provided. * `name`: The name of the repository. If not provided, the name will be inferred from the repository URL. * `branch`: The branch to pull from. Defaults to "main". * `pull_interval`: The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync. * `directories`: The directories to pull from the Git repository (uses git sparse-checkout) **Examples:** Pull the contents of a private git repository to the local filesystem: ```python from prefect.runner.storage import GitRepository storage = GitRepository( url="https://github.com/org/repo.git", credentials={"username": "oauth2", "access_token": "my-access-token"}, ) await storage.pull_code() ``` **Methods:** #### `destination` ```python destination(self) -> Path ``` #### `is_current_commit` ```python is_current_commit(self) -> bool ``` Check if the current commit is the same as the commit SHA #### `is_shallow_clone` ```python is_shallow_clone(self) -> bool ``` Check if the repository is a shallow clone #### `is_sparsely_checked_out` ```python is_sparsely_checked_out(self) -> bool ``` Check if existing repo is sparsely checked out #### `pull_code` ```python pull_code(self) -> None ``` Pulls the contents of the configured repository to the local filesystem. 
#### `pull_interval` ```python pull_interval(self) -> Optional[int] ``` #### `set_base_path` ```python set_base_path(self, path: Path) -> None ``` #### `to_pull_step` ```python to_pull_step(self) -> dict[str, Any] ``` ### `RemoteStorage` Pulls the contents of a remote storage location to the local filesystem. **Args:** * `url`: The URL of the remote storage location to pull from. Supports `fsspec` URLs. Some protocols may require an additional `fsspec` dependency to be installed. Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations) for more details. * `pull_interval`: The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync. * `**settings`: Any additional settings to pass to the `fsspec` filesystem class. **Examples:** Pull the contents of a remote storage location to the local filesystem:

```python
from prefect.runner.storage import RemoteStorage

storage = RemoteStorage(url="s3://my-bucket/my-folder")

await storage.pull_code()
```

Pull the contents of a remote storage location to the local filesystem with additional settings:

```python
from prefect.runner.storage import RemoteStorage
from prefect.blocks.system import Secret

storage = RemoteStorage(
    url="s3://my-bucket/my-folder",
    # Use Secret blocks to keep credentials out of your code
    key=Secret.load("my-aws-access-key"),
    secret=Secret.load("my-aws-secret-key"),
)

await storage.pull_code()
```

**Methods:** #### `destination` ```python destination(self) -> Path ``` The local file path to pull contents from remote storage to. #### `pull_code` ```python pull_code(self) -> None ``` Pulls contents from remote storage to the local filesystem. #### `pull_interval` ```python pull_interval(self) -> Optional[int] ``` The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync. #### `set_base_path` ```python set_base_path(self, path: Path) -> None ``` #### `to_pull_step` ```python to_pull_step(self) -> dict[str, Any] ``` Returns a dictionary representation of the storage object that can be used as a deployment pull step. ### `BlockStorageAdapter` A storage adapter for a storage block object to allow it to be used as a runner storage object. **Methods:** #### `destination` ```python destination(self) -> Path ``` #### `pull_code` ```python pull_code(self) -> None ``` #### `pull_interval` ```python pull_interval(self) -> Optional[int] ``` #### `set_base_path` ```python set_base_path(self, path: Path) -> None ``` #### `to_pull_step` ```python to_pull_step(self) -> dict[str, Any] ``` ### `LocalStorage` Sets the working directory in the local filesystem. **Args:** * `path`: Local file path to set the working directory for the flow **Examples:** Set the working directory for the local path to the flow:

```python
from prefect.runner.storage import LocalStorage

storage = LocalStorage(
    path="/path/to/local/flow_directory",
)
```

**Methods:** #### `destination` ```python destination(self) -> Path ``` #### `pull_code` ```python pull_code(self) -> None ``` #### `pull_interval` ```python pull_interval(self) -> Optional[int] ``` #### `set_base_path` ```python set_base_path(self, path: Path) -> None ``` #### `to_pull_step` ```python to_pull_step(self) -> dict[str, Any] ``` Returns a dictionary representation of the storage object that can be used as a deployment pull step.
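To tie these storage classes together, the sketch below builds a storage object from a URL with `create_storage_from_source` and pulls code once; the URL is a placeholder:

```python
import asyncio

from prefect.runner.storage import create_storage_from_source

# A git URL yields a GitRepository; fsspec URLs such as "s3://bucket/path"
# yield RemoteStorage (and may require an extra like s3fs). The URL here
# is a placeholder.
storage = create_storage_from_source(
    "https://github.com/org/repo.git", pull_interval=None
)

# Clone/pull the repository contents to the local filesystem one time
asyncio.run(storage.pull_code())
```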
# submit Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runner-submit # `prefect.runner.submit` ## Functions ### `submit_to_runner` ```python submit_to_runner(prefect_callable: Flow[Any, Any], parameters: dict[str, Any] | list[dict[str, Any]] | None = None, retry_failed_submissions: bool = True) -> FlowRun | list[FlowRun] ``` Submit a callable in the background via the runner webserver one or more times. **Args:** * `prefect_callable`: the callable to run (only flows are supported for now, but eventually tasks) * `parameters`: keyword arguments to pass to the callable. May be a list of dictionaries where each dictionary represents a discrete invocation of the callable * `retry_failed_submissions`: Whether to retry failed submissions to the runner webserver. ### `wait_for_submitted_runs` ```python wait_for_submitted_runs(flow_run_filter: FlowRunFilter | None = None, task_run_filter: TaskRunFilter | None = None, timeout: float | None = None, poll_interval: float = 3.0) -> uuid.UUID | None ``` Wait for completion of any provided flow runs (eventually task runs), as well as subflow runs of the current flow run (if called from within a flow run and subflow runs exist). **Args:** * `flow_run_filter`: A filter to apply to the flow runs to wait for. * `task_run_filter`: A filter to apply to the task runs to wait for. # TODO: /task/run * `timeout`: How long to wait for completion of all runs (seconds). * `poll_interval`: How long to wait between polling each run's state (seconds). # utils Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runner-utils # `prefect.runner.utils` ## Functions ### `inject_schemas_into_openapi` ```python inject_schemas_into_openapi(webserver: FastAPI, schemas_to_inject: dict[Hashable, Any]) -> dict[str, Any] ``` Augments the webserver's OpenAPI schema with additional schemas from deployments / flows / tasks. **Args:** * `webserver`: The FastAPI instance representing the webserver. * `schemas_to_inject`: A dictionary of OpenAPI schemas to integrate. **Returns:** * The augmented OpenAPI schema dictionary. ### `merge_definitions` ```python merge_definitions(injected_schemas: dict[Hashable, Any], openapi_schema: dict[str, Any]) -> dict[str, Any] ``` Integrates definitions from injected schemas into the OpenAPI components. **Args:** * `injected_schemas`: A dictionary of deployment-specific schemas. * `openapi_schema`: The base OpenAPI schema to update. ### `update_refs_in_schema` ```python update_refs_in_schema(schema_item: dict[str, Any] | list[Any], new_ref: str) -> None ``` Recursively replaces `$ref` with a new reference base in a schema item. **Args:** * `schema_item`: A schema or part of a schema to update references in. * `new_ref`: The new base string to replace in `$ref` values. ### `update_refs_to_components` ```python update_refs_to_components(openapi_schema: dict[str, Any]) -> dict[str, Any] ``` Updates all `$ref` fields in the OpenAPI schema to reference the components section. **Args:** * `openapi_schema`: The OpenAPI schema to modify `$ref` fields in. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runtime-__init__ # `prefect.runtime` Module for easily accessing dynamic attributes for a given run, especially those generated from deployments. 
Example usage: ```python from prefect.runtime import deployment print(f"This script is running from deployment {deployment.id} with parameters {deployment.parameters}") ``` # deployment Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runtime-deployment # `prefect.runtime.deployment` Access attributes of the current deployment run dynamically. Note that if a deployment is not currently being run, all attributes will return empty values. You can mock the runtime attributes for testing purposes by setting environment variables prefixed with `PREFECT__RUNTIME__DEPLOYMENT`. Example usage: ```python from prefect.runtime import deployment def get_task_runner(): task_runner_config = deployment.parameters.get("runner_config", "default config here") return DummyTaskRunner(task_runner_specs=task_runner_config) ``` Available attributes: * `id`: the deployment's unique ID * `name`: the deployment's name * `version`: the deployment's version * `flow_run_id`: the current flow run ID for this deployment * `parameters`: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values set on the deployment object or those directly provided via API for this run ## Functions ### `get_id` ```python get_id() -> Optional[str] ``` ### `get_parameters` ```python get_parameters() -> dict[str, Any] ``` ### `get_name` ```python get_name() -> Optional[str] ``` ### `get_version` ```python get_version() -> Optional[str] ``` ### `get_flow_run_id` ```python get_flow_run_id() -> Optional[str] ``` # flow_run Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runtime-flow_run # `prefect.runtime.flow_run` Access attributes of the current flow run dynamically. Note that if a flow run cannot be discovered, all attributes will return empty values. You can mock the runtime attributes for testing purposes by setting environment variables prefixed with `PREFECT__RUNTIME__FLOW_RUN`. 
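For example, a test can override individual attributes before reading values from the module; the exact variable naming below (`PREFECT__RUNTIME__FLOW_RUN__<ATTRIBUTE>`) is an assumption for illustration:

```python
import os

from prefect.runtime import flow_run

# Assumed pattern: PREFECT__RUNTIME__FLOW_RUN__<ATTRIBUTE NAME>
os.environ["PREFECT__RUNTIME__FLOW_RUN__NAME"] = "my-mocked-run"

# Outside a real flow run this would normally be empty; with the mock
# set it returns the overridden value.
print(flow_run.name)
```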
Available attributes: * `id`: the flow run's unique ID * `tags`: the flow run's set of tags * `scheduled_start_time`: the flow run's expected scheduled start time; defaults to now if not present * `name`: the name of the flow run * `flow_name`: the name of the flow * `flow_version`: the version of the flow * `parameters`: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values explicitly passed for the run * `parent_flow_run_id`: the ID of the flow run that triggered this run, if any * `parent_deployment_id`: the ID of the deployment that triggered this run, if any * `run_count`: the number of times this flow run has been run ## Functions ### `get_id` ```python get_id() -> Optional[str] ``` ### `get_tags` ```python get_tags() -> List[str] ``` ### `get_run_count` ```python get_run_count() -> int ``` ### `get_name` ```python get_name() -> Optional[str] ``` ### `get_flow_name` ```python get_flow_name() -> Optional[str] ``` ### `get_flow_version` ```python get_flow_version() -> Optional[str] ``` ### `get_scheduled_start_time` ```python get_scheduled_start_time() -> DateTime ``` ### `get_parameters` ```python get_parameters() -> Dict[str, Any] ``` ### `get_parent_flow_run_id` ```python get_parent_flow_run_id() -> Optional[str] ``` ### `get_parent_deployment_id` ```python get_parent_deployment_id() -> Optional[str] ``` ### `get_root_flow_run_id` ```python get_root_flow_run_id() -> str ``` ### `get_flow_run_api_url` ```python get_flow_run_api_url() -> Optional[str] ``` ### `get_flow_run_ui_url` ```python get_flow_run_ui_url() -> Optional[str] ``` ### `get_job_variables` ```python get_job_variables() -> Optional[Dict[str, Any]] ``` # task_run Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-runtime-task_run # `prefect.runtime.task_run` Access attributes of the current task run dynamically. Note that if a task run cannot be discovered, all attributes will return empty values. You can mock the runtime attributes for testing purposes by setting environment variables prefixed with `PREFECT__RUNTIME__TASK_RUN`. Available attributes: * `id`: the task run's unique ID * `name`: the name of the task run * `tags`: the task run's set of tags * `parameters`: the parameters the task was called with * `run_count`: the number of times this task run has been run * `task_name`: the name of the task ## Functions ### `get_id` ```python get_id() -> str | None ``` ### `get_tags` ```python get_tags() -> list[str] ``` ### `get_run_count` ```python get_run_count() -> int ``` ### `get_name` ```python get_name() -> str | None ``` ### `get_task_name` ```python get_task_name() -> str | None ``` ### `get_parameters` ```python get_parameters() -> dict[str, Any] ``` ### `get_task_run_api_url` ```python get_task_run_api_url() -> str | None ``` ### `get_task_run_ui_url` ```python get_task_run_ui_url() -> str | None ``` # schedules Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-schedules # `prefect.schedules` This module contains functionality for creating schedules for deployments. ## Functions ### `Cron` ```python Cron(cron: str, timezone: str | None = None, day_or: bool = True, active: bool = True, parameters: dict[str, Any] | None = None, slug: str | None = None) -> Schedule ``` Creates a cron schedule. **Args:** * `cron`: A valid cron string (e.g. "0 0 \* \* \*"). * `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York). * `day_or`: Control how `day` and `day_of_week` entries are handled.
Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd Friday of a month by setting the days of month and the weekday. * `active`: Whether or not the schedule is active. * `parameters`: A dictionary containing parameter overrides for the schedule. * `slug`: A unique identifier for the schedule. **Returns:** * A cron schedule. **Examples:** Create a cron schedule that runs every day at 12:00 AM UTC: ```python from prefect.schedules import Cron Cron("0 0 * * *") ``` Create a cron schedule that runs every Monday at 8:00 AM in the America/New\_York timezone: ```python from prefect.schedules import Cron Cron("0 8 * * 1", timezone="America/New_York") ``` ### `Interval` ```python Interval(interval: int | float | datetime.timedelta, anchor_date: datetime.datetime | None = None, timezone: str | None = None, active: bool = True, parameters: dict[str, Any] | None = None, slug: str | None = None) -> Schedule ``` Creates an interval schedule. **Args:** * `interval`: The interval to use for the schedule. If an integer is provided, it will be interpreted as seconds. * `anchor_date`: The anchor date to use for the schedule. * `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York). * `active`: Whether or not the schedule is active. * `parameters`: A dictionary containing parameter overrides for the schedule. * `slug`: A unique identifier for the schedule. **Returns:** * An interval schedule. **Examples:** Create an interval schedule that runs every hour: ```python from datetime import timedelta from prefect.schedules import Interval Interval(timedelta(hours=1)) ``` Create an interval schedule that runs every 60 seconds starting at a specific date: ```python from datetime import datetime from prefect.schedules import Interval Interval(60, anchor_date=datetime(2024, 1, 1)) ``` ### `RRule` ```python RRule(rrule: str, timezone: str | None = None, active: bool = True, parameters: dict[str, Any] | None = None, slug: str | None = None) -> Schedule ``` Creates an RRule schedule. **Args:** * `rrule`: A valid RRule string (e.g. "RRULE:FREQ=DAILY;INTERVAL=1"). * `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York). * `active`: Whether or not the schedule is active. * `parameters`: A dictionary containing parameter overrides for the schedule. * `slug`: A unique identifier for the schedule. **Returns:** * An RRule schedule. **Examples:** Create an RRule schedule that runs every day at 12:00 AM UTC: ```python from prefect.schedules import RRule RRule("RRULE:FREQ=DAILY;INTERVAL=1") ``` Create an RRule schedule that runs every 2nd Friday of the month in the America/Chicago timezone: ```python from prefect.schedules import RRule RRule("RRULE:FREQ=MONTHLY;INTERVAL=1;BYDAY=2FR", timezone="America/Chicago") ``` ## Classes ### `Schedule` A dataclass representing a schedule. Note that only one of `interval`, `cron`, or `rrule` can be defined at a time. # serializers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-serializers # `prefect.serializers` Serializer implementations for converting objects to bytes and bytes to objects. All serializers are based on the `Serializer` class and include a `type` string that allows them to be referenced without referencing the actual class. For example, you can often specify the `JSONSerializer` with the string "json". Some serializers support additional settings for configuration of serialization.
These are stored on the instance so the same settings can be used to load saved objects. All serializers must implement `dumps` and `loads` which convert objects to bytes and bytes to an object respectively. ## Functions ### `prefect_json_object_encoder` ```python prefect_json_object_encoder(obj: Any) -> Any ``` `JSONEncoder.default` for encoding objects into JSON with extended type support. Raises a `TypeError` to fallback on other encoders on failure. ### `prefect_json_object_decoder` ```python prefect_json_object_decoder(result: dict[str, Any]) -> Any ``` `JSONDecoder.object_hook` for decoding objects from JSON when previously encoded with `prefect_json_object_encoder` ## Classes ### `Serializer` A serializer that can encode objects of type 'D' into bytes. **Methods:** #### `dumps` ```python dumps(self, obj: D) -> bytes ``` Encode the object into a blob of bytes. #### `loads` ```python loads(self, blob: bytes) -> D ``` Decode the blob of bytes into an object. ### `PickleSerializer` Serializes objects using the pickle protocol. * Uses `cloudpickle` by default. See `picklelib` for using alternative libraries. * Stores the version of the pickle library to check for compatibility during deserialization. * Wraps pickles in base64 for safe transmission. **Methods:** #### `check_picklelib` ```python check_picklelib(cls, value: str) -> str ``` #### `dumps` ```python dumps(self, obj: D) -> bytes ``` #### `loads` ```python loads(self, blob: bytes) -> D ``` ### `JSONSerializer` Serializes data to JSON. Input types must be compatible with the stdlib json library. Wraps the `json` library to serialize to UTF-8 bytes instead of string types. **Methods:** #### `dumps` ```python dumps(self, obj: D) -> bytes ``` #### `dumps_kwargs_cannot_contain_default` ```python dumps_kwargs_cannot_contain_default(cls, value: dict[str, Any]) -> dict[str, Any] ``` #### `loads` ```python loads(self, blob: bytes) -> D ``` #### `loads_kwargs_cannot_contain_object_hook` ```python loads_kwargs_cannot_contain_object_hook(cls, value: dict[str, Any]) -> dict[str, Any] ``` ### `CompressedSerializer` Wraps another serializer, compressing its output. Uses `lzma` by default. See `compressionlib` for using alternative libraries. **Methods:** #### `check_compressionlib` ```python check_compressionlib(cls, value: str) -> str ``` #### `dumps` ```python dumps(self, obj: D) -> bytes ``` #### `loads` ```python loads(self, blob: bytes) -> D ``` #### `validate_serializer` ```python validate_serializer(cls, value: Union[str, Serializer[D]]) -> Serializer[D] ``` ### `CompressedPickleSerializer` A compressed serializer preconfigured to use the pickle serializer. ### `CompressedJSONSerializer` A compressed serializer preconfigured to use the json serializer. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-__init__ # `prefect.server` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-__init__ # `prefect.server.api` *This module is empty or contains only private/internal implementations.* # admin Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-admin # `prefect.server.api.admin` Routes for admin-level interactions with the Prefect REST API. ## Functions ### `read_settings` ```python read_settings() -> prefect.settings.Settings ``` Get the current Prefect REST API settings. Secret setting values will be obfuscated. 
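As a rough illustration of calling an admin route, the sketch below fetches the obfuscated settings from a local server; it assumes the default server address and that these routes are mounted under the `/api` prefix:

```python
import httpx

# Assumes a local Prefect server at the default address
API = "http://127.0.0.1:4200/api"

with httpx.Client() as client:
    response = client.get(f"{API}/admin/settings")
    response.raise_for_status()
    # Secret values in the returned settings are obfuscated
    print(response.json())
```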
### `read_version` ```python read_version() -> str ``` Returns the Prefect version number ### `clear_database` ```python clear_database(db: PrefectDBInterface = Depends(provide_database_interface), confirm: bool = Body(False, embed=True, description='Pass confirm=True to confirm you want to modify the database.'), response: Response = None) -> None ``` Clear all database tables without dropping them. ### `drop_database` ```python drop_database(db: PrefectDBInterface = Depends(provide_database_interface), confirm: bool = Body(False, embed=True, description='Pass confirm=True to confirm you want to modify the database.'), response: Response = None) -> None ``` Drop all database objects. ### `create_database` ```python create_database(db: PrefectDBInterface = Depends(provide_database_interface), confirm: bool = Body(False, embed=True, description='Pass confirm=True to confirm you want to modify the database.'), response: Response = None) -> None ``` Create all database objects. # artifacts Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-artifacts # `prefect.server.api.artifacts` Routes for interacting with artifact objects. ## Functions ### `create_artifact` ```python create_artifact(artifact: actions.ArtifactCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Artifact ``` Create an artifact. For more information, see [https://docs.prefect.io/v3/develop/artifacts](https://docs.prefect.io/v3/develop/artifacts). ### `read_artifact` ```python read_artifact(artifact_id: UUID = Path(..., description='The ID of the artifact to retrieve.', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Artifact ``` Retrieve an artifact from the database. ### `read_latest_artifact` ```python read_latest_artifact(key: str = Path(..., description='The key of the artifact to retrieve.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Artifact ``` Retrieve the latest artifact from the artifact table. ### `read_artifacts` ```python read_artifacts(sort: sorting.ArtifactSort = Body(sorting.ArtifactSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), artifacts: filters.ArtifactFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[core.Artifact] ``` Retrieve artifacts from the database. ### `read_latest_artifacts` ```python read_latest_artifacts(sort: sorting.ArtifactCollectionSort = Body(sorting.ArtifactCollectionSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), artifacts: filters.ArtifactCollectionFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[core.ArtifactCollection] ``` Retrieve artifacts from the database. ### `count_artifacts` ```python count_artifacts(artifacts: filters.ArtifactFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count artifacts from the database. 
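These routes back the artifact helpers in the Python SDK; as a brief sketch (the key, content, and description are placeholders), creating a markdown artifact from a flow ultimately exercises the create route documented above:

```python
from prefect import flow
from prefect.artifacts import create_markdown_artifact


@flow
def report() -> None:
    # Creates (or adds a new version of) an artifact with this key
    create_markdown_artifact(
        key="daily-report",
        markdown="# Daily report\n\nEverything is green.",
        description="Example report artifact",
    )


if __name__ == "__main__":
    report()
```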
### `count_latest_artifacts` ```python count_latest_artifacts(artifacts: filters.ArtifactCollectionFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count artifacts from the database. ### `update_artifact` ```python update_artifact(artifact: actions.ArtifactUpdate, artifact_id: UUID = Path(..., description='The ID of the artifact to update.', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update an artifact in the database. ### `delete_artifact` ```python delete_artifact(artifact_id: UUID = Path(..., description='The ID of the artifact to delete.', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete an artifact from the database. # automations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-automations # `prefect.server.api.automations` ## Functions ### `create_automation` ```python create_automation(automation: AutomationCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> Automation ``` Create an automation. For more information, see [https://docs.prefect.io/v3/automate](https://docs.prefect.io/v3/automate). ### `update_automation` ```python update_automation(automation: AutomationUpdate, automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `patch_automation` ```python patch_automation(automation: AutomationPartialUpdate, automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_automation` ```python delete_automation(automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_automations` ```python read_automations(sort: AutomationSort = Body(AutomationSort.NAME_ASC), limit: int = LimitBody(), offset: int = Body(0, ge=0), automations: Optional[AutomationFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[Automation] ``` ### `count_automations` ```python count_automations(db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` ### `read_automation` ```python read_automation(automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> Automation ``` ### `read_automations_related_to_resource` ```python read_automations_related_to_resource(resource_id: str = Path(..., alias='resource_id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[Automation] ``` ### `delete_automations_owned_by_resource` ```python delete_automations_owned_by_resource(resource_id: str = Path(..., alias='resource_id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # block_capabilities Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-block_capabilities # `prefect.server.api.block_capabilities` Routes for interacting with block capabilities. ## Functions ### `read_available_block_capabilities` ```python read_available_block_capabilities(db: PrefectDBInterface = Depends(provide_database_interface)) -> List[str] ``` Get available block capabilities. For more information, see [https://docs.prefect.io/v3/develop/blocks](https://docs.prefect.io/v3/develop/blocks). 
# block_documents Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-block_documents # `prefect.server.api.block_documents` Routes for interacting with block objects. ## Functions ### `create_block_document` ```python create_block_document(block_document: schemas.actions.BlockDocumentCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockDocument ``` Create a new block document. For more information, see [https://docs.prefect.io/v3/develop/blocks](https://docs.prefect.io/v3/develop/blocks). ### `read_block_documents` ```python read_block_documents(limit: int = dependencies.LimitBody(), block_documents: Optional[schemas.filters.BlockDocumentFilter] = None, block_types: Optional[schemas.filters.BlockTypeFilter] = None, block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, include_secrets: bool = Body(False, description='Whether to include sensitive values in the block document.'), sort: Optional[schemas.sorting.BlockDocumentSort] = Body(schemas.sorting.BlockDocumentSort.NAME_ASC), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.BlockDocument] ``` Query for block documents. ### `count_block_documents` ```python count_block_documents(block_documents: Optional[schemas.filters.BlockDocumentFilter] = None, block_types: Optional[schemas.filters.BlockTypeFilter] = None, block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count block documents. ### `read_block_document_by_id` ```python read_block_document_by_id(block_document_id: UUID = Path(..., description='The block document id', alias='id'), include_secrets: bool = Query(False, description='Whether to include sensitive values in the block document.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockDocument ``` ### `delete_block_document` ```python delete_block_document(block_document_id: UUID = Path(..., description='The block document id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `update_block_document_data` ```python update_block_document_data(block_document: schemas.actions.BlockDocumentUpdate, block_document_id: UUID = Path(..., description='The block document id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # block_schemas Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-block_schemas # `prefect.server.api.block_schemas` Routes for interacting with block schema objects. ## Functions ### `create_block_schema` ```python create_block_schema(block_schema: schemas.actions.BlockSchemaCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockSchema ``` Create a block schema. For more information, see [https://docs.prefect.io/v3/develop/blocks](https://docs.prefect.io/v3/develop/blocks). ### `delete_block_schema` ```python delete_block_schema(block_schema_id: UUID = Path(..., description='The block schema id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface), api_version: str = Depends(dependencies.provide_request_api_version)) -> None ``` Delete a block schema by id. 
### `read_block_schemas` ```python read_block_schemas(block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.BlockSchema] ``` Read all block schemas, optionally filtered by type ### `read_block_schema_by_id` ```python read_block_schema_by_id(block_schema_id: UUID = Path(..., description='The block schema id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockSchema ``` Get a block schema by id. ### `read_block_schema_by_checksum` ```python read_block_schema_by_checksum(block_schema_checksum: str = Path(..., description='The block schema checksum', alias='checksum'), db: PrefectDBInterface = Depends(provide_database_interface), version: Optional[str] = Query(None, description='Version of block schema. If not provided the most recently created block schema with the matching checksum will be returned.')) -> schemas.core.BlockSchema ``` # block_types Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-block_types # `prefect.server.api.block_types` ## Functions ### `create_block_type` ```python create_block_type(block_type: schemas.actions.BlockTypeCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockType ``` Create a new block type. For more information, see [https://docs.prefect.io/v3/develop/blocks](https://docs.prefect.io/v3/develop/blocks). ### `read_block_type_by_id` ```python read_block_type_by_id(block_type_id: UUID = Path(..., description='The block type ID', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockType ``` Get a block type by ID. ### `read_block_type_by_slug` ```python read_block_type_by_slug(block_type_slug: str = Path(..., description='The block type name', alias='slug'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockType ``` Get a block type by name. ### `read_block_types` ```python read_block_types(block_types: Optional[schemas.filters.BlockTypeFilter] = None, block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.BlockType] ``` Gets all block types. Optionally limit return with limit and offset. ### `update_block_type` ```python update_block_type(block_type: schemas.actions.BlockTypeUpdate, block_type_id: UUID = Path(..., description='The block type ID', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update a block type. 
### `delete_block_type` ```python delete_block_type(block_type_id: UUID = Path(..., description='The block type ID', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_block_documents_for_block_type` ```python read_block_documents_for_block_type(db: PrefectDBInterface = Depends(provide_database_interface), block_type_slug: str = Path(..., description='The block type name', alias='slug'), include_secrets: bool = Query(False, description='Whether to include sensitive values in the block document.')) -> List[schemas.core.BlockDocument] ``` ### `read_block_document_by_name_for_block_type` ```python read_block_document_by_name_for_block_type(db: PrefectDBInterface = Depends(provide_database_interface), block_type_slug: str = Path(..., description='The block type name', alias='slug'), block_document_name: str = Path(..., description='The block type name'), include_secrets: bool = Query(False, description='Whether to include sensitive values in the block document.')) -> schemas.core.BlockDocument ``` ### `install_system_block_types` ```python install_system_block_types(db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # clients Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-clients # `prefect.server.api.clients` ## Classes ### `BaseClient` ### `OrchestrationClient` **Methods:** #### `create_flow_run` ```python create_flow_run(self, deployment_id: UUID, flow_run_create: DeploymentFlowRunCreate) -> Response ``` #### `pause_deployment` ```python pause_deployment(self, deployment_id: UUID) -> Response ``` #### `pause_work_pool` ```python pause_work_pool(self, work_pool_name: str) -> Response ``` #### `pause_work_queue` ```python pause_work_queue(self, work_queue_id: UUID) -> Response ``` #### `read_block_document_raw` ```python read_block_document_raw(self, block_document_id: UUID, include_secrets: bool = True) -> Response ``` #### `read_concurrency_limit_v2_raw` ```python read_concurrency_limit_v2_raw(self, concurrency_limit_id: UUID) -> Response ``` #### `read_deployment` ```python read_deployment(self, deployment_id: UUID) -> Optional[DeploymentResponse] ``` #### `read_deployment_raw` ```python read_deployment_raw(self, deployment_id: UUID) -> Response ``` #### `read_flow_raw` ```python read_flow_raw(self, flow_id: UUID) -> Response ``` #### `read_flow_run_raw` ```python read_flow_run_raw(self, flow_run_id: UUID) -> Response ``` #### `read_task_run_raw` ```python read_task_run_raw(self, task_run_id: UUID) -> Response ``` #### `read_work_pool` ```python read_work_pool(self, work_pool_id: UUID) -> Optional[WorkPool] ``` #### `read_work_pool_raw` ```python read_work_pool_raw(self, work_pool_id: UUID) -> Response ``` #### `read_work_queue_raw` ```python read_work_queue_raw(self, work_queue_id: UUID) -> Response ``` #### `read_work_queue_status_raw` ```python read_work_queue_status_raw(self, work_queue_id: UUID) -> Response ``` #### `read_workspace_variables` ```python read_workspace_variables(self, names: Optional[List[str]] = None) -> Dict[str, StrictVariableValue] ``` #### `request` ```python request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` #### `resume_deployment` ```python resume_deployment(self, deployment_id: UUID) -> Response ``` #### `resume_flow_run` ```python resume_flow_run(self, flow_run_id: UUID) -> OrchestrationResult ``` #### `resume_work_pool` ```python 
resume_work_pool(self, work_pool_name: str) -> Response ``` #### `resume_work_queue` ```python resume_work_queue(self, work_queue_id: UUID) -> Response ``` #### `set_flow_run_state` ```python set_flow_run_state(self, flow_run_id: UUID, state: StateCreate) -> Response ``` ### `WorkPoolsOrchestrationClient` **Methods:** #### `read_work_pool` ```python read_work_pool(self, work_pool_name: str) -> WorkPool ``` Reads information for a given work pool. Args: work\_pool\_name: The name of the work pool for which to get information. Returns: Information about the requested work pool. #### `request` ```python request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` # collections Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-collections # `prefect.server.api.collections` ## Functions ### `read_view_content` ```python read_view_content(view: str) -> Dict[str, Any] ``` Reads the content of a view from the prefect-collection-registry. ### `get_collection_view` ```python get_collection_view(view: str) -> dict[str, Any] ``` # concurrency_limits Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-concurrency_limits # `prefect.server.api.concurrency_limits` Routes for interacting with concurrency limit objects. This module provides a V1 API adapter that routes requests to the V2 concurrency system. After the migration, V1 limits are converted to V2, but the V1 API continues to work for backward compatibility. ## Functions ### `create_concurrency_limit` ```python create_concurrency_limit(concurrency_limit: schemas.actions.ConcurrencyLimitCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimit ``` Create a task run concurrency limit. For more information, see [https://docs.prefect.io/v3/develop/task-run-limits](https://docs.prefect.io/v3/develop/task-run-limits). ### `read_concurrency_limit` ```python read_concurrency_limit(concurrency_limit_id: UUID = Path(..., description='The concurrency limit id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimit ``` Get a concurrency limit by id. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. ### `read_concurrency_limit_by_tag` ```python read_concurrency_limit_by_tag(tag: str = Path(..., description='The tag name', alias='tag'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimit ``` Get a concurrency limit by tag. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. ### `read_concurrency_limits` ```python read_concurrency_limits(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[schemas.core.ConcurrencyLimit] ``` Query for concurrency limits. For each concurrency limit the `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag.
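A minimal sketch of the tag-based (V1) concurrency limit routes above, assuming a local server at the default API URL; the route paths are inferred from the function names, and the `database` tag and slot count are illustrative.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server

with httpx.Client(base_url=API) as client:
    # Create a 10-slot task-run concurrency limit for the "database" tag.
    resp = client.post(
        "/concurrency_limits/",
        json={"tag": "database", "concurrency_limit": 10},
    )
    resp.raise_for_status()

    # Read it back by tag; `active_slots` lists task runs currently holding a slot.
    resp = client.get("/concurrency_limits/tag/database")
    resp.raise_for_status()
    limit = resp.json()
    print(limit["concurrency_limit"], limit.get("active_slots", []))
```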
### `reset_concurrency_limit_by_tag` ```python reset_concurrency_limit_by_tag(tag: str = Path(..., description='The tag name'), slot_override: Optional[List[UUID]] = Body(None, embed=True, description='Manual override for active concurrency limit slots.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_concurrency_limit` ```python delete_concurrency_limit(concurrency_limit_id: UUID = Path(..., description='The concurrency limit id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_concurrency_limit_by_tag` ```python delete_concurrency_limit_by_tag(tag: str = Path(..., description='The tag name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `increment_concurrency_limits_v1` ```python increment_concurrency_limits_v1(names: List[str] = Body(..., description='The tags to acquire a slot for'), task_run_id: UUID = Body(..., description='The ID of the task run acquiring the slot'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` Increment concurrency limits for the given tags. During migration, this handles both V1 and V2 limits to support mixed states. Post-migration, it only uses V2 with lease-based concurrency. ### `decrement_concurrency_limits_v1` ```python decrement_concurrency_limits_v1(names: List[str] = Body(..., description='The tags to release a slot for'), task_run_id: UUID = Body(..., description='The ID of the task run releasing the slot'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` Decrement concurrency limits for the given tags. Finds and revokes the lease for V2 limits or decrements V1 active slots. Returns the list of limits that were decremented. ## Classes ### `Abort` ### `Delay` # concurrency_limits_v2 Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-concurrency_limits_v2 # `prefect.server.api.concurrency_limits_v2` ## Functions ### `create_concurrency_limit_v2` ```python create_concurrency_limit_v2(concurrency_limit: actions.ConcurrencyLimitV2Create, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimitV2 ``` Create a global (V2) concurrency limit. For more information, see [https://docs.prefect.io/v3/develop/global-concurrency-limits](https://docs.prefect.io/v3/develop/global-concurrency-limits).
### `read_concurrency_limit_v2` ```python read_concurrency_limit_v2(id_or_name: Union[UUID, str] = Path(..., description='The ID or name of the concurrency limit', alias='id_or_name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.GlobalConcurrencyLimitResponse ``` ### `read_all_concurrency_limits_v2` ```python read_all_concurrency_limits_v2(limit: int = LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.GlobalConcurrencyLimitResponse] ``` ### `update_concurrency_limit_v2` ```python update_concurrency_limit_v2(concurrency_limit: actions.ConcurrencyLimitV2Update, id_or_name: Union[UUID, str] = Path(..., description='The ID or name of the concurrency limit', alias='id_or_name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_concurrency_limit_v2` ```python delete_concurrency_limit_v2(id_or_name: Union[UUID, str] = Path(..., description='The ID or name of the concurrency limit', alias='id_or_name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `bulk_increment_active_slots` ```python bulk_increment_active_slots(slots: int = Body(..., gt=0), names: List[str] = Body(..., min_items=1), mode: Literal['concurrency', 'rate_limit'] = Body('concurrency'), create_if_missing: Optional[bool] = Body(None, deprecated='Limits must be explicitly created before acquiring concurrency slots.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` ### `bulk_increment_active_slots_with_lease` ```python bulk_increment_active_slots_with_lease(slots: int = Body(..., gt=0), names: List[str] = Body(..., min_items=1), mode: Literal['concurrency', 'rate_limit'] = Body('concurrency'), lease_duration: float = Body(300, ge=60, le=60 * 60 * 24, description='The duration of the lease in seconds.'), holder: Optional[ConcurrencyLeaseHolder] = Body(None, description='The holder of the lease with type (flow_run, task_run, or deployment) and id.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> ConcurrencyLimitWithLeaseResponse ``` ### `bulk_decrement_active_slots` ```python bulk_decrement_active_slots(slots: int = Body(..., gt=0), names: List[str] = Body(..., min_items=1), occupancy_seconds: Optional[float] = Body(None, gt=0.0), create_if_missing: bool = Body(None, deprecated='Limits must be explicitly created before decrementing active slots.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` ### `bulk_decrement_active_slots_with_lease` ```python bulk_decrement_active_slots_with_lease(lease_id: UUID = Body(..., description='The ID of the lease corresponding to the concurrency limits to decrement.', embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `renew_concurrency_lease` ```python renew_concurrency_lease(lease_id: UUID = Path(..., description='The ID of the lease to renew'), lease_duration: float = Body(300, ge=60, le=60 * 60 * 24, description='The duration of the lease in seconds.', embed=True)) -> None ``` ## Classes ### `MinimalConcurrencyLimitResponse` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
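Putting the V2 routes together, the sketch below creates a global concurrency limit and then acquires and releases slots. It assumes a local server at the default API URL and that the routes live under `/v2/concurrency_limits/` as the module name suggests; payload fields are taken from the signatures above, and the limit name and slot counts are illustrative.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server

with httpx.Client(base_url=API) as client:
    # Create a global (V2) concurrency limit named "db-connections" with 20 slots.
    resp = client.post("/v2/concurrency_limits/", json={"name": "db-connections", "limit": 20})
    resp.raise_for_status()

    # Acquire two slots (fields from `bulk_increment_active_slots`).
    resp = client.post(
        "/v2/concurrency_limits/increment",
        json={"names": ["db-connections"], "slots": 2, "mode": "concurrency"},
    )
    resp.raise_for_status()

    # ...do the limited work, then release the slots, reporting occupancy time.
    resp = client.post(
        "/v2/concurrency_limits/decrement",
        json={"names": ["db-connections"], "slots": 2, "occupancy_seconds": 1.5},
    )
    resp.raise_for_status()
```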
### `ConcurrencyLimitWithLeaseResponse` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # csrf_token Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-csrf_token # `prefect.server.api.csrf_token` ## Functions ### `create_csrf_token` ```python create_csrf_token(db: PrefectDBInterface = Depends(provide_database_interface), client: str = Query(..., description='The client to create a CSRF token for')) -> schemas.core.CsrfToken ``` Create or update a CSRF token for a client # dependencies Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-dependencies # `prefect.server.api.dependencies` Utilities for injecting FastAPI dependencies. ## Functions ### `provide_request_api_version` ```python provide_request_api_version(x_prefect_api_version: str = Header(None)) -> Version | None ``` ### `LimitBody` ```python LimitBody() -> Any ``` A `fastapi.Depends` factory for pulling a `limit: int` parameter from the request body while determining the default from the current settings. ### `get_created_by` ```python get_created_by(prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False), prefect_automation_name: Optional[str] = Header(None, include_in_schema=False)) -> Optional[schemas.core.CreatedBy] ``` A dependency that returns the provenance information to use when creating objects during this API call. ### `get_updated_by` ```python get_updated_by(prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False), prefect_automation_name: Optional[str] = Header(None, include_in_schema=False)) -> Optional[schemas.core.UpdatedBy] ``` A dependency that returns the provenance information to use when updating objects during this API call. ### `is_ephemeral_request` ```python is_ephemeral_request(request: Request) -> bool ``` A dependency that returns whether the request is to an ephemeral server. ### `get_prefect_client_version` ```python get_prefect_client_version(user_agent: Annotated[Optional[str], Header(include_in_schema=False)] = None) -> Optional[str] ``` Attempts to parse out the Prefect client version from the User-Agent header. ## Classes ### `EnforceMinimumAPIVersion` FastAPI Dependency used to check compatibility between the version of the api and a given request. Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version. # deployments Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-deployments # `prefect.server.api.deployments` Routes for interacting with Deployment objects. ## Functions ### `create_deployment` ```python create_deployment(deployment: schemas.actions.DeploymentCreate, response: Response, worker_lookups: WorkerLookups = Depends(WorkerLookups), created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), updated_by: Optional[schemas.core.UpdatedBy] = Depends(dependencies.get_updated_by), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.DeploymentResponse ``` Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow\_id already exists, the deployment is updated. If the deployment has an active schedule, flow runs will be scheduled. 
When upserting, any scheduled runs from the existing deployment will be deleted. For more information, see [https://docs.prefect.io/v3/deploy](https://docs.prefect.io/v3/deploy). ### `update_deployment` ```python update_deployment(deployment: schemas.actions.DeploymentUpdate, deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_deployment_by_name` ```python read_deployment_by_name(flow_name: str = Path(..., description='The name of the flow'), deployment_name: str = Path(..., description='The name of the deployment'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.DeploymentResponse ``` Get a deployment using the name of the flow and the deployment. ### `read_deployment` ```python read_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.DeploymentResponse ``` Get a deployment by id. ### `read_deployments` ```python read_deployments(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, sort: schemas.sorting.DeploymentSort = Body(schemas.sorting.DeploymentSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.DeploymentResponse] ``` Query for deployments. ### `paginate_deployments` ```python paginate_deployments(limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, sort: schemas.sorting.DeploymentSort = Body(schemas.sorting.DeploymentSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> DeploymentPaginationResponse ``` Pagination query for deployments. ### `get_scheduled_flow_runs_for_deployments` ```python get_scheduled_flow_runs_for_deployments(background_tasks: BackgroundTasks, deployment_ids: list[UUID] = Body(default=..., description='The deployment IDs to get scheduled runs for'), scheduled_before: DateTime = Body(None, description='The maximum time to look for scheduled flow runs'), limit: int = dependencies.LimitBody(), db: PrefectDBInterface = Depends(provide_database_interface)) -> list[schemas.responses.FlowRunResponse] ``` Get scheduled runs for a set of deployments. Used by a runner to poll for work. ### `count_deployments` ```python count_deployments(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count deployments.
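A short sketch of reading deployments with the routes above, assuming a local server at the default API URL; the flow and deployment names are placeholders, and the pagination route path and response field names are inferred from `paginate_deployments` and its `DeploymentPaginationResponse` return type.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server

with httpx.Client(base_url=API) as client:
    # Look up a deployment as "<flow name>/<deployment name>" (placeholders below).
    resp = client.get("/deployments/name/my-flow/my-deployment")
    resp.raise_for_status()
    deployment = resp.json()
    print(deployment["id"])

    # Page through deployments, 50 per page, sorted by name.
    resp = client.post("/deployments/paginate", json={"page": 1, "limit": 50, "sort": "NAME_ASC"})
    resp.raise_for_status()
    page = resp.json()
    print(page["count"], "deployments across", page["pages"], "pages")
```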
### `delete_deployment` ```python delete_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a deployment by id. ### `schedule_deployment` ```python schedule_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), start_time: datetime.datetime = Body(None, description='The earliest date to schedule'), end_time: datetime.datetime = Body(None, description='The latest date to schedule'), min_time: float = Body(None, description='Runs will be scheduled until at least this long after the `start_time`', json_schema_extra={'format': 'time-delta'}), min_runs: int = Body(None, description='The minimum number of runs to schedule'), max_runs: int = Body(None, description='The maximum number of runs to schedule'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Schedule runs for a deployment. For backfills, provide start/end times in the past. This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected. * Runs will be generated starting on or after the `start_time` * No more than `max_runs` runs will be generated * No runs will be generated after `end_time` is reached * At least `min_runs` runs will be generated * Runs will be generated until at least `start_time + min_time` is reached ### `resume_deployment` ```python resume_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Set a deployment schedule to active. Runs will be scheduled immediately. ### `pause_deployment` ```python pause_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted. ### `create_flow_run_from_deployment` ```python create_flow_run_from_deployment(flow_run: schemas.actions.DeploymentFlowRunCreate, deployment_id: UUID = Path(..., description='The deployment id', alias='id'), created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), db: PrefectDBInterface = Depends(provide_database_interface), worker_lookups: WorkerLookups = Depends(WorkerLookups), response: Response = None) -> schemas.responses.FlowRunResponse ``` Create a flow run from a deployment. Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used. If no state is provided, the flow run will be created in a SCHEDULED state. ### `work_queue_check_for_deployment` ```python work_queue_check_for_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.WorkQueue] ``` Get list of work-queues that are able to pick up the specified deployment. This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments. 
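The sketch below triggers an ad-hoc run with `create_flow_run_from_deployment`, assuming a local server at the default API URL and a placeholder deployment ID; the parameters and tags are illustrative.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server
deployment_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

with httpx.Client(base_url=API) as client:
    # Trigger an ad-hoc run; omitted parameters fall back to the deployment's
    # defaults and the run starts in a SCHEDULED state.
    resp = client.post(
        f"/deployments/{deployment_id}/create_flow_run",
        json={"parameters": {"name": "ad-hoc"}, "tags": ["manual"]},
    )
    resp.raise_for_status()
    flow_run = resp.json()
    print("created flow run", flow_run["id"], flow_run["state"]["type"])
```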
### `read_deployment_schedules` ```python read_deployment_schedules(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.DeploymentSchedule] ``` ### `create_deployment_schedules` ```python create_deployment_schedules(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), schedules: List[schemas.actions.DeploymentScheduleCreate] = Body(default=..., description='The schedules to create'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.DeploymentSchedule] ``` ### `update_deployment_schedule` ```python update_deployment_schedule(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), schedule_id: UUID = Path(..., description='The schedule id', alias='schedule_id'), schedule: schemas.actions.DeploymentScheduleUpdate = Body(default=..., description='The updated schedule'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_deployment_schedule` ```python delete_deployment_schedule(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), schedule_id: UUID = Path(..., description='The schedule id', alias='schedule_id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # events Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-events # `prefect.server.api.events` ## Functions ### `create_events` ```python create_events(events: List[Event], ephemeral_request: bool = Depends(is_ephemeral_request)) -> None ``` Record a batch of Events. For more information, see [https://docs.prefect.io/v3/concepts/events](https://docs.prefect.io/v3/concepts/events). ### `stream_events_in` ```python stream_events_in(websocket: WebSocket) -> None ``` Open a WebSocket to stream incoming Events ### `stream_workspace_events_out` ```python stream_workspace_events_out(websocket: WebSocket) -> None ``` Open a WebSocket to stream Events ### `verified_page_token` ```python verified_page_token(page_token: str = Query(..., alias='page-token')) -> str ``` ### `read_events` ```python read_events(request: Request, filter: Optional[EventFilter] = Body(None, description='Additional optional filter criteria to narrow down the set of Events'), limit: int = Body(INTERACTIVE_PAGE_SIZE, ge=0, le=INTERACTIVE_PAGE_SIZE, embed=True, description='The number of events to return with each page'), db: PrefectDBInterface = Depends(provide_database_interface)) -> EventPage ``` Queries for Events matching the given filter criteria in the given Account. Returns the first page of results, and the URL to request the next page (if there are more results). ### `read_account_events_page` ```python read_account_events_page(request: Request, page_token: str = Depends(verified_page_token), db: PrefectDBInterface = Depends(provide_database_interface)) -> EventPage ``` Returns the next page of Events for a previous query against the given Account, and the URL to request the next page (if there are more results). 
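A hedged sketch of paging through events with `read_events`: it assumes a local server at the default API URL, that the route is `POST /events/filter`, and that the `EventPage` response carries `events` and `next_page` fields; the event-name prefix filter is only an illustration.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server

with httpx.Client(base_url=API) as client:
    # First page of events matching a filter (body fields from `read_events`);
    # the event-name prefix below is only an illustration.
    resp = client.post(
        "/events/filter",
        json={"limit": 20, "filter": {"event": {"prefix": ["prefect.flow-run."]}}},
    )
    resp.raise_for_status()
    page = resp.json()
    for event in page["events"]:
        print(event["occurred"], event["event"])

    # When more results exist, the response carries the URL of the next page.
    if page.get("next_page"):
        next_page = client.get(page["next_page"]).json()
```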
### `generate_next_page_link` ```python generate_next_page_link(request: Request, page_token: Optional[str]) -> Optional[str] ``` ### `count_account_events` ```python count_account_events(filter: EventFilter, countable: Countable = Path(...), time_unit: TimeUnit = Body(default=TimeUnit.day), time_interval: float = Body(default=1.0, ge=0.01), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[EventCount] ``` Returns distinct objects and the count of events associated with them. Objects that can be counted include the day the event occurred, the type of event, or the IDs of the resources associated with the event. ### `handle_event_count_request` ```python handle_event_count_request(session: AsyncSession, filter: EventFilter, countable: Countable, time_unit: TimeUnit, time_interval: float) -> List[EventCount] ``` # flow_run_states Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-flow_run_states # `prefect.server.api.flow_run_states` Routes for interacting with flow run state objects. ## Functions ### `read_flow_run_state` ```python read_flow_run_state(flow_run_state_id: UUID = Path(..., description='The flow run state id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.states.State ``` Get a flow run state by id. For more information, see [https://docs.prefect.io/v3/develop/write-flows#final-state-determination](https://docs.prefect.io/v3/develop/write-flows#final-state-determination). ### `read_flow_run_states` ```python read_flow_run_states(flow_run_id: UUID, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.states.State] ``` Get states associated with a flow run. # flow_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-flow_runs # `prefect.server.api.flow_runs` Routes for interacting with flow run objects. ## Functions ### `create_flow_run` ```python create_flow_run(flow_run: schemas.actions.FlowRunCreate, db: PrefectDBInterface = Depends(provide_database_interface), response: Response = None, created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), worker_lookups: WorkerLookups = Depends(WorkerLookups)) -> schemas.responses.FlowRunResponse ``` Create a flow run. If a flow run with the same flow\_id and idempotency key already exists, the existing flow run will be returned. If no state is provided, the flow run will be created in a PENDING state. For more information, see [https://docs.prefect.io/v3/develop/write-flows](https://docs.prefect.io/v3/develop/write-flows). ### `update_flow_run` ```python update_flow_run(flow_run: schemas.actions.FlowRunUpdate, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates a flow run. ### `count_flow_runs` ```python count_flow_runs(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count flow runs.
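For example, a flow run can be created directly against a flow with `create_flow_run`, as sketched below; the server deduplicates on the idempotency key as described above. The flow ID is a placeholder and the route path is assumed from the router name.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server
flow_id = "00000000-0000-0000-0000-000000000000"  # placeholder: an existing flow's ID

with httpx.Client(base_url=API) as client:
    # Create a flow run for a flow; reusing the same idempotency key returns the
    # existing run instead of creating a duplicate.
    resp = client.post(
        "/flow_runs/",
        json={
            "flow_id": flow_id,
            "name": "manual-run",
            "idempotency_key": "backfill-2024-01-01",
            "state": {"type": "SCHEDULED"},
        },
    )
    resp.raise_for_status()
    print(resp.json()["id"])
```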
### `average_flow_run_lateness` ```python average_flow_run_lateness(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> Optional[float] ``` Query for average flow-run lateness in seconds. ### `flow_run_history` ```python flow_run_history(history_start: DateTime = Body(..., description="The history's start time."), history_end: DateTime = Body(..., description="The history's end time."), history_interval: float = Body(..., description='The size of each history interval, in seconds. Must be at least 1 second.', json_schema_extra={'format': 'time-delta'}, alias='history_interval_seconds'), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.HistoryResponse] ``` Query for flow run history data across a given range and interval. ### `read_flow_run` ```python read_flow_run(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.FlowRunResponse ``` Get a flow run by id. ### `read_flow_run_graph_v1` ```python read_flow_run_graph_v1(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[DependencyResult] ``` Get a task run dependency map for a given flow run. ### `read_flow_run_graph_v2` ```python read_flow_run_graph_v2(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), since: datetime.datetime = Query(default=jsonable_encoder(earliest_possible_datetime()), description='Only include runs that start or end after this time.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> Graph ``` Get a graph of the tasks and subflow runs for the given flow run ### `resume_flow_run` ```python resume_flow_run(response: Response, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface), run_input: Optional[dict[str, Any]] = Body(default=None, embed=True), flow_policy: type[FlowRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_flow_policy), task_policy: type[TaskRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_task_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> OrchestrationResult ``` Resume a paused flow run. 
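As an illustration of `flow_run_history` above, the sketch below requests hourly buckets for the last day; it assumes a local server at the default API URL, and the response field names (`interval_start`, `states`) are taken on the assumption that `HistoryResponse` exposes them.

```python
from datetime import datetime, timedelta, timezone

import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server
now = datetime.now(timezone.utc)

with httpx.Client(base_url=API) as client:
    # Hourly flow run history for the last day; note the
    # `history_interval_seconds` alias from the signature above.
    resp = client.post(
        "/flow_runs/history",
        json={
            "history_start": (now - timedelta(days=1)).isoformat(),
            "history_end": now.isoformat(),
            "history_interval_seconds": 3600,
        },
    )
    resp.raise_for_status()
    for bucket in resp.json():
        print(bucket["interval_start"], bucket["states"])
```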
### `read_flow_runs` ```python read_flow_runs(sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.FlowRunResponse] ``` Query for flow runs. ### `delete_flow_run` ```python delete_flow_run(background_tasks: BackgroundTasks, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a flow run by id. ### `delete_flow_run_logs` ```python delete_flow_run_logs(db: PrefectDBInterface, flow_run_id: UUID) -> None ``` ### `set_flow_run_state` ```python set_flow_run_state(response: Response, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), state: schemas.actions.StateCreate = Body(..., description='The intended state.'), force: bool = Body(False, description='If false, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.'), db: PrefectDBInterface = Depends(provide_database_interface), flow_policy: type[FlowRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_flow_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> OrchestrationResult ``` Set a flow run state, invoking any orchestration rules. ### `create_flow_run_input` ```python create_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), key: str = Body(..., description='The input key'), value: bytes = Body(..., description='The value of the input'), sender: Optional[str] = Body(None, description='The sender of the input'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Create a key/value input for a flow run. 
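A minimal sketch of `set_flow_run_state`, assuming a local server at the default API URL and a placeholder flow run ID; the state type shown is one of the standard Prefect state types, and the orchestrator's decision comes back in the `OrchestrationResult`.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server
flow_run_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

with httpx.Client(base_url=API) as client:
    # Ask the orchestrator to cancel a flow run. With force=False the normal
    # orchestration rules may accept, reject, delay, or abort the transition.
    resp = client.post(
        f"/flow_runs/{flow_run_id}/set_state",
        json={"state": {"type": "CANCELLING", "name": "Cancelling"}, "force": False},
    )
    resp.raise_for_status()
    result = resp.json()
    print(result["status"])  # e.g. ACCEPT, REJECT, WAIT, or ABORT
```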
### `filter_flow_run_input` ```python filter_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), prefix: str = Body(..., description='The input key prefix', embed=True), limit: int = Body(1, description='The maximum number of results to return', embed=True), exclude_keys: List[str] = Body([], description='Exclude inputs with these keys', embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.FlowRunInput] ``` Filter flow run inputs by key prefix. ### `read_flow_run_input` ```python read_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), key: str = Path(..., description='The input key', alias='key'), db: PrefectDBInterface = Depends(provide_database_interface)) -> PlainTextResponse ``` Read the value of a flow run input. ### `delete_flow_run_input` ```python delete_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), key: str = Path(..., description='The input key', alias='key'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a flow run input. ### `paginate_flow_runs` ```python paginate_flow_runs(sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC), limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> FlowRunPaginationResponse ``` Pagination query for flow runs. ### `download_logs` ```python download_logs(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> StreamingResponse ``` Download all flow run logs as a CSV file, collecting all logs until there are no more logs to retrieve. ### `update_flow_run_labels` ```python update_flow_run_labels(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), labels: Dict[str, Any] = Body(..., description='The labels to update'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update the labels of a flow run. # flows Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-flows # `prefect.server.api.flows` Routes for interacting with flow objects. ## Functions ### `create_flow` ```python create_flow(flow: schemas.actions.FlowCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.Flow ``` Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned. For more information, see [https://docs.prefect.io/v3/develop/write-flows](https://docs.prefect.io/v3/develop/write-flows). ### `update_flow` ```python update_flow(flow: schemas.actions.FlowUpdate, flow_id: UUID = Path(..., description='The flow id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates a flow.
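A short sketch of the `flows` routes just above: creating a flow record (idempotent on name) and reading it back by name. It assumes a local server at the default API URL; the flow name is a placeholder and the route paths are inferred from the function names.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server

with httpx.Client(base_url=API) as client:
    # Create (or fetch) a flow record by name; the route is idempotent on name.
    resp = client.post("/flows/", json={"name": "etl", "tags": ["example"]})
    resp.raise_for_status()
    flow = resp.json()

    # Read the same flow back by name.
    resp = client.get("/flows/name/etl")
    resp.raise_for_status()
    assert resp.json()["id"] == flow["id"]
```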
### `count_flows` ```python count_flows(flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, work_pools: schemas.filters.WorkPoolFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count flows. ### `read_flow_by_name` ```python read_flow_by_name(name: str = Path(..., description='The name of the flow'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.Flow ``` Get a flow by name. ### `read_flow` ```python read_flow(flow_id: UUID = Path(..., description='The flow id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.Flow ``` Get a flow by id. ### `read_flows` ```python read_flows(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, work_pools: schemas.filters.WorkPoolFilter = None, sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.Flow] ``` Query for flows. ### `delete_flow` ```python delete_flow(flow_id: UUID = Path(..., description='The flow id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a flow by id. ### `paginate_flows` ```python paginate_flows(limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> FlowPaginationResponse ``` Pagination query for flows. # logs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-logs # `prefect.server.api.logs` Routes for interacting with log objects. ## Functions ### `create_logs` ```python create_logs(logs: Sequence[LogCreate], db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Create new logs from the provided schema. For more information, see [https://docs.prefect.io/v3/develop/logging](https://docs.prefect.io/v3/develop/logging). ### `read_logs` ```python read_logs(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), logs: Optional[LogFilter] = None, sort: LogSort = Body(LogSort.TIMESTAMP_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[Log] ``` Query for logs. ### `stream_logs_out` ```python stream_logs_out(websocket: WebSocket) -> None ``` Serve a WebSocket to stream live logs # middleware Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-middleware # `prefect.server.api.middleware` ## Classes ### `CsrfMiddleware` Middleware for CSRF protection. This middleware will check for a CSRF token in the headers of any POST, PUT, PATCH, or DELETE request. If the token is not present or does not match the token stored in the database for the client, the request will be rejected with a 403 status code. 
**Methods:** #### `dispatch` ```python dispatch(self, request: Request, call_next: NextMiddlewareFunction) -> Response ``` Dispatch method for the middleware. This method will check for the presence of a CSRF token in the headers of the request and compare it to the token stored in the database for the client. If the token is not present or does not match, the request will be rejected with a 403 status code. # root Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-root # `prefect.server.api.root` Contains the `hello` route for testing and healthcheck purposes. ## Functions ### `hello` ```python hello() -> str ``` Say hello! ### `perform_readiness_check` ```python perform_readiness_check(db: PrefectDBInterface = Depends(provide_database_interface)) -> JSONResponse ``` # run_history Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-run_history # `prefect.server.api.run_history` Utilities for querying flow and task run history. ## Functions ### `run_history` ```python run_history(db: PrefectDBInterface, session: sa.orm.Session, run_type: Literal['flow_run', 'task_run'], history_start: DateTime, history_end: DateTime, history_interval: datetime.timedelta, flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_queues: Optional[schemas.filters.WorkQueueFilter] = None) -> list[schemas.responses.HistoryResponse] ``` Produce a history of runs aggregated by interval and state # saved_searches Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-saved_searches # `prefect.server.api.saved_searches` Routes for interacting with saved search objects. ## Functions ### `create_saved_search` ```python create_saved_search(saved_search: schemas.actions.SavedSearchCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.SavedSearch ``` Gracefully creates a new saved search from the provided schema. If a saved search with the same name already exists, the saved search's fields are replaced. ### `read_saved_search` ```python read_saved_search(saved_search_id: UUID = Path(..., description='The saved search id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.SavedSearch ``` Get a saved search by id. ### `read_saved_searches` ```python read_saved_searches(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.SavedSearch] ``` Query for saved searches. ### `delete_saved_search` ```python delete_saved_search(saved_search_id: UUID = Path(..., description='The saved search id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a saved search by id. # server Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-server # `prefect.server.api.server` Defines the Prefect REST API FastAPI app. ## Functions ### `validation_exception_handler` ```python validation_exception_handler(request: Request, exc: RequestValidationError) -> JSONResponse ``` Provide a detailed message for request validation errors. ### `integrity_exception_handler` ```python integrity_exception_handler(request: Request, exc: Exception) -> JSONResponse ``` Capture database integrity errors. 
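As a quick sanity check against the `root` routes above, the sketch below hits the hello and health endpoints on a local server at the default API URL; the exact response payloads may differ by version.

```python
import httpx

API = "http://127.0.0.1:4200/api"  # assumption: default local Prefect server

with httpx.Client(base_url=API) as client:
    # `hello` is a trivial liveness probe; `health` reports whether the API is up.
    print(client.get("/hello").json())
    print(client.get("/health").json())
```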
### `is_client_retryable_exception` ```python is_client_retryable_exception(exc: Exception) -> bool ``` ### `replace_placeholder_string_in_files` ```python replace_placeholder_string_in_files(directory: str, placeholder: str, replacement: str, allowed_extensions: list[str] | None = None) -> None ``` Recursively loops through all files in the given directory and replaces a placeholder string. ### `copy_directory` ```python copy_directory(directory: str, path: str) -> None ``` ### `custom_internal_exception_handler` ```python custom_internal_exception_handler(request: Request, exc: Exception) -> JSONResponse ``` Log a detailed exception for internal server errors before returning. Send 503 for errors clients can retry on. ### `prefect_object_not_found_exception_handler` ```python prefect_object_not_found_exception_handler(request: Request, exc: ObjectNotFoundError) -> JSONResponse ``` Return 404 status code on object not found exceptions. ### `create_api_app` ```python create_api_app(dependencies: list[Any] | None = None, health_check_path: str = '/health', version_check_path: str = '/version', fast_api_app_kwargs: dict[str, Any] | None = None, final: bool = False, ignore_cache: bool = False) -> FastAPI ``` Create a FastAPI app that includes the Prefect REST API **Args:** * `dependencies`: a list of global dependencies to add to each Prefect REST API router * `health_check_path`: the health check route path * `fast_api_app_kwargs`: kwargs to pass to the FastAPI constructor * `final`: whether this will be the last instance of the Prefect server to be created in this process, so that additional optimizations may be applied * `ignore_cache`: if set, a new app will be created even if the settings and fast\_api\_app\_kwargs match an existing app in the cache **Returns:** * a FastAPI app that serves the Prefect REST API ### `create_ui_app` ```python create_ui_app(ephemeral: bool) -> FastAPI ``` ### `create_app` ```python create_app(settings: Optional[prefect.settings.Settings] = None, ephemeral: bool = False, webserver_only: bool = False, final: bool = False, ignore_cache: bool = False) -> FastAPI ``` Create a FastAPI app that includes the Prefect REST API and UI **Args:** * `settings`: The settings to use to create the app. If not set, settings are pulled from the context. * `ephemeral`: If set, the application will be treated as ephemeral. The UI and services will be disabled. * `webserver_only`: If set, the webserver and UI will be available but all background services will be disabled. * `final`: whether this will be the last instance of the Prefect server to be created in this process, so that additional optimizations may be applied * `ignore_cache`: If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache. ## Classes ### `SPAStaticFiles` Implementation of `StaticFiles` for serving single page applications. Adds `get_response` handling to ensure that when a resource isn't found the application still returns the index. **Methods:** #### `get_response` ```python get_response(self, path: str, scope: Any) -> Response ``` ### `RequestLimitMiddleware` A middleware that limits the number of concurrent requests handled by the API. This is a blunt tool for limiting SQLite concurrent writes which will cause failures at high volume. Ideally, we would only apply the limit to routes that perform writes. 
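Since `create_app` is a public entry point documented above, a minimal sketch of serving it yourself looks like the following; `webserver_only=True` is used here just to skip background services, and the host and port are arbitrary.

```python
import uvicorn

from prefect.server.api.server import create_app

# Build the FastAPI app for the Prefect REST API and UI; background services
# are skipped because only the webserver is wanted here.
app = create_app(webserver_only=True)

if __name__ == "__main__":
    # Host and port are arbitrary; 4200 is the conventional local default.
    uvicorn.run(app, host="127.0.0.1", port=4200)
```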
### `SubprocessASGIServer` **Methods:** #### `address` ```python address(self) -> str ``` #### `api_url` ```python api_url(self) -> str ``` #### `find_available_port` ```python find_available_port(self) -> int ``` #### `is_port_available` ```python is_port_available(port: int) -> bool ``` #### `start` ```python start(self, timeout: Optional[int] = None) -> None ``` Start the server in a separate process. Safe to call multiple times; only starts the server once. **Args:** * `timeout`: The maximum time to wait for the server to start #### `stop` ```python stop(self) -> None ``` # task_run_states Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-task_run_states # `prefect.server.api.task_run_states` Routes for interacting with task run state objects. ## Functions ### `read_task_run_state` ```python read_task_run_state(task_run_state_id: UUID = Path(..., description='The task run state id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.states.State ``` Get a task run state by id. For more information, see [https://docs.prefect.io/v3/develop/write-tasks](https://docs.prefect.io/v3/develop/write-tasks). ### `read_task_run_states` ```python read_task_run_states(task_run_id: UUID, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.states.State] ``` Get states associated with a task run. # task_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-task_runs # `prefect.server.api.task_runs` Routes for interacting with task run objects. ## Functions ### `create_task_run` ```python create_task_run(task_run: schemas.actions.TaskRunCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_task_orchestration_parameters)) -> schemas.core.TaskRun ``` Create a task run. If a task run with the same flow\_run\_id, task\_key, and dynamic\_key already exists, the existing task run will be returned. If no state is provided, the task run will be created in a PENDING state. For more information, see [https://docs.prefect.io/v3/develop/write-tasks](https://docs.prefect.io/v3/develop/write-tasks). ### `update_task_run` ```python update_task_run(task_run: schemas.actions.TaskRunUpdate, task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates a task run. ### `count_task_runs` ```python count_task_runs(db: PrefectDBInterface = Depends(provide_database_interface), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None) -> int ``` Count task runs. ### `task_run_history` ```python task_run_history(history_start: DateTime = Body(..., description="The history's start time."), history_end: DateTime = Body(..., description="The history's end time."), history_interval: float = Body(..., description='The size of each history interval, in seconds. 
Must be at least 1 second.', json_schema_extra={'format': 'time-delta'}, alias='history_interval_seconds'), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.HistoryResponse] ``` Query for task run history data across a given range and interval. ### `read_task_run` ```python read_task_run(task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.TaskRun ``` Get a task run by id. ### `read_task_runs` ```python read_task_runs(sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.TaskRun] ``` Query for task runs. ### `paginate_task_runs` ```python paginate_task_runs(sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC), limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> TaskRunPaginationResponse ``` Pagination query for task runs. ### `delete_task_run` ```python delete_task_run(background_tasks: BackgroundTasks, task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a task run by id. ### `delete_task_run_logs` ```python delete_task_run_logs(db: PrefectDBInterface, task_run_id: UUID) -> None ``` ### `set_task_run_state` ```python set_task_run_state(task_run_id: UUID = Path(..., description='The task run id', alias='id'), state: schemas.actions.StateCreate = Body(..., description='The intended state.'), force: bool = Body(False, description='If false, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.'), db: PrefectDBInterface = Depends(provide_database_interface), response: Response = None, task_policy: TaskRunOrchestrationPolicy = Depends(orchestration_dependencies.provide_task_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_task_orchestration_parameters)) -> OrchestrationResult ``` Set a task run state, invoking any orchestration rules. ### `scheduled_task_subscription` ```python scheduled_task_subscription(websocket: WebSocket) -> None ``` # task_workers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-task_workers # `prefect.server.api.task_workers` ## Functions ### `read_task_workers` ```python read_task_workers(task_worker_filter: Optional[TaskWorkerFilter] = Body(default=None, description='The task worker filter', embed=True)) -> List[TaskWorkerResponse] ``` Read active task workers. Optionally filter by task keys. 
For more information, see [https://docs.prefect.io/v3/concepts/flows-and-tasks#background-tasks](https://docs.prefect.io/v3/concepts/flows-and-tasks#background-tasks). ## Classes ### `TaskWorkerFilter` # templates Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-templates # `prefect.server.api.templates` ## Functions ### `validate_template` ```python validate_template(template: str = Body(default='')) -> Response ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-ui-__init__ # `prefect.server.api.ui` Routes primarily for use by the UI # flow_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-ui-flow_runs # `prefect.server.api.ui.flow_runs` ## Functions ### `read_flow_run_history` ```python read_flow_run_history(sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.EXPECTED_START_TIME_DESC), limit: int = Body(1000, le=1000), offset: int = Body(0, ge=0), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, work_pools: schemas.filters.WorkPoolFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[SimpleFlowRun] ``` ### `count_task_runs_by_flow_run` ```python count_task_runs_by_flow_run(flow_run_ids: list[UUID] = Body(default=..., embed=True, max_items=200), db: PrefectDBInterface = Depends(provide_database_interface)) -> dict[UUID, int] ``` Get task run counts by flow run id. ## Classes ### `SimpleFlowRun` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # flows Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-ui-flows # `prefect.server.api.ui.flows` ## Functions ### `count_deployments_by_flow` ```python count_deployments_by_flow(flow_ids: List[UUID] = Body(default=..., embed=True, max_items=200), db: PrefectDBInterface = Depends(provide_database_interface)) -> Dict[UUID, int] ``` Get deployment counts by flow id. ### `next_runs_by_flow` ```python next_runs_by_flow(flow_ids: List[UUID] = Body(default=..., embed=True, max_items=200), db: PrefectDBInterface = Depends(provide_database_interface)) -> Dict[UUID, Optional[SimpleNextFlowRun]] ``` Get the next flow run by flow id. ## Classes ### `SimpleNextFlowRun` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
#### `validate_next_scheduled_start_time` ```python validate_next_scheduled_start_time(cls, v: DateTime | datetime) -> DateTime ``` # schemas Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-ui-schemas # `prefect.server.api.ui.schemas` ## Functions ### `validate_obj` ```python validate_obj(json_schema: dict[str, Any] = Body(..., embed=True, alias='schema', validation_alias='schema', json_schema_extra={'additionalProperties': True}), values: dict[str, Any] = Body(..., embed=True, json_schema_extra={'additionalProperties': True}), db: PrefectDBInterface = Depends(provide_database_interface)) -> SchemaValuesValidationResponse ``` # task_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-ui-task_runs # `prefect.server.api.ui.task_runs` ## Functions ### `read_dashboard_task_run_counts` ```python read_dashboard_task_run_counts(task_runs: schemas.filters.TaskRunFilter, flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[TaskRunCount] ``` ### `read_task_run_counts_by_state` ```python read_task_run_counts_by_state(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.states.CountByState ``` ### `read_task_run_with_flow_run_name` ```python read_task_run_with_flow_run_name(task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.ui.UITaskRun ``` Get a task run by id. ## Classes ### `TaskRunCount` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `ser_model` ```python ser_model(self) -> dict[str, int] ``` # validation Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-validation # `prefect.server.api.validation` This module contains functions for validating job variables for deployments, work pools, flow runs, and RunDeployment actions. These functions are used to validate that job variables provided by users conform to the JSON schema defined in the work pool's base job template. Note some important details: 1. The order of applying job variables is: work pool's base job template, deployment, flow run. This means that flow run job variables override deployment job variables, which override work pool job variables. 2. The validation of job variables for work pools and deployments ignores required keys in the schema because we don't know if the full set of overrides will include values for any required fields. 3. Work pools can include default values for job variables. These can be normal types or references to blocks. We have not been validating these values or whether default blocks satisfy job variable JSON schemas.
To avoid failing validation for existing (otherwise working) data, we ignore invalid defaults when validating deployment and flow run variables, but not when validating the work pool's base template, e.g. during work pool creation or updates. If we find defaults that are invalid, we have to ignore required fields when we run the full validation. 4. A flow run is the terminal point for job variables, so it is the only place where we validate required variables and default values. Thus, `validate_job_variables_for_deployment_flow_run` and `validate_job_variables_for_run_deployment_action` check for required fields. 5. We have been using Pydantic v1 to generate work pool base job templates, and it produces invalid JSON schemas for some fields, e.g. tuples and optional fields. We try to fix these schemas on the fly while validating job variables, but there is one case we can't resolve: whether an optional field supports a None value. In this case, we allow None values to be passed in, which means that if an optional field does not actually allow None values, the Pydantic model will fail to validate at runtime. ## Functions ### `validate_job_variables_for_deployment_flow_run` ```python validate_job_variables_for_deployment_flow_run(session: AsyncSession, deployment: BaseDeployment, flow_run: FlowRunAction) -> None ``` Validate job variables for a flow run created for a deployment. Flow runs are the terminal point for job variable overlays, so we validate required job variables because all variables should now be present. ### `validate_job_variables_for_deployment` ```python validate_job_variables_for_deployment(session: AsyncSession, work_pool: WorkPool, deployment: DeploymentAction) -> None ``` Validate job variables for deployment creation and updates. This validation applies only to deployments that have a work pool. If the deployment does not have a work pool, we cannot validate job variables because we don't have a base job template to validate against, so we skip this validation. Unlike validations for flow runs, validation here ignores required keys in the schema because we don't know if the full set of overrides will include values for any required fields. If, when the flow runs, the full set of job variables (including the deployment's and flow run's overrides) fails to specify a value for a required key, that's an error. ### `validate_job_variable_defaults_for_work_pool` ```python validate_job_variable_defaults_for_work_pool(session: AsyncSession, work_pool_name: str, base_job_template: Dict[str, Any]) -> None ``` Validate the default job variables for a work pool. This validation checks that default values for job variables match the JSON schema defined in the work pool's base job template. It also resolves references to block documents in the default values and hydrates them to perform the validation. Unlike validations for flow runs, validation here ignores required keys in the schema because we're only concerned with default values. The absence of a default for a required field is not an error, but if, when the flow runs, the full set of job variables (including the deployment's and flow run's overrides) fails to specify a value for that field, that's an error. NOTE: This will raise an HTTP 404 error if a referenced block document does not exist.
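The layering described in the module notes above can be made concrete with a small sketch. This is illustrative only; the variable names and values are hypothetical and not part of the API:

```python
# Illustrative sketch of job variable precedence (hypothetical values):
# work pool defaults are overridden by deployment overrides, which are
# overridden by flow run overrides.
work_pool_defaults = {"image": "prefecthq/prefect:3-latest", "cpu": 1}
deployment_overrides = {"image": "my-registry/my-image:latest"}
flow_run_overrides = {"cpu": 4}

effective_job_variables = {
    **work_pool_defaults,
    **deployment_overrides,
    **flow_run_overrides,
}
# -> {'image': 'my-registry/my-image:latest', 'cpu': 4}
```

Because the flow run is the last layer applied, only the flow run validators above check that every required key ends up with a value.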
### `validate_job_variables_for_run_deployment_action` ```python validate_job_variables_for_run_deployment_action(session: AsyncSession, run_action: RunDeployment) -> None ``` Validate the job variables for a RunDeployment action. This action is equivalent to creating a flow run for a deployment, so we validate required job variables because all variables should now be present. # variables Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-variables # `prefect.server.api.variables` Routes for interacting with variable objects ## Functions ### `get_variable_or_404` ```python get_variable_or_404(session: AsyncSession, variable_id: UUID) -> orm_models.Variable ``` Returns a variable or raises 404 HTTPException if it does not exist ### `get_variable_by_name_or_404` ```python get_variable_by_name_or_404(session: AsyncSession, name: str) -> orm_models.Variable ``` Returns a variable or raises 404 HTTPException if it does not exist ### `create_variable` ```python create_variable(variable: actions.VariableCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Variable ``` Create a variable. For more information, see [https://docs.prefect.io/v3/develop/variables](https://docs.prefect.io/v3/develop/variables). ### `read_variable` ```python read_variable(variable_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Variable ``` ### `read_variable_by_name` ```python read_variable_by_name(name: str = Path(...), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Variable ``` ### `read_variables` ```python read_variables(limit: int = LimitBody(), offset: int = Body(0, ge=0), variables: Optional[filters.VariableFilter] = None, sort: sorting.VariableSort = Body(sorting.VariableSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[core.Variable] ``` ### `count_variables` ```python count_variables(variables: Optional[filters.VariableFilter] = Body(None, embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` ### `update_variable` ```python update_variable(variable: actions.VariableUpdate, variable_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `update_variable_by_name` ```python update_variable_by_name(variable: actions.VariableUpdate, name: str = Path(..., alias='name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_variable` ```python delete_variable(variable_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_variable_by_name` ```python delete_variable_by_name(name: str = Path(...), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # work_queues Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-work_queues # `prefect.server.api.work_queues` Routes for interacting with work queue objects. ## Functions ### `create_work_queue` ```python create_work_queue(work_queue: schemas.actions.WorkQueueCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Creates a new work queue. If a work queue with the same name already exists, an error will be raised. For more information, see [https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#work-queues](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#work-queues). 
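As a rough sketch of how the create route might be exercised outside the UI, here is a plain HTTP call with `httpx`. The base URL assumes a locally running Prefect server with the default `/api` prefix; treat the address and route path as assumptions and adjust for your deployment:

```python
# Hedged sketch: create a work queue by POSTing to the work queues router.
# The server address and route prefix below are assumptions, not part of
# this module's definition.
import httpx

response = httpx.post(
    "http://127.0.0.1:4200/api/work_queues/",
    json={"name": "example-queue"},  # WorkQueueCreate requires a name
)
response.raise_for_status()
print(response.json()["id"])  # id of the newly created work queue
```

If a work queue named `example-queue` already exists, the server responds with an error rather than creating a duplicate, as described above.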
### `update_work_queue` ```python update_work_queue(work_queue: schemas.actions.WorkQueueUpdate, work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates an existing work queue. ### `read_work_queue_by_name` ```python read_work_queue_by_name(name: str = Path(..., description='The work queue name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Get a work queue by name. ### `read_work_queue` ```python read_work_queue(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Get a work queue by id. ### `read_work_queue_runs` ```python read_work_queue_runs(background_tasks: BackgroundTasks, work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), limit: int = dependencies.LimitBody(), scheduled_before: DateTime = Body(None, description='Only flow runs scheduled to start before this time will be returned.'), x_prefect_ui: Optional[bool] = Header(default=False, description='A header to indicate this request came from the Prefect UI.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.FlowRunResponse] ``` Get flow runs from the work queue. ### `read_work_queues` ```python read_work_queues(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), work_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkQueueResponse] ``` Query for work queues. ### `delete_work_queue` ```python delete_work_queue(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work queue by id. ### `read_work_queue_status` ```python read_work_queue_status(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.WorkQueueStatusDetail ``` Get the status of a work queue. # workers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-api-workers # `prefect.server.api.workers` Routes for interacting with work pool and worker objects. ## Functions ### `create_work_pool` ```python create_work_pool(work_pool: schemas.actions.WorkPoolCreate, db: PrefectDBInterface = Depends(provide_database_interface), prefect_client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> schemas.core.WorkPool ``` Creates a new work pool. If a work pool with the same name already exists, an error will be raised. For more information, see [https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools).
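In practice, most callers reach this route through the Prefect Python client rather than hitting it directly. A minimal sketch, assuming a reachable API and using the client-side `WorkPoolCreate` schema:

```python
# Hedged sketch: create a "process" work pool via the Python client.
# Assumes the client can reach a Prefect API with the default settings.
import asyncio

from prefect.client.orchestration import get_client
from prefect.client.schemas.actions import WorkPoolCreate


async def main() -> None:
    async with get_client() as client:
        work_pool = await client.create_work_pool(
            work_pool=WorkPoolCreate(name="example-pool", type="process")
        )
        print(work_pool.id)


asyncio.run(main())
```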
### `read_work_pool` ```python read_work_pool(work_pool_name: str = Path(..., description='The work pool name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface), prefect_client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> schemas.core.WorkPool ``` Read a work pool by name ### `read_work_pools` ```python read_work_pools(work_pools: Optional[schemas.filters.WorkPoolFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface), prefect_client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> List[schemas.core.WorkPool] ``` Read multiple work pools ### `count_work_pools` ```python count_work_pools(work_pools: Optional[schemas.filters.WorkPoolFilter] = Body(None, embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count work pools ### `update_work_pool` ```python update_work_pool(work_pool: schemas.actions.WorkPoolUpdate, work_pool_name: str = Path(..., description='The work pool name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update a work pool ### `delete_work_pool` ```python delete_work_pool(work_pool_name: str = Path(..., description='The work pool name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work pool ### `get_scheduled_flow_runs` ```python get_scheduled_flow_runs(background_tasks: BackgroundTasks, work_pool_name: str = Path(..., description='The work pool name', alias='name'), work_queue_names: List[str] = Body(None, description='The names of work pool queues'), scheduled_before: DateTime = Body(None, description='The maximum time to look for scheduled flow runs'), scheduled_after: DateTime = Body(None, description='The minimum time to look for scheduled flow runs'), limit: int = dependencies.LimitBody(), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkerFlowRunResponse] ``` Load scheduled runs for a worker ### `create_work_queue` ```python create_work_queue(work_queue: schemas.actions.WorkQueueCreate, work_pool_name: str = Path(..., description='The work pool name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Creates a new work pool queue. If a work pool queue with the same name already exists, an error will be raised. For more information, see [https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#work-queues](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#work-queues). 
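The pool-scoped queue routes can be sketched the same way as the top-level work queue routes. The path below assumes the default router layout (`/api/work_pools/{work_pool_name}/queues`), which is not spelled out in this reference, so treat it as an assumption:

```python
# Hedged sketch: create a queue inside an existing work pool over HTTP.
# The URL and route layout are assumptions; the payload mirrors WorkQueueCreate.
import httpx

response = httpx.post(
    "http://127.0.0.1:4200/api/work_pools/example-pool/queues",
    json={"name": "high-priority"},
)
response.raise_for_status()
print(response.json()["name"])
```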
### `read_work_queue` ```python read_work_queue(work_pool_name: str = Path(..., description='The work pool name'), work_queue_name: str = Path(..., description='The work pool queue name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Read a work pool queue ### `read_work_queues` ```python read_work_queues(work_pool_name: str = Path(..., description='The work pool name'), work_queues: schemas.filters.WorkQueueFilter = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkQueueResponse] ``` Read all work pool queues ### `update_work_queue` ```python update_work_queue(work_queue: schemas.actions.WorkQueueUpdate, work_pool_name: str = Path(..., description='The work pool name'), work_queue_name: str = Path(..., description='The work pool queue name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update a work pool queue ### `delete_work_queue` ```python delete_work_queue(work_pool_name: str = Path(..., description='The work pool name'), work_queue_name: str = Path(..., description='The work pool queue name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work pool queue ### `worker_heartbeat` ```python worker_heartbeat(work_pool_name: str = Path(..., description='The work pool name'), name: str = Body(..., description='The worker process name', embed=True), heartbeat_interval_seconds: Optional[int] = Body(None, description="The worker's heartbeat interval in seconds", embed=True), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_workers` ```python read_workers(work_pool_name: str = Path(..., description='The work pool name'), workers: Optional[schemas.filters.WorkerFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkerResponse] ``` Read all worker processes ### `delete_worker` ```python delete_worker(work_pool_name: str = Path(..., description='The work pool name'), worker_name: str = Path(..., description="The work pool's worker name", alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work pool's worker ## Classes ### `WorkerLookups` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-__init__ # `prefect.server.database` *This module is empty or contains only private/internal implementations.* # alembic_commands Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-alembic_commands # `prefect.server.database.alembic_commands` ## Functions ### `with_alembic_lock` ```python with_alembic_lock(fn: Callable[P, R]) -> Callable[P, R] ``` Decorator that prevents alembic commands from running concurrently. This is necessary because alembic uses a global configuration object that is not thread-safe. 
This issue occurred in [https://github.com/PrefectHQ/prefect-dask/pull/50](https://github.com/PrefectHQ/prefect-dask/pull/50), where dask threads simultaneously performing alembic upgrades caused a cryptic `KeyError: 'config'` during `del globals_[attr_name]`. ### `alembic_config` ```python alembic_config() -> 'Config' ``` ### `alembic_upgrade` ```python alembic_upgrade(revision: str = 'head', dry_run: bool = False) -> None ``` Run alembic upgrades on the Prefect REST API database **Args:** * `revision`: The revision passed to `alembic upgrade`. Defaults to 'head', upgrading all revisions. * `dry_run`: Show what migrations would be made without applying them. Will emit sql statements to stdout. ### `alembic_downgrade` ```python alembic_downgrade(revision: str = '-1', dry_run: bool = False) -> None ``` Run alembic downgrades on the Prefect REST API database **Args:** * `revision`: The revision passed to `alembic downgrade`. Defaults to '-1', downgrading one revision. * `dry_run`: Show what migrations would be made without applying them. Will emit sql statements to stdout. ### `alembic_revision` ```python alembic_revision(message: Optional[str] = None, autogenerate: bool = False, **kwargs: Any) -> None ``` Create a new revision file for the database. **Args:** * `message`: string message to apply to the revision. * `autogenerate`: whether or not to autogenerate the script from the database. ### `alembic_stamp` ```python alembic_stamp(revision: Union[str, list[str], tuple[str, ...]]) -> None ``` Stamp the revision table with the given revision; don't run any migrations **Args:** * `revision`: The revision passed to `alembic stamp`. # configurations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-configurations # `prefect.server.database.configurations` ## Classes ### `ConnectionTracker` A test utility which tracks the connections given out by a connection pool, to make it easy to see which connections are currently checked out and open. **Methods:** #### `clear` ```python clear(self) -> None ``` #### `on_close` ```python on_close(self, adapted_connection: AdaptedConnection, connection_record: ConnectionPoolEntry) -> None ``` #### `on_close_detached` ```python on_close_detached(self, adapted_connection: AdaptedConnection) -> None ``` #### `on_connect` ```python on_connect(self, adapted_connection: AdaptedConnection, connection_record: ConnectionPoolEntry) -> None ``` #### `track_pool` ```python track_pool(self, pool: sa.pool.Pool) -> None ``` ### `BaseDatabaseConfiguration` Abstract base class used to inject database connection configuration into Prefect. This configuration is responsible for defining how Prefect REST API creates and manages database connections and sessions. **Methods:** #### `begin_transaction` ```python begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AbstractAsyncContextManager[AsyncSessionTransaction] ``` Enter a transaction for a session #### `create_db` ```python create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `drop_db` ```python drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `engine` ```python engine(self) -> AsyncEngine ``` Returns a SqlAlchemy engine #### `is_inmemory` ```python is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `session` ```python session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine.
#### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. ### `AsyncPostgresConfiguration` **Methods:** #### `begin_transaction` ```python begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AsyncGenerator[AsyncSessionTransaction, None] ``` #### `begin_transaction` ```python begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AbstractAsyncContextManager[AsyncSessionTransaction] ``` Enter a transaction for a session #### `create_db` ```python create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `create_db` ```python create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `drop_db` ```python drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `drop_db` ```python drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `engine` ```python engine(self) -> AsyncEngine ``` Retrieves an async SQLAlchemy engine. **Args:** * `connection_url`: The database connection string. Defaults to self.connection\_url * `echo`: Whether to echo SQL sent to the database. Defaults to self.echo * `timeout`: The database statement timeout, in seconds. Defaults to self.timeout **Returns:** * a SQLAlchemy engine #### `engine` ```python engine(self) -> AsyncEngine ``` Returns a SqlAlchemy engine #### `is_inmemory` ```python is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `is_inmemory` ```python is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `schedule_engine_disposal` ```python schedule_engine_disposal(self, cache_key: _EngineCacheKey) -> None ``` Dispose of an engine once the event loop is closing. See caveats at `add_event_loop_shutdown_callback`. We attempted to lazily clean up old engines when new engines are created, but if the loop the engine is attached to is already closed then the connections cannot be cleaned up properly and warnings are displayed. Engine disposal should only be important when running the application ephemerally. Notably, this is an issue in our tests where many short-lived event loops and engines are created which can consume all of the available database connection slots. Users operating at a scale where connection limits are encountered should be encouraged to use a standalone server. #### `session` ```python session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. **Args:** * `engine`: a sqlalchemy engine #### `session` ```python session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
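As a rough illustration of how a configuration object is used, the sketch below constructs an `AsyncPostgresConfiguration` directly and opens a session. It assumes the constructor accepts a `connection_url` and that `engine` and `session` are coroutine methods, as their async return types above suggest; the connection string is a placeholder:

```python
# Hedged sketch: obtain an engine and session from a Postgres configuration.
# The connection URL is a placeholder, and the coroutine form of engine()
# and session() is an assumption based on their async return types.
import asyncio

from prefect.server.database.configurations import AsyncPostgresConfiguration


async def main() -> None:
    config = AsyncPostgresConfiguration(
        connection_url="postgresql+asyncpg://user:pass@localhost:5432/prefect"
    )
    engine = await config.engine()
    session = await config.session(engine)
    async with session:
        async with config.begin_transaction(session):
            ...  # run queries against the session here


asyncio.run(main())
```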
### `AioSqliteConfiguration` **Methods:** #### `begin_sqlite_conn` ```python begin_sqlite_conn(self, conn: aiosqlite.AsyncAdapt_aiosqlite_connection) -> None ``` #### `begin_sqlite_stmt` ```python begin_sqlite_stmt(self, conn: sa.Connection) -> None ``` #### `begin_transaction` ```python begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AsyncGenerator[AsyncSessionTransaction, None] ``` #### `begin_transaction` ```python begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AbstractAsyncContextManager[AsyncSessionTransaction] ``` Enter a transaction for a session #### `create_db` ```python create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `create_db` ```python create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `drop_db` ```python drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `drop_db` ```python drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `engine` ```python engine(self) -> AsyncEngine ``` Retrieves an async SQLAlchemy engine. **Args:** * `connection_url`: The database connection string. Defaults to self.connection\_url * `echo`: Whether to echo SQL sent to the database. Defaults to self.echo * `timeout`: The database statement timeout, in seconds. Defaults to self.timeout **Returns:** * a SQLAlchemy engine #### `engine` ```python engine(self) -> AsyncEngine ``` Returns a SqlAlchemy engine #### `is_inmemory` ```python is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `is_inmemory` ```python is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `schedule_engine_disposal` ```python schedule_engine_disposal(self, cache_key: _EngineCacheKey) -> None ``` Dispose of an engine once the event loop is closing. See caveats at `add_event_loop_shutdown_callback`. We attempted to lazily clean up old engines when new engines are created, but if the loop the engine is attached to is already closed then the connections cannot be cleaned up properly and warnings are displayed. Engine disposal should only be important when running the application ephemerally. Notably, this is an issue in our tests where many short-lived event loops and engines are created which can consume all of the available database connection slots. Users operating at a scale where connection limits are encountered should be encouraged to use a standalone server. #### `session` ```python session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. **Args:** * `engine`: a sqlalchemy engine #### `session` ```python session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. #### `setup_sqlite` ```python setup_sqlite(self, conn: DBAPIConnection, record: ConnectionPoolEntry) -> None ``` Issue PRAGMA statements to SQLITE on connect. PRAGMAs only last for the duration of the connection. See [https://www.sqlite.org/pragma.html](https://www.sqlite.org/pragma.html) for more info. #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
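Most server code does not construct these configuration classes directly; it asks for a fully wired interface through the dependency helpers documented in the sections that follow and then opens sessions from it. A minimal sketch of that pattern, assuming default settings (SQLite, already migrated) are in place:

```python
# Minimal sketch: get the configured database interface and open a session.
# Uses provide_database_interface and session_context, both documented below.
import asyncio

import sqlalchemy as sa

from prefect.server.database.dependencies import provide_database_interface


async def main() -> None:
    db = provide_database_interface()
    async with db.session_context(begin_transaction=True) as session:
        result = await session.execute(sa.select(db.Flow).limit(5))
        for flow in result.scalars():
            print(flow.name)


asyncio.run(main())
```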
# dependencies Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-dependencies # `prefect.server.database.dependencies` Injected database interface dependencies ## Functions ### `provide_database_interface` ```python provide_database_interface() -> 'PrefectDBInterface' ``` Get the current Prefect REST API database interface. If components of the interface are not set, defaults will be inferred based on the dialect of the connection URL. ### `inject_db` ```python inject_db(fn: Callable[P, R]) -> Callable[P, R] ``` Decorator that provides a database interface to a function. The decorated function *must* take a `db` kwarg and if a db is passed when called it will be used instead of creating a new one. ### `db_injector` ```python db_injector(func: Union[_DBMethod[T, P, R], _DBFunction[P, R]]) -> Union[_Method[T, P, R], _Function[P, R]] ``` Decorator to inject a PrefectDBInterface instance as the first positional argument to the decorated function. Unlike `inject_db`, which injects the database connection as a keyword argument, `db_injector` adds it explicitly as the first positional argument. This change enhances type hinting by making the dependency on PrefectDBInterface explicit in the function signature. When decorating a coroutine function, the result will continue to pass the iscoroutinefunction() test. **Args:** * `func`: The function or method to decorate. **Returns:** * A wrapped descriptor object which injects the PrefectDBInterface instance * as the first argument to the function or method. This handles method * binding transparently. ### `temporary_database_config` ```python temporary_database_config(tmp_database_config: Optional[BaseDatabaseConfiguration]) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API database configuration. When the context is closed, the existing database configuration will be restored. **Args:** * `tmp_database_config`: Prefect REST API database configuration to inject. ### `temporary_query_components` ```python temporary_query_components(tmp_queries: Optional['BaseQueryComponents']) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API database query components. When the context is closed, the existing query components will be restored. **Args:** * `tmp_queries`: Prefect REST API query components to inject. ### `temporary_orm_config` ```python temporary_orm_config(tmp_orm_config: Optional['BaseORMConfiguration']) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API ORM configuration. When the context is closed, the existing orm configuration will be restored. **Args:** * `tmp_orm_config`: Prefect REST API ORM configuration to inject. ### `temporary_interface_class` ```python temporary_interface_class(tmp_interface_class: Optional[type['PrefectDBInterface']]) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API interface class When the context is closed, the existing interface will be restored. **Args:** * `tmp_interface_class`: Prefect REST API interface class to inject. ### `temporary_database_interface` ```python temporary_database_interface(tmp_database_config: Optional[BaseDatabaseConfiguration] = None, tmp_queries: Optional['BaseQueryComponents'] = None, tmp_orm_config: Optional['BaseORMConfiguration'] = None, tmp_interface_class: Optional[type['PrefectDBInterface']] = None) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API database interface. 
Any interface components that are not explicitly provided will be cleared and inferred from the Prefect REST API database connection string dialect. When the context is closed, the existing database interface will be restored. **Args:** * `tmp_database_config`: An optional Prefect REST API database configuration to inject. * `tmp_orm_config`: An optional Prefect REST API ORM configuration to inject. * `tmp_queries`: Optional Prefect REST API query components to inject. * `tmp_interface_class`: Optional database interface class to inject. ### `set_database_config` ```python set_database_config(database_config: Optional[BaseDatabaseConfiguration]) -> None ``` Set Prefect REST API database configuration. ### `set_query_components` ```python set_query_components(query_components: Optional['BaseQueryComponents']) -> None ``` Set Prefect REST API query components. ### `set_orm_config` ```python set_orm_config(orm_config: Optional['BaseORMConfiguration']) -> None ``` Set Prefect REST API ORM configuration. ### `set_interface_class` ```python set_interface_class(interface_class: Optional[type['PrefectDBInterface']]) -> None ``` Set Prefect REST API interface class. ## Classes ### `DBInjector` # interface Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-interface # `prefect.server.database.interface` ## Classes ### `DBSingleton` Ensures that only one database interface is created per unique key ### `PrefectDBInterface` An interface for backend-specific SqlAlchemy actions and ORM models. The REST API can be configured to run against different databases in order to maintain performance at different scales. This interface integrates database- and dialect-specific configuration into a unified interface that the orchestration engine runs against.
**Methods:** #### `Agent` ```python Agent(self) -> type[orm_models.Agent] ``` An agent model #### `Artifact` ```python Artifact(self) -> type[orm_models.Artifact] ``` An artifact orm model #### `ArtifactCollection` ```python ArtifactCollection(self) -> type[orm_models.ArtifactCollection] ``` An artifact collection orm model #### `Automation` ```python Automation(self) -> type[orm_models.Automation] ``` An automation model #### `AutomationBucket` ```python AutomationBucket(self) -> type[orm_models.AutomationBucket] ``` An automation bucket model #### `AutomationEventFollower` ```python AutomationEventFollower(self) -> type[orm_models.AutomationEventFollower] ``` A model capturing one event following another event #### `AutomationRelatedResource` ```python AutomationRelatedResource(self) -> type[orm_models.AutomationRelatedResource] ``` An automation related resource model #### `Base` ```python Base(self) -> type[orm_models.Base] ``` Base class for orm models #### `BlockDocument` ```python BlockDocument(self) -> type[orm_models.BlockDocument] ``` A block document model #### `BlockDocumentReference` ```python BlockDocumentReference(self) -> type[orm_models.BlockDocumentReference] ``` A block document reference model #### `BlockSchema` ```python BlockSchema(self) -> type[orm_models.BlockSchema] ``` A block schema model #### `BlockSchemaReference` ```python BlockSchemaReference(self) -> type[orm_models.BlockSchemaReference] ``` A block schema reference model #### `BlockType` ```python BlockType(self) -> type[orm_models.BlockType] ``` A block type model #### `CompositeTriggerChildFiring` ```python CompositeTriggerChildFiring(self) -> type[orm_models.CompositeTriggerChildFiring] ``` A model capturing a composite trigger's child firing #### `ConcurrencyLimit` ```python ConcurrencyLimit(self) -> type[orm_models.ConcurrencyLimit] ``` A concurrency model #### `ConcurrencyLimitV2` ```python ConcurrencyLimitV2(self) -> type[orm_models.ConcurrencyLimitV2] ``` A v2 concurrency model #### `Configuration` ```python Configuration(self) -> type[orm_models.Configuration] ``` A configuration model #### `CsrfToken` ```python CsrfToken(self) -> type[orm_models.CsrfToken] ``` A csrf token model #### `Deployment` ```python Deployment(self) -> type[orm_models.Deployment] ``` A deployment orm model #### `DeploymentSchedule` ```python DeploymentSchedule(self) -> type[orm_models.DeploymentSchedule] ``` A deployment schedule orm model #### `Event` ```python Event(self) -> type[orm_models.Event] ``` An event model #### `EventResource` ```python EventResource(self) -> type[orm_models.EventResource] ``` An event resource model #### `Flow` ```python Flow(self) -> type[orm_models.Flow] ``` A flow orm model #### `FlowRun` ```python FlowRun(self) -> type[orm_models.FlowRun] ``` A flow run orm model #### `FlowRunInput` ```python FlowRunInput(self) -> type[orm_models.FlowRunInput] ``` A flow run input model #### `FlowRunState` ```python FlowRunState(self) -> type[orm_models.FlowRunState] ``` A flow run state orm model #### `Log` ```python Log(self) -> type[orm_models.Log] ``` A log orm model #### `SavedSearch` ```python SavedSearch(self) -> type[orm_models.SavedSearch] ``` A saved search orm model #### `TaskRun` ```python TaskRun(self) -> type[orm_models.TaskRun] ``` A task run orm model #### `TaskRunState` ```python TaskRunState(self) -> type[orm_models.TaskRunState] ``` A task run state orm model #### `TaskRunStateCache` ```python TaskRunStateCache(self) -> type[orm_models.TaskRunStateCache] ``` A task run state cache orm
model #### `Variable` ```python Variable(self) -> type[orm_models.Variable] ``` A variable model #### `WorkPool` ```python WorkPool(self) -> type[orm_models.WorkPool] ``` A work pool orm model #### `WorkQueue` ```python WorkQueue(self) -> type[orm_models.WorkQueue] ``` A work queue model #### `Worker` ```python Worker(self) -> type[orm_models.Worker] ``` A worker process orm model #### `create_db` ```python create_db(self) -> None ``` Create the database #### `dialect` ```python dialect(self) -> type[sa.engine.Dialect] ``` #### `drop_db` ```python drop_db(self) -> None ``` Drop the database #### `engine` ```python engine(self) -> AsyncEngine ``` Provides a SqlAlchemy engine against a specific database. #### `is_db_connectable` ```python is_db_connectable(self) -> bool ``` Returns boolean indicating if the database is connectable. This method is used to determine if the server is ready to accept requests. #### `run_migrations_downgrade` ```python run_migrations_downgrade(self, revision: str = '-1') -> None ``` Run all downgrade migrations #### `run_migrations_upgrade` ```python run_migrations_upgrade(self) -> None ``` Run all upgrade migrations #### `session` ```python session(self) -> AsyncSession ``` Provides a SQLAlchemy session. #### `session_context` ```python session_context(self, begin_transaction: bool = False, with_for_update: bool = False) ``` Provides a SQLAlchemy session and a context manager for opening/closing the underlying connection. **Args:** * `begin_transaction`: if True, the context manager will begin a SQL transaction. Exiting the context manager will COMMIT or ROLLBACK any changes. # orm_models Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-orm_models # `prefect.server.database.orm_models` ## Classes ### `Base` Base SQLAlchemy model that automatically infers the table name and provides ID, created, and updated columns ### `Flow` SQLAlchemy mixin of a flow. ### `FlowRunState` SQLAlchemy mixin of a flow run state. **Methods:** #### `as_state` ```python as_state(self) -> schemas.states.State ``` #### `data` ```python data(self) -> Optional[Any] ``` ### `TaskRunState` SQLAlchemy model of a task run state. **Methods:** #### `as_state` ```python as_state(self) -> schemas.states.State ``` #### `data` ```python data(self) -> Optional[Any] ``` ### `Artifact` SQLAlchemy model of artifacts. ### `ArtifactCollection` ### `TaskRunStateCache` SQLAlchemy model of a task run state cache. ### `Run` Common columns and logic for FlowRun and TaskRun models **Methods:** #### `estimated_run_time` ```python estimated_run_time(self) -> datetime.timedelta ``` Total run time is incremented in the database whenever a RUNNING state is exited. To give up-to-date estimates, we estimate incremental run time for any runs currently in a RUNNING state. #### `estimated_start_time_delta` ```python estimated_start_time_delta(self) -> datetime.timedelta ``` The delta to the expected start time (or "lateness") is computed as the difference between the actual start time and expected start time. To give up-to-date estimates, we estimate lateness for any runs that don't have a start time and are not in a final state and were expected to start already. ### `FlowRun` SQLAlchemy model of a flow run. **Methods:** #### `estimated_run_time` ```python estimated_run_time(self) -> datetime.timedelta ``` Total run time is incremented in the database whenever a RUNNING state is exited. To give up-to-date estimates, we estimate incremental run time for any runs currently in a RUNNING state. 
#### `estimated_start_time_delta` ```python estimated_start_time_delta(self) -> datetime.timedelta ``` The delta to the expected start time (or "lateness") is computed as the difference between the actual start time and expected start time. To give up-to-date estimates, we estimate lateness for any runs that don't have a start time and are not in a final state and were expected to start already. #### `set_state` ```python set_state(self, state: Optional[FlowRunState]) -> None ``` If a state is assigned to this run, populate its run id. This would normally be handled by the back-populated SQLAlchemy relationship, but because this is a one-to-one pointer to a one-to-many relationship, SQLAlchemy can't figure it out. #### `state` ```python state(self) -> Optional[FlowRunState] ``` ### `TaskRun` SQLAlchemy model of a task run. **Methods:** #### `estimated_run_time` ```python estimated_run_time(self) -> datetime.timedelta ``` Total run time is incremented in the database whenever a RUNNING state is exited. To give up-to-date estimates, we estimate incremental run time for any runs currently in a RUNNING state. #### `estimated_start_time_delta` ```python estimated_start_time_delta(self) -> datetime.timedelta ``` The delta to the expected start time (or "lateness") is computed as the difference between the actual start time and expected start time. To give up-to-date estimates, we estimate lateness for any runs that don't have a start time and are not in a final state and were expected to start already. #### `set_state` ```python set_state(self, state: Optional[TaskRunState]) -> None ``` If a state is assigned to this run, populate its run id. This would normally be handled by the back-populated SQLAlchemy relationship, but because this is a one-to-one pointer to a one-to-many relationship, SQLAlchemy can't figure it out. #### `state` ```python state(self) -> Optional[TaskRunState] ``` ### `DeploymentSchedule` ### `Deployment` SQLAlchemy model of a deployment. **Methods:** #### `job_variables` ```python job_variables(self) -> Mapped[dict[str, Any]] ``` ### `Log` SQLAlchemy model of a logging statement. ### `ConcurrencyLimit` ### `ConcurrencyLimitV2` ### `BlockType` ### `BlockSchema` ### `BlockSchemaReference` ### `BlockDocument` **Methods:** #### `decrypt_data` ```python decrypt_data(self, session: AsyncSession) -> dict[str, Any] ``` Retrieve decrypted data from the ORM model. Note: will only succeed if the caller has sufficient permission. #### `encrypt_data` ```python encrypt_data(self, session: AsyncSession, data: dict[str, Any]) -> None ``` Store encrypted data on the ORM model. Note: will only succeed if the caller has sufficient permission. ### `BlockDocumentReference` ### `Configuration` ### `SavedSearch` SQLAlchemy model of a saved search. ### `WorkQueue` SQLAlchemy model of a work queue ### `WorkPool` SQLAlchemy model of a work pool ### `Worker` SQLAlchemy model of a worker ### `Agent` SQLAlchemy model of an agent ### `Variable` ### `FlowRunInput` ### `CsrfToken` ### `Automation` **Methods:** #### `sort_expression` ```python sort_expression(cls, value: AutomationSort) -> sa.ColumnExpressionArgument[Any] ``` Return an expression used to sort Automations ### `AutomationBucket` ### `AutomationRelatedResource` ### `CompositeTriggerChildFiring` ### `AutomationEventFollower` ### `Event` ### `EventResource` ### `BaseORMConfiguration` Abstract base class used to inject database-specific ORM configuration into Prefect.
Modifications to core Prefect REST API data structures can have unintended consequences. Use with caution. **Methods:** #### `artifact_collection_unique_upsert_columns` ```python artifact_collection_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting an ArtifactCollection #### `block_document_unique_upsert_columns` ```python block_document_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockDocument #### `block_schema_unique_upsert_columns` ```python block_schema_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockSchema #### `block_type_unique_upsert_columns` ```python block_type_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockType #### `concurrency_limit_unique_upsert_columns` ```python concurrency_limit_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a ConcurrencyLimit #### `deployment_unique_upsert_columns` ```python deployment_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Deployment #### `flow_run_unique_upsert_columns` ```python flow_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a FlowRun #### `flow_unique_upsert_columns` ```python flow_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Flow #### `saved_search_unique_upsert_columns` ```python saved_search_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a SavedSearch #### `task_run_unique_upsert_columns` ```python task_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a TaskRun #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
#### `versions_dir` ```python versions_dir(self) -> Path ``` Directory containing migrations ### `AsyncPostgresORMConfiguration` Postgres specific orm configuration **Methods:** #### `artifact_collection_unique_upsert_columns` ```python artifact_collection_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting an ArtifactCollection #### `block_document_unique_upsert_columns` ```python block_document_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockDocument #### `block_schema_unique_upsert_columns` ```python block_schema_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockSchema #### `block_type_unique_upsert_columns` ```python block_type_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockType #### `concurrency_limit_unique_upsert_columns` ```python concurrency_limit_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a ConcurrencyLimit #### `deployment_unique_upsert_columns` ```python deployment_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Deployment #### `flow_run_unique_upsert_columns` ```python flow_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a FlowRun #### `flow_unique_upsert_columns` ```python flow_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Flow #### `saved_search_unique_upsert_columns` ```python saved_search_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a SavedSearch #### `task_run_unique_upsert_columns` ```python task_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a TaskRun #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
#### `versions_dir` ```python versions_dir(self) -> Path ``` Directory containing migrations #### `versions_dir` ```python versions_dir(self) -> Path ``` Directory containing migrations ### `AioSqliteORMConfiguration` SQLite specific orm configuration **Methods:** #### `artifact_collection_unique_upsert_columns` ```python artifact_collection_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting an ArtifactCollection #### `block_document_unique_upsert_columns` ```python block_document_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockDocument #### `block_schema_unique_upsert_columns` ```python block_schema_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockSchema #### `block_type_unique_upsert_columns` ```python block_type_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockType #### `concurrency_limit_unique_upsert_columns` ```python concurrency_limit_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a ConcurrencyLimit #### `deployment_unique_upsert_columns` ```python deployment_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Deployment #### `flow_run_unique_upsert_columns` ```python flow_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a FlowRun #### `flow_unique_upsert_columns` ```python flow_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Flow #### `saved_search_unique_upsert_columns` ```python saved_search_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a SavedSearch #### `task_run_unique_upsert_columns` ```python task_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a TaskRun #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `versions_dir` ```python versions_dir(self) -> Path ``` Directory containing migrations #### `versions_dir` ```python versions_dir(self) -> Path ``` Directory containing migrations # query_components Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-database-query_components # `prefect.server.database.query_components` ## Classes ### `FlowRunGraphV2Node` ### `BaseQueryComponents` Abstract base class used to inject dialect-specific SQL operations into Prefect. **Methods:** #### `build_json_object` ```python build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` builds a JSON object from sequential key-value pairs #### `cast_to_json` ```python cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` casts to JSON object if necessary #### `clear_configuration_value_cache_for_key` ```python clear_configuration_value_cache_for_key(self, key: str) -> None ``` Removes a configuration key from the cache. #### `flow_run_graph_v2` ```python flow_run_graph_v2(self, db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: DateTime, max_nodes: int, max_artifacts: int) -> Graph ``` Returns the query that selects all of the nodes and edges for a flow run graph (version 2). 
#### `get_scheduled_flow_runs_from_work_pool` ```python get_scheduled_flow_runs_from_work_pool(self, db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, worker_limit: Optional[int] = None, queue_limit: Optional[int] = None, work_pool_ids: Optional[list[UUID]] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None, scheduled_after: Optional[DateTime] = None, respect_queue_priorities: bool = False) -> list[schemas.responses.WorkerFlowRunResponse] ``` #### `get_scheduled_flow_runs_from_work_queues` ```python get_scheduled_flow_runs_from_work_queues(self, db: PrefectDBInterface, limit_per_queue: Optional[int] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None) -> sa.Select[tuple[orm_models.FlowRun, UUID]] ``` Returns all scheduled runs in work queues, subject to provided parameters. This query returns a `(orm_models.FlowRun, orm_models.WorkQueue.id)` pair; calling `result.all()` will return both; calling `result.scalars().unique().all()` will return only the flow run because it grabs the first result. #### `insert` ```python insert(self, obj: type[orm_models.Base]) -> Union[postgresql.Insert, sqlite.Insert] ``` dialect-specific insert statement #### `json_arr_agg` ```python json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` aggregates a JSON array #### `make_timestamp_intervals` ```python make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `read_configuration_value` ```python read_configuration_value(self, db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[dict[str, Any]] ``` Read a configuration value by key. Configuration values should not be changed at run time, so retrieved values are cached in memory. The main use of configurations is encrypting blocks, this speeds up nested block document queries. #### `set_state_id_on_inserted_flow_runs_statement` ```python set_state_id_on_inserted_flow_runs_statement(self, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `uses_json_strings` ```python uses_json_strings(self) -> bool ``` specifies whether the configured dialect returns JSON as strings ### `AsyncPostgresQueryComponents` **Methods:** #### `build_json_object` ```python build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` #### `build_json_object` ```python build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` builds a JSON object from sequential key-value pairs #### `cast_to_json` ```python cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` #### `cast_to_json` ```python cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` casts to JSON object if necessary #### `clear_configuration_value_cache_for_key` ```python clear_configuration_value_cache_for_key(self, key: str) -> None ``` Removes a configuration key from the cache. 
#### `flow_run_graph_v2` ```python flow_run_graph_v2(self, db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: DateTime, max_nodes: int, max_artifacts: int) -> Graph ``` Returns the query that selects all of the nodes and edges for a flow run graph (version 2). #### `get_scheduled_flow_runs_from_work_pool` ```python get_scheduled_flow_runs_from_work_pool(self, db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, worker_limit: Optional[int] = None, queue_limit: Optional[int] = None, work_pool_ids: Optional[list[UUID]] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None, scheduled_after: Optional[DateTime] = None, respect_queue_priorities: bool = False) -> list[schemas.responses.WorkerFlowRunResponse] ``` #### `get_scheduled_flow_runs_from_work_queues` ```python get_scheduled_flow_runs_from_work_queues(self, db: PrefectDBInterface, limit_per_queue: Optional[int] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None) -> sa.Select[tuple[orm_models.FlowRun, UUID]] ``` Returns all scheduled runs in work queues, subject to provided parameters. This query returns a `(orm_models.FlowRun, orm_models.WorkQueue.id)` pair; calling `result.all()` will return both; calling `result.scalars().unique().all()` will return only the flow run because it grabs the first result. #### `insert` ```python insert(self, obj: type[orm_models.Base]) -> postgresql.Insert ``` #### `insert` ```python insert(self, obj: type[orm_models.Base]) -> Union[postgresql.Insert, sqlite.Insert] ``` dialect-specific insert statement #### `json_arr_agg` ```python json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` #### `json_arr_agg` ```python json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` aggregates a JSON array #### `make_timestamp_intervals` ```python make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `make_timestamp_intervals` ```python make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `read_configuration_value` ```python read_configuration_value(self, db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[dict[str, Any]] ``` Read a configuration value by key. Configuration values should not be changed at run time, so retrieved values are cached in memory. The main use of configurations is encrypting blocks, this speeds up nested block document queries. #### `set_state_id_on_inserted_flow_runs_statement` ```python set_state_id_on_inserted_flow_runs_statement(self, db: PrefectDBInterface, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` Given a list of flow run ids and associated states, set the state\_id to the appropriate state for all flow runs #### `set_state_id_on_inserted_flow_runs_statement` ```python set_state_id_on_inserted_flow_runs_statement(self, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
#### `uses_json_strings` ```python uses_json_strings(self) -> bool ``` #### `uses_json_strings` ```python uses_json_strings(self) -> bool ``` specifies whether the configured dialect returns JSON as strings ### `UUIDList` Map a JSON list of strings back to a list of UUIDs at the result loading stage **Methods:** #### `process_result_value` ```python process_result_value(self, value: Optional[list[Union[str, UUID]]], dialect: sa.Dialect) -> Optional[list[UUID]] ``` ### `AioSqliteQueryComponents` **Methods:** #### `build_json_object` ```python build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` #### `build_json_object` ```python build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` builds a JSON object from sequential key-value pairs #### `cast_to_json` ```python cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` #### `cast_to_json` ```python cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` casts to JSON object if necessary #### `clear_configuration_value_cache_for_key` ```python clear_configuration_value_cache_for_key(self, key: str) -> None ``` Removes a configuration key from the cache. #### `flow_run_graph_v2` ```python flow_run_graph_v2(self, db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: DateTime, max_nodes: int, max_artifacts: int) -> Graph ``` Returns the query that selects all of the nodes and edges for a flow run graph (version 2). #### `get_scheduled_flow_runs_from_work_pool` ```python get_scheduled_flow_runs_from_work_pool(self, db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, worker_limit: Optional[int] = None, queue_limit: Optional[int] = None, work_pool_ids: Optional[list[UUID]] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None, scheduled_after: Optional[DateTime] = None, respect_queue_priorities: bool = False) -> list[schemas.responses.WorkerFlowRunResponse] ``` #### `get_scheduled_flow_runs_from_work_queues` ```python get_scheduled_flow_runs_from_work_queues(self, db: PrefectDBInterface, limit_per_queue: Optional[int] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None) -> sa.Select[tuple[orm_models.FlowRun, UUID]] ``` Returns all scheduled runs in work queues, subject to provided parameters. This query returns a `(orm_models.FlowRun, orm_models.WorkQueue.id)` pair; calling `result.all()` will return both; calling `result.scalars().unique().all()` will return only the flow run because it grabs the first result. 
#### `insert` ```python insert(self, obj: type[orm_models.Base]) -> sqlite.Insert ``` #### `insert` ```python insert(self, obj: type[orm_models.Base]) -> Union[postgresql.Insert, sqlite.Insert] ``` dialect-specific insert statement #### `json_arr_agg` ```python json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` #### `json_arr_agg` ```python json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` aggregates a JSON array #### `make_timestamp_intervals` ```python make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `make_timestamp_intervals` ```python make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `read_configuration_value` ```python read_configuration_value(self, db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[dict[str, Any]] ``` Read a configuration value by key. Configuration values should not be changed at run time, so retrieved values are cached in memory. The main use of configurations is encrypting blocks, this speeds up nested block document queries. #### `set_state_id_on_inserted_flow_runs_statement` ```python set_state_id_on_inserted_flow_runs_statement(self, db: PrefectDBInterface, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` Given a list of flow run ids and associated states, set the state\_id to the appropriate state for all flow runs #### `set_state_id_on_inserted_flow_runs_statement` ```python set_state_id_on_inserted_flow_runs_statement(self, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` #### `unique_key` ```python unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `uses_json_strings` ```python uses_json_strings(self) -> bool ``` #### `uses_json_strings` ```python uses_json_strings(self) -> bool ``` specifies whether the configured dialect returns JSON as strings # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-__init__ # `prefect.server.events` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-actions # `prefect.server.events.actions` The actions consumer watches for actions that have been triggered by Automations and carries them out. Also includes the various concrete subtypes of Actions ## Functions ### `record_action_happening` ```python record_action_happening(id: UUID) -> None ``` Record that an action has happened, with an expiration of an hour. 
### `action_has_already_happened` ```python action_has_already_happened(id: UUID) -> bool ``` Check if the action has already happened ### `consumer` ```python consumer() -> AsyncGenerator[MessageHandler, None] ``` ## Classes ### `ActionFailed` ### `Action` An Action that may be performed when an Automation is triggered **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` Perform the requested Action #### `fail` ```python fail(self, triggered_action: 'TriggeredAction', reason: str) -> None ``` #### `logging_context` ```python logging_context(self, triggered_action: 'TriggeredAction') -> Dict[str, Any] ``` Common logging context for all actions #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `succeed` ```python succeed(self, triggered_action: 'TriggeredAction') -> None ``` ### `DoNothing` Do nothing when an Automation is triggered **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `EmitEventAction` **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `create_event` ```python create_event(self, triggered_action: 'TriggeredAction') -> 'Event' ``` Create an event from the TriggeredAction #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action ### `ExternalDataAction` Base class for Actions that require data from an external source such as the Orchestration API **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` ### `JinjaTemplateAction` Base class for Actions that use Jinja templates supplied by the user and are rendered with a context containing data from the triggered action, and the orchestration API. 
**Methods:** #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `instantiate_object` ```python instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` #### `templates_in_dictionary` ```python templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_template` ```python validate_template(cls, template: str, field_name: str) -> str ``` ### `DeploymentAction` Base class for Actions that operate on Deployments and need to infer them from events **Methods:** #### `deployment_id_to_use` ```python deployment_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_deployment_requires_id` ```python selected_deployment_requires_id(self) -> Self ``` ### `DeploymentCommandAction` Executes a command against a matching deployment **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` #### `selected_deployment_requires_id` ```python selected_deployment_requires_id(self) ``` ### `RunDeployment` Runs the given deployment with the given parameters **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command #### `instantiate_object` ```python instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `render_parameters` ```python render_parameters(self, triggered_action: 'TriggeredAction') -> Dict[str, Any] ``` #### `templates_in_dictionary` ```python templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_parameters` ```python validate_parameters(cls, value: dict[str, Any] | None) -> dict[str, Any] | None ``` #### `validate_template` ```python validate_template(cls, template: str, field_name: str) -> str ``` ### `PauseDeployment` Pauses the given Deployment **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, 
triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command ### `ResumeDeployment` Resumes the given Deployment **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command ### `FlowRunAction` An action that operates on a flow run **Methods:** #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `flow_run` ```python flow_run(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` ### `FlowRunStateChangeAction` Changes the state of a flow run associated with the trigger **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `flow_run` ```python flow_run(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `ChangeFlowRunState` Changes the state of a flow run associated with the trigger **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `CancelFlowRun` Cancels a flow run associated with the trigger **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `SuspendFlowRun` Suspends a flow run associated with the trigger **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` #### `new_state` ```python new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `ResumeFlowRun` Resumes a paused or suspended flow run associated with the trigger **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `flow_run` ```python flow_run(self, triggered_action: 'TriggeredAction') -> UUID ``` ### `CallWebhook` Call a webhook when an Automation is triggered. **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `ensure_payload_is_a_string` ```python ensure_payload_is_a_string(cls, value: Union[str, Dict[str, Any], None]) -> Optional[str] ``` Temporary measure while we migrate payloads from being a dictionary to a string template. 
This covers both reading from the database where values may currently be a dictionary, as well as the API, where older versions of the frontend may be sending a JSON object with the single `"message"` key. #### `instantiate_object` ```python instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `templates_in_dictionary` ```python templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_payload_templates` ```python validate_payload_templates(cls, value: Optional[str]) -> Optional[str] ``` Validate user-provided payload template. #### `validate_template` ```python validate_template(cls, template: str, field_name: str) -> str ``` ### `SendNotification` Send a notification when an Automation is triggered **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `instantiate_object` ```python instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `is_valid_template` ```python is_valid_template(cls, value: str, info: ValidationInfo) -> str ``` #### `render` ```python render(self, triggered_action: 'TriggeredAction') -> List[str] ``` #### `templates_in_dictionary` ```python templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_template` ```python validate_template(cls, template: str, field_name: str) -> str ``` ### `WorkPoolAction` Base class for Actions that operate on Work Pools and need to infer them from events **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_work_pool_requires_id` ```python selected_work_pool_requires_id(self) -> Self ``` #### `work_pool_id_to_use` ```python work_pool_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` ### `WorkPoolCommandAction` **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Pool #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` #### `target_work_pool` ```python target_work_pool(self, triggered_action: 'TriggeredAction') -> WorkPool ``` ### `PauseWorkPool` Pauses a Work Pool **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Pool #### `target_work_pool` ```python target_work_pool(self, triggered_action: 'TriggeredAction') -> WorkPool ``` ### `ResumeWorkPool` Resumes a Work Pool **Methods:** #### `act` ```python 
act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Pool #### `target_work_pool` ```python target_work_pool(self, triggered_action: 'TriggeredAction') -> WorkPool ``` ### `WorkQueueAction` Base class for Actions that operate on Work Queues and need to infer them from events **Methods:** #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_work_queue_requires_id` ```python selected_work_queue_requires_id(self) -> Self ``` #### `work_queue_id_to_use` ```python work_queue_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` ### `WorkQueueCommandAction` **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Queue #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` #### `selected_work_queue_requires_id` ```python selected_work_queue_requires_id(self) -> Self ``` ### `PauseWorkQueue` Pauses a Work Queue **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Queue ### `ResumeWorkQueue` Resumes a Work Queue **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Queue ### `AutomationAction` Base class for Actions that operate on Automations and need to infer them from events **Methods:** #### `automation_id_to_use` ```python automation_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `describe_for_cli` ```python describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_automation_requires_id` ```python selected_automation_requires_id(self) -> Self ``` ### `AutomationCommandAction` **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Automation #### `events_api_client` ```python events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ```
#### `orchestration_client` ```python orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python reason_from_response(self, response: Response) -> str ``` #### `selected_automation_requires_id` ```python selected_automation_requires_id(self) -> Self ``` ### `PauseAutomation` Pauses an Automation **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Automation ### `ResumeAutomation` Resumes an Automation **Methods:** #### `act` ```python act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Automation # clients Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-clients # `prefect.server.events.clients` ## Classes ### `EventsClient` The abstract interface for a Prefect Events client **Methods:** #### `emit` ```python emit(self, event: Event) -> Optional[Event] ``` ### `NullEventsClient` A no-op implementation of the Prefect Events client for testing **Methods:** #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> None ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event ### `AssertingEventsClient` An implementation of the Prefect Events client that records all events sent to it for inspection during tests. **Methods:** #### `assert_emitted_event_count` ```python assert_emitted_event_count(cls, count: int) -> None ``` Assert that the given number of events were emitted. #### `assert_emitted_event_with` ```python assert_emitted_event_with(cls, event: Optional[str] = None, resource: Optional[Dict[str, LabelValue]] = None, related: Optional[List[Dict[str, LabelValue]]] = None, payload: Optional[Dict[str, Any]] = None) -> None ``` Assert that an event was emitted containing the given properties. #### `assert_no_emitted_event_with` ```python assert_no_emitted_event_with(cls, event: Optional[str] = None, resource: Optional[Dict[str, LabelValue]] = None, related: Optional[List[Dict[str, LabelValue]]] = None, payload: Optional[Dict[str, Any]] = None) -> None ``` #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> Event ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event #### `emitted_events_count` ```python emitted_events_count(cls) -> int ``` #### `reset` ```python reset(cls) -> None ``` Reset all captured instances and their events.
For use between tests ### `PrefectServerEventsClient` **Methods:** #### `client_name` ```python client_name(self) -> str ``` #### `emit` ```python emit(self, event: Event) -> ReceivedEvent ``` #### `emit` ```python emit(self, event: Event) -> None ``` Emit a single event ### `PrefectServerEventsAPIClient` **Methods:** #### `pause_automation` ```python pause_automation(self, automation_id: UUID) -> httpx.Response ``` #### `resume_automation` ```python resume_automation(self, automation_id: UUID) -> httpx.Response ``` # counting Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-counting # `prefect.server.events.counting` ## Classes ### `InvalidEventCountParameters` Raised when the given parameters are invalid for counting events. ### `TimeUnit` **Methods:** #### `as_timedelta` ```python as_timedelta(self, interval: float) -> Duration ``` #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `database_label_expression` ```python database_label_expression(self, db: PrefectDBInterface, time_interval: float) -> sa.Function[str] ``` Returns the SQL expression to label a time bucket #### `database_value_expression` ```python database_value_expression(self, time_interval: float) -> sa.Cast[str] ``` Returns the SQL expression to place an event in a time bucket #### `get_interval_spans` ```python get_interval_spans(self, start_datetime: datetime.datetime, end_datetime: datetime.datetime, interval: float) -> Generator[int | tuple[datetime.datetime, datetime.datetime], None, None] ``` Divide the given range of dates into evenly-sized spans of interval units #### `validate_buckets` ```python validate_buckets(self, start_datetime: datetime.datetime, end_datetime: datetime.datetime, interval: float) -> None ``` ### `Countable` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `get_database_query` ```python get_database_query(self, filter: 'EventFilter', time_unit: TimeUnit, time_interval: float) -> Select[tuple[str, str, DateTime, DateTime, int]] ``` # filters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-filters # `prefect.server.events.filters` ## Classes ### `AutomationFilterCreated` Filter by `Automation.created`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `AutomationFilterName` Filter by `Automation.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `AutomationFilterTags` Filter by `Automation.tags`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `AutomationFilter` **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `EventDataFilter` A base class for filtering event data. **Methods:** #### `build_where_clauses` ```python build_where_clauses(self) -> Sequence['ColumnExpressionArgument[bool]'] ``` Convert the criteria to a WHERE clause. #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event?
#### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventOccurredFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `clamp` ```python clamp(self, max_duration: timedelta) -> None ``` Limit how far the query can look back based on the given duration #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventNameFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `LabelSet` ### `LabelOperations` ### `EventResourceFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventRelatedFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventAnyResourceFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? 
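To see how these `EventDataFilter` subclasses behave when evaluated in memory, here is a hedged sketch using `EventNameFilter`. The constructor field `prefix` and the `Event` keyword arguments are assumptions based on Prefect's public event schema rather than a verbatim quote of these server-side models.

```python
from datetime import datetime, timezone
from uuid import uuid4

from prefect.server.events.filters import EventNameFilter
from prefect.server.events.schemas.events import Event

# Assumed constructor arguments; check the models before relying on them.
event = Event(
    occurred=datetime.now(timezone.utc),
    event="prefect.flow-run.Completed",
    resource={"prefect.resource.id": f"prefect.flow-run.{uuid4()}"},
    id=uuid4(),
)

name_filter = EventNameFilter(prefix=["prefect.flow-run."])  # `prefix` is an assumed field

# includes() and excludes() are complementary in-memory checks.
assert name_filter.includes(event)
assert not name_filter.excludes(event)
```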
### `EventIDFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventTextFilter` Filter by text search across event content. **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` Build SQLAlchemy WHERE clauses for text search #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Check if this text filter includes the given event. #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventOrder` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `EventFilter` **Methods:** #### `build_where_clauses` ```python build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? #### `logical_limit` ```python logical_limit(self) -> int ``` The logical limit for this query, which is a maximum number of rows that it *could* return (regardless of what the caller has requested). May be used as an optimization for DB queries # jinja_filters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-jinja_filters # `prefect.server.events.jinja_filters` ## Functions ### `ui_url` ```python ui_url(ctx: Mapping[str, Any], obj: Any) -> Optional[str] ``` Return the UI URL for the given object. ### `ui_resource_events_url` ```python ui_resource_events_url(ctx: Mapping[str, Any], obj: Any) -> Optional[str] ``` Given a Resource or Model, return a UI link to the events page filtered for that resource. If an unsupported object is provided, return `None`. Currently supports Automation, Resource, Deployment, Flow, FlowRun, TaskRun, and WorkQueue objects. Within a Resource, deployment, flow, flow-run, task-run, and work-queue are supported. 
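As a rough illustration only (Prefect's real template rendering is wired up elsewhere in the server), these filters can be registered on a Jinja environment and applied to objects such as flow runs. Because both functions take the template context as their first argument, they are assumed here to be `pass_context`-style filters.

```python
import jinja2

from prefect.server.events.jinja_filters import ui_resource_events_url, ui_url

environment = jinja2.Environment()
# Assumed to be pass_context-style filters, given the ctx parameter above.
environment.filters["ui_url"] = ui_url
environment.filters["ui_resource_events_url"] = ui_resource_events_url

template = environment.from_string(
    "Flow run {{ flow_run.name }} finished: {{ flow_run | ui_url }}"
)
# Rendering requires a supported object (for example a FlowRun) and a
# configured UI URL; unsupported objects render as None.
# print(template.render(flow_run=some_flow_run))
```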
# messaging Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-messaging # `prefect.server.events.messaging` ## Functions ### `publish` ```python publish(events: Iterable[ReceivedEvent]) -> None ``` Send the given events as a batch via the default publisher ### `create_event_publisher` ```python create_event_publisher() -> EventPublisher ``` ### `create_actions_publisher` ```python create_actions_publisher() -> Publisher ``` ## Classes ### `EventPublisher` **Methods:** #### `publish_data` ```python publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` #### `publish_data` ```python publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` #### `publish_event` ```python publish_event(self, event: ReceivedEvent) -> None ``` Publishes the given events **Args:** * `event`: the event to publish # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-models-__init__ # `prefect.server.events.models` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-models-automations # `prefect.server.events.models.automations` ## Functions ### `automations_session` ```python automations_session(db: PrefectDBInterface, begin_transaction: bool = False) -> AsyncGenerator[AsyncSession, None] ``` ### `read_automations_for_workspace` ```python read_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession, sort: AutomationSort = AutomationSort.NAME_ASC, limit: Optional[int] = None, offset: Optional[int] = None, automation_filter: Optional[filters.AutomationFilter] = None) -> Sequence[Automation] ``` ### `count_automations_for_workspace` ```python count_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession) -> int ``` ### `read_automation` ```python read_automation(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> Optional[Automation] ``` ### `read_automation_by_id` ```python read_automation_by_id(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> Optional[Automation] ``` ### `create_automation` ```python create_automation(db: PrefectDBInterface, session: AsyncSession, automation: Automation) -> Automation ``` ### `update_automation` ```python update_automation(db: PrefectDBInterface, session: AsyncSession, automation_update: Union[AutomationUpdate, AutomationPartialUpdate], automation_id: UUID) -> bool ``` ### `delete_automation` ```python delete_automation(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> bool ``` ### `delete_automations_for_workspace` ```python delete_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession) -> bool ``` ### `disable_automations_for_workspace` ```python disable_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession) -> bool ``` ### `disable_automation` ```python disable_automation(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> bool ``` ### `relate_automation_to_resource` ```python relate_automation_to_resource(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID, resource_id: str, owned_by_resource: bool) -> None ``` ### `read_automations_related_to_resource` ```python read_automations_related_to_resource(db: PrefectDBInterface, session: AsyncSession, resource_id: str, owned_by_resource: Optional[bool] = None, automation_filter: Optional[filters.AutomationFilter] = None) -> Sequence[Automation] ``` ### 
`delete_automations_owned_by_resource` ```python delete_automations_owned_by_resource(db: PrefectDBInterface, session: AsyncSession, resource_id: str, automation_filter: Optional[filters.AutomationFilter] = None) -> Sequence[UUID] ``` # composite_trigger_child_firing Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-models-composite_trigger_child_firing # `prefect.server.events.models.composite_trigger_child_firing` ## Functions ### `upsert_child_firing` ```python upsert_child_firing(db: PrefectDBInterface, session: AsyncSession, firing: Firing) ``` ### `get_child_firings` ```python get_child_firings(db: PrefectDBInterface, session: AsyncSession, trigger: CompositeTrigger) -> Sequence['ORMCompositeTriggerChildFiring'] ``` ### `clear_old_child_firings` ```python clear_old_child_firings(db: PrefectDBInterface, session: AsyncSession, trigger: CompositeTrigger, fired_before: DateTime) -> None ``` ### `clear_child_firings` ```python clear_child_firings(db: PrefectDBInterface, session: AsyncSession, trigger: CompositeTrigger, firing_ids: Sequence[UUID]) -> None ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-ordering-__init__ # `prefect.server.events.ordering` Manages the partial causal ordering of events for a particular consumer. This module maintains a buffer of events to be processed, aiming to process them in the order they occurred causally. ## Functions ### `get_triggers_causal_ordering` ```python get_triggers_causal_ordering() -> CausalOrdering ``` ### `get_task_run_recorder_causal_ordering` ```python get_task_run_recorder_causal_ordering() -> CausalOrdering ``` ## Classes ### `CausalOrderingModule` ### `EventArrivedEarly` ### `MaxDepthExceeded` ### `event_handler` ### `CausalOrdering` **Methods:** #### `event_has_been_seen` ```python event_has_been_seen(self, event: Union[UUID, Event]) -> bool ``` #### `forget_follower` ```python forget_follower(self, follower: ReceivedEvent) -> None ``` #### `get_followers` ```python get_followers(self, leader: ReceivedEvent) -> List[ReceivedEvent] ``` #### `get_lost_followers` ```python get_lost_followers(self) -> List[ReceivedEvent] ``` #### `preceding_event_confirmed` ```python preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) -> AsyncContextManager[None] ``` #### `record_event_as_seen` ```python record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python record_follower(self, event: ReceivedEvent) -> None ``` # db Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-ordering-db # `prefect.server.events.ordering.db` ## Classes ### `CausalOrdering` **Methods:** #### `event_has_been_seen` ```python event_has_been_seen(self, event: Union[UUID, Event]) -> bool ``` #### `forget_follower` ```python forget_follower(self, db: PrefectDBInterface, follower: ReceivedEvent) -> None ``` Forget that this event is waiting on another event to arrive #### `get_followers` ```python get_followers(self, db: PrefectDBInterface, leader: ReceivedEvent) -> List[ReceivedEvent] ``` Returns events that were waiting on this leader event to arrive #### `get_lost_followers` ```python get_lost_followers(self, db: PrefectDBInterface) -> List[ReceivedEvent] ``` Returns events that were waiting on a leader event that never arrived #### `preceding_event_confirmed` ```python preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) ``` Events may optionally declare that they logically 
follow another event, so that we can preserve important event orderings in the face of unreliable delivery and ordering of messages from the queues. This function keeps track of the ID of each event that this shard has successfully processed going back to the PRECEDING\_EVENT\_LOOKBACK period. If an event arrives that must follow another one, confirm that we have recently seen and processed that event before proceeding. event (ReceivedEvent): The event to be processed. This object should include metadata indicating if and what event it follows. depth (int, optional): The current recursion depth, used to prevent infinite recursion due to cyclic dependencies between events. Defaults to 0. Raises EventArrivedEarly if the current event shouldn't be processed yet. #### `record_event_as_seen` ```python record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python record_follower(self, db: PrefectDBInterface, event: ReceivedEvent) -> None ``` Remember that this event is waiting on another event to arrive # memory Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-ordering-memory # `prefect.server.events.ordering.memory` ## Classes ### `EventBeingProcessed` Indicates that an event is currently being processed and should not be processed until it is finished. This may happen due to concurrent processing. ### `CausalOrdering` **Methods:** #### `clear` ```python clear(self) -> None ``` Clear all data for this scope. #### `clear_all_scopes` ```python clear_all_scopes(cls) -> None ``` Clear all data for all scopes - useful for testing. #### `event_has_been_seen` ```python event_has_been_seen(self, event: UUID | Event) -> bool ``` #### `event_has_started_processing` ```python event_has_started_processing(self, event: UUID | Event) -> bool ``` #### `event_is_processing` ```python event_is_processing(self, event: ReceivedEvent) -> AsyncGenerator[None, None] ``` Mark an event as being processed for the duration of its lifespan through the ordering system. #### `followers_by_id` ```python followers_by_id(self, follower_ids: list[UUID]) -> list[ReceivedEvent] ``` Returns the events with the given IDs, in the order they occurred. #### `forget_event_is_processing` ```python forget_event_is_processing(self, event: ReceivedEvent) -> None ``` #### `forget_follower` ```python forget_follower(self, follower: ReceivedEvent) -> None ``` Forget that this event is waiting on another event to arrive. #### `get_followers` ```python get_followers(self, leader: ReceivedEvent) -> list[ReceivedEvent] ``` Returns events that were waiting on this leader event to arrive. #### `get_lost_followers` ```python get_lost_followers(self) -> list[ReceivedEvent] ``` Returns events that were waiting on a leader event that never arrived. #### `preceding_event_confirmed` ```python preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) -> AsyncGenerator[None, None] ``` Events may optionally declare that they logically follow another event, so that we can preserve important event orderings in the face of unreliable delivery and ordering of messages from the queues. This function keeps track of the ID of each event that this shard has successfully processed going back to the PRECEDING\_EVENT\_LOOKBACK period. If an event arrives that must follow another one, confirm that we have recently seen and processed that event before proceeding. 
**Args:** * `handler`: The function to call when an out-of-order event is ready to be processed * `event`: The event to be processed. This object should include metadata indicating if and what event it follows. * `depth`: The current recursion depth, used to prevent infinite recursion due to cyclic dependencies between events. Defaults to 0. Raises EventArrivedEarly if the current event shouldn't be processed yet. #### `record_event_as_processing` ```python record_event_as_processing(self, event: ReceivedEvent) -> bool ``` Record that an event is being processed, returning False if already processing. #### `record_event_as_seen` ```python record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python record_follower(self, event: ReceivedEvent) -> None ``` Remember that this event is waiting on another event to arrive. #### `wait_for_leader` ```python wait_for_leader(self, event: ReceivedEvent) -> None ``` Given an event, wait for its leader to be processed before proceeding, or raise EventArrivedEarly if we would wait too long in this attempt. # pipeline Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-pipeline # `prefect.server.events.pipeline` ## Classes ### `EventsPipeline` **Methods:** #### `events_to_messages` ```python events_to_messages(events: list[Event]) -> list[MemoryMessage] ``` #### `process_events` ```python process_events(self, events: list[Event]) -> None ``` #### `process_message` ```python process_message(self, message: MemoryMessage) -> None ``` Process a single event message #### `process_messages` ```python process_messages(self, messages: list[MemoryMessage]) -> None ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-schemas-__init__ # `prefect.server.events.schemas` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-schemas-automations # `prefect.server.events.schemas.automations` ## Classes ### `Posture` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TriggerState` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `Trigger` Base class describing a set of criteria that must be satisfied in order to trigger an automation. **Methods:** #### `all_triggers` ```python all_triggers(self) -> Sequence[Trigger] ``` Returns all triggers within this trigger #### `automation` ```python automation(self) -> 'Automation' ``` #### `create_automation_state_change_event` ```python create_automation_state_change_event(self, firing: 'Firing', trigger_state: TriggerState) -> ReceivedEvent ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `parent` ```python parent(self) -> 'Union[Trigger, Automation]' ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `reset_ids` ```python reset_ids(self) -> None ``` Resets the ID of this trigger and all of its children ### `CompositeTrigger` Requires some number of triggers to have fired within the given time period. 
**Methods:** #### `actions` ```python actions(self) -> List[ActionTypes] ``` #### `all_triggers` ```python all_triggers(self) -> Sequence[Trigger] ``` #### `as_automation` ```python as_automation(self) -> 'AutomationCore' ``` #### `child_trigger_ids` ```python child_trigger_ids(self) -> List[UUID] ``` #### `create_automation_state_change_event` ```python create_automation_state_change_event(self, firing: Firing, trigger_state: TriggerState) -> ReceivedEvent ``` Returns a ReceivedEvent for an automation state change into a triggered or resolved state. #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `num_expected_firings` ```python num_expected_firings(self) -> int ``` #### `owner_resource` ```python owner_resource(self) -> Optional[str] ``` #### `ready_to_fire` ```python ready_to_fire(self, firings: Sequence['Firing']) -> bool ``` #### `set_deployment_id` ```python set_deployment_id(self, deployment_id: UUID) -> None ``` ### `CompoundTrigger` A composite trigger that requires some number of triggers to have fired within the given time period **Methods:** #### `num_expected_firings` ```python num_expected_firings(self) -> int ``` #### `ready_to_fire` ```python ready_to_fire(self, firings: Sequence['Firing']) -> bool ``` #### `validate_require` ```python validate_require(self) -> Self ``` ### `SequenceTrigger` A composite trigger that requires some number of triggers to have fired within the given time period in a specific order **Methods:** #### `expected_firing_order` ```python expected_firing_order(self) -> List[UUID] ``` #### `ready_to_fire` ```python ready_to_fire(self, firings: Sequence['Firing']) -> bool ``` ### `ResourceTrigger` Base class for triggers that may filter by the labels of resources. **Methods:** #### `actions` ```python actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python as_automation(self) -> 'AutomationCore' ``` #### `covers_resources` ```python covers_resources(self, resource: Resource, related: Sequence[RelatedResource]) -> bool ``` #### `describe_for_cli` ```python describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `owner_resource` ```python owner_resource(self) -> Optional[str] ``` #### `set_deployment_id` ```python set_deployment_id(self, deployment_id: UUID) -> None ``` ### `EventTrigger` A trigger that fires based on the presence or absence of events within a given period of time. **Methods:** #### `bucketing_key` ```python bucketing_key(self, event: ReceivedEvent) -> Tuple[str, ...] ``` #### `covers` ```python covers(self, event: ReceivedEvent) -> bool ``` #### `create_automation_state_change_event` ```python create_automation_state_change_event(self, firing: Firing, trigger_state: TriggerState) -> ReceivedEvent ``` Returns a ReceivedEvent for an automation state change into a triggered or resolved state. #### `enforce_minimum_within_for_proactive_triggers` ```python enforce_minimum_within_for_proactive_triggers(cls, data: Dict[str, Any] | Any) -> Dict[str, Any] ``` #### `event_pattern` ```python event_pattern(self) -> re.Pattern[str] ``` A regular expression which may be evaluated against any event string to determine if this trigger would be interested in the event #### `expects` ```python expects(self, event: str) -> bool ``` #### `immediate` ```python immediate(self) -> bool ``` Does this reactive trigger fire immediately for all events? 
#### `meets_threshold` ```python meets_threshold(self, event_count: int) -> bool ``` #### `starts_after` ```python starts_after(self, event: str) -> bool ``` ### `AutomationCore` Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `prevent_run_deployment_loops` ```python prevent_run_deployment_loops(self) -> Self ``` Detects potential infinite loops in automations with RunDeployment actions #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `trigger_by_id` ```python trigger_by_id(self, trigger_id: UUID) -> Optional[Trigger] ``` Returns the trigger with the given ID, or None if no such trigger exists #### `triggers` ```python triggers(self) -> Sequence[Trigger] ``` Returns all triggers within this automation #### `triggers_of_type` ```python triggers_of_type(self, trigger_type: Type[T]) -> Sequence[T] ``` Returns all triggers of the specified type within this automation ### `Automation` **Methods:** #### `model_validate` ```python model_validate(cls: type[Self], obj: Any) -> Self ``` ### `AutomationCreate` ### `AutomationUpdate` ### `AutomationPartialUpdate` ### `AutomationSort` Defines automations sorting options. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `Firing` Represents one instance of a trigger firing **Methods:** #### `all_events` ```python all_events(self) -> Sequence[ReceivedEvent] ``` #### `all_firings` ```python all_firings(self) -> Sequence[Firing] ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_trigger_states` ```python validate_trigger_states(cls, value: set[TriggerState]) -> set[TriggerState] ``` ### `TriggeredAction` An action caused as the result of an automation **Methods:** #### `all_events` ```python all_events(self) -> Sequence[ReceivedEvent] ``` #### `all_firings` ```python all_firings(self) -> Sequence[Firing] ``` #### `idempotency_key` ```python idempotency_key(self) -> str ``` Produce a human-friendly idempotency key for this action #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
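To make these schemas a little more concrete, here is an illustrative sketch of a reactive automation definition. The constructor fields (`expect`, `match`, `posture`, `threshold`, `within`, and so on) are assumed to mirror Prefect's public trigger schema and should be read as a sketch, not an authoritative example of these server-side models.

```python
from datetime import timedelta

from prefect.server.events.actions import DoNothing
from prefect.server.events.schemas.automations import (
    AutomationCore,
    EventTrigger,
    Posture,
)

# Field names below are assumptions based on the public trigger schema.
automation = AutomationCore(
    name="react-to-failed-flow-runs",
    trigger=EventTrigger(
        expect={"prefect.flow-run.Failed"},                   # event name(s) to react to
        match={"prefect.resource.id": "prefect.flow-run.*"},  # resource filter
        posture=Posture.Reactive,                             # fire when matching events are seen
        threshold=1,                                          # after a single matching event
        within=timedelta(seconds=10),
    ),
    actions=[DoNothing()],
)

# AutomationCore.triggers() walks every trigger, including the children of
# composite triggers such as CompoundTrigger and SequenceTrigger.
for trigger in automation.triggers():
    print(type(trigger).__name__)
```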
# events Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-schemas-events # `prefect.server.events.schemas.events` ## Functions ### `matches` ```python matches(expected: str, value: Optional[str]) -> bool ``` Returns true if the given value matches the expected string, which may include a negation prefix ("!this-value") or a wildcard suffix ("any-value-starting-with\*") ## Classes ### `Resource` An observable business object of interest to the user **Methods:** #### `as_label_value_array` ```python as_label_value_array(self) -> List[Dict[str, str]] ``` #### `enforce_maximum_labels` ```python enforce_maximum_labels(self) -> Self ``` #### `get` ```python get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `id` ```python id(self) -> str ``` #### `items` ```python items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python keys(self) -> Iterable[str] ``` #### `labels` ```python labels(self) -> LabelDiver ``` #### `name` ```python name(self) -> Optional[str] ``` #### `prefect_object_id` ```python prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python requires_resource_id(self) -> Self ``` ### `RelatedResource` A Resource with a specific role in an Event **Methods:** #### `enforce_maximum_labels` ```python enforce_maximum_labels(self) -> Self ``` #### `id` ```python id(self) -> str ``` #### `name` ```python name(self) -> Optional[str] ``` #### `prefect_object_id` ```python prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python requires_resource_id(self) -> Self ``` #### `requires_resource_role` ```python requires_resource_role(self) -> Self ``` #### `role` ```python role(self) -> str ``` ### `Event` The client-side view of an event that has happened to a Resource **Methods:** #### `enforce_maximum_related_resources` ```python enforce_maximum_related_resources(cls, value: List[RelatedResource]) -> List[RelatedResource] ``` #### `find_resource_label` ```python find_resource_label(self, label: str) -> Optional[str] ``` Finds the value of the given label in this event's resource or one of its related resources. If the label starts with `related::`, search for the first matching label in a related resource with that role. #### `involved_resources` ```python involved_resources(self) -> Sequence[Resource] ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `receive` ```python receive(self, received: Optional[prefect.types._datetime.DateTime] = None) -> 'ReceivedEvent' ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
#### `resource_in_role` ```python resource_in_role(self) -> Mapping[str, RelatedResource] ``` Returns a mapping of roles to the first related resource in that role #### `resources_in_role` ```python resources_in_role(self) -> Mapping[str, Sequence[RelatedResource]] ``` Returns a mapping of roles to related resources in that role ### `ReceivedEvent` The server-side view of an event that has happened to a Resource after it has been received by the server **Methods:** #### `as_database_resource_rows` ```python as_database_resource_rows(self) -> List[Dict[str, Any]] ``` #### `as_database_row` ```python as_database_row(self) -> dict[str, Any] ``` #### `is_set` ```python is_set(self) ``` #### `set` ```python set(self) -> None ``` Set the flag, notifying all waiters. Unlike `asyncio.Event`, waiters may not be notified immediately when this is called; instead, notification will be placed on the owning loop of each waiter for thread safety. #### `wait` ```python wait(self) -> Literal[True] ``` Block until the internal flag is true. If the internal flag is true on entry, return True immediately. Otherwise, block until another `set()` is called, then return True. ### `ResourceSpecification` **Methods:** #### `deepcopy` ```python deepcopy(self) -> 'ResourceSpecification' ``` #### `get` ```python get(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` #### `includes` ```python includes(self, candidates: Iterable[Resource]) -> bool ``` #### `items` ```python items(self) -> Iterable[Tuple[str, List[str]]] ``` #### `matches` ```python matches(self, resource: Resource) -> bool ``` #### `matches_every_resource` ```python matches_every_resource(self) -> bool ``` #### `matches_every_resource_of_kind` ```python matches_every_resource_of_kind(self, prefix: str) -> bool ``` #### `pop` ```python pop(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` ### `EventPage` A single page of events returned from the API, with an optional link to the next page of results **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventCount` The count of events with the given filter value **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # labelling Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-schemas-labelling # `prefect.server.events.schemas.labelling` ## Classes ### `LabelDiver` The LabelDiver supports templating use cases for any Labelled object, by presenting the labels as a graph of objects that may be accessed by attribute. 
For example: ```python diver = LabelDiver({ 'hello.world': 'foo', 'hello.world.again': 'bar' }) assert str(diver.hello.world) == 'foo' assert str(diver.hello.world.again) == 'bar' ``` ### `Labelled` **Methods:** #### `as_label_value_array` ```python as_label_value_array(self) -> List[Dict[str, str]] ``` #### `get` ```python get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `items` ```python items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python keys(self) -> Iterable[str] ``` #### `labels` ```python labels(self) -> LabelDiver ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-services-__init__ # `prefect.server.events.services` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-services-actions # `prefect.server.events.services.actions` ## Classes ### `Actions` Runs the actions triggered by automations **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service # event_logger Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-services-event_logger # `prefect.server.events.services.event_logger` ## Classes ### `EventLogger` A debugging service that logs events to the console as they arrive. **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. 
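As a rough illustration of the service-management methods listed above, the sketch below checks whether the `EventLogger` service is enabled and then runs it briefly via the `running()` context manager. This is an assumed usage pattern inferred from the documented signatures, not a documented recipe; the sleep window is arbitrary.

```python
import asyncio

from prefect.server.events.services.event_logger import EventLogger


async def main() -> None:
    # `enabled()` reports whether the controlling Prefect setting is on;
    # `environment_variable_name()` names the setting that toggles it.
    if not EventLogger.enabled():
        print(f"EventLogger is disabled; see {EventLogger.environment_variable_name()}")
        return

    # `running()` is documented as a context manager that starts enabled
    # services on entry and stops them on exit (assumed to be async here,
    # given its AsyncGenerator return type).
    async with EventLogger.running():
        await asyncio.sleep(10)  # services run in the background during this window


if __name__ == "__main__":
    asyncio.run(main())
```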
#### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service # event_persister Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-services-event_persister # `prefect.server.events.services.event_persister` The event persister moves event messages from the event bus to storage as fast as it can. Never gets tired. ## Functions ### `batch_delete` ```python batch_delete(session: AsyncSession, model: type[T], condition: Any, batch_size: int = 10000) -> int ``` Perform a batch deletion of database records using a subquery with LIMIT. Works with both PostgreSQL and SQLite. Compared to a basic delete(...).where(...), a batch deletion is more robust against timeouts when handling large tables, especially when first deleting old entries from long-existing tables. **Returns:** * Total number of deleted records ### `create_handler` ```python create_handler(batch_size: int = 20, flush_every: timedelta = timedelta(seconds=5), trim_every: timedelta = timedelta(minutes=15)) -> AsyncGenerator[MessageHandler, None] ``` Set up a message handler that will accumulate and send events to the database every `batch_size` messages, or every `flush_every` interval to flush any remaining messages ## Classes ### `EventPersister` A service that persists events to the database as they arrive. **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit.
#### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `started_event` ```python started_event(self) -> asyncio.Event ``` #### `started_event` ```python started_event(self, value: asyncio.Event) -> None ``` #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service # triggers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-services-triggers # `prefect.server.events.services.triggers` ## Classes ### `ReactiveTriggers` Evaluates reactive automation triggers **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service ### `ProactiveTriggers` Evaluates proactive automation triggers **Methods:** #### `run_once` ```python run_once(self) -> None ``` #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. 
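Following the `run_once` note above, a single evaluation pass of the proactive triggers service is driven through `start(loops=1)` rather than by calling `run_once` directly, so that setup and teardown still happen. A minimal sketch, assuming `ProactiveTriggers` can be constructed with its defaults:

```python
import asyncio

from prefect.server.events.services.triggers import ProactiveTriggers


async def run_one_pass() -> None:
    # Per the docstring above, start(loops=1) runs exactly one loop of the
    # service and invokes its setup/teardown, unlike calling run_once() directly.
    service = ProactiveTriggers()
    await service.start(loops=1)


if __name__ == "__main__":
    asyncio.run(run_one_pass())
```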
# __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-storage-__init__ # `prefect.server.events.storage` ## Functions ### `to_page_token` ```python to_page_token(filter: 'EventFilter', count: int, page_size: int, current_offset: int) -> Optional[str] ``` ### `from_page_token` ```python from_page_token(page_token: str) -> Tuple['EventFilter', int, int, int] ``` ### `process_time_based_counts` ```python process_time_based_counts(filter: 'EventFilter', time_unit: TimeUnit, time_interval: float, counts: List[EventCount]) -> List[EventCount] ``` Common logic for processing time-based counts across different event backends. When doing time-based counting we want to do two things: 1. Backfill any missing intervals with 0 counts. 2. Update the start/end times that are emitted to match the beginning and end of the intervals rather than having them reflect the true max/min occurred time of the events themselves. ## Classes ### `InvalidTokenError` ### `QueryRangeTooLarge` # database Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-storage-database # `prefect.server.events.storage.database` ## Functions ### `build_distinct_queries` ```python build_distinct_queries(db: PrefectDBInterface, events_filter: EventFilter) -> list[sa.Column['ORMEvent']] ``` ### `query_events` ```python query_events(session: AsyncSession, filter: EventFilter, page_size: int = INTERACTIVE_PAGE_SIZE) -> tuple[list[ReceivedEvent], int, Optional[str]] ``` ### `query_next_page` ```python query_next_page(session: AsyncSession, page_token: str) -> tuple[list[ReceivedEvent], int, Optional[str]] ``` ### `count_events` ```python count_events(session: AsyncSession, filter: EventFilter, countable: Countable, time_unit: TimeUnit, time_interval: float) -> list[EventCount] ``` ### `raw_count_events` ```python raw_count_events(db: PrefectDBInterface, session: AsyncSession, events_filter: EventFilter) -> int ``` Count events from the database with the given filter. Only returns the count and does not return any additional metadata. For additional metadata, use `count_events`. **Args:** * `session`: a database session * `events_filter`: filter criteria for events **Returns:** * The count of events in the database that match the filter criteria. ### `read_events` ```python read_events(db: PrefectDBInterface, session: AsyncSession, events_filter: EventFilter, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence['ORMEvent'] ``` Read events from the Postgres database. **Args:** * `session`: a Postgres events session. * `events_filter`: filter criteria for events. * `limit`: limit for the query. * `offset`: offset for the query. **Returns:** * A list of event ORM objects. ### `write_events` ```python write_events(session: AsyncSession, events: list[ReceivedEvent]) -> None ``` Write events to the database.
**Args:** * `session`: a database session * `events`: the events to insert ### `get_max_query_parameters` ```python get_max_query_parameters() -> int ``` ### `get_number_of_event_fields` ```python get_number_of_event_fields() -> int ``` ### `get_number_of_resource_fields` ```python get_number_of_resource_fields() -> int ``` # stream Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-stream # `prefect.server.events.stream` ## Functions ### `subscribed` ```python subscribed(filter: EventFilter) -> AsyncGenerator['Queue[ReceivedEvent]', None] ``` ### `events` ```python events(filter: EventFilter) -> AsyncGenerator[AsyncIterable[Optional[ReceivedEvent]], None] ``` ### `distributor` ```python distributor() -> AsyncGenerator[messaging.MessageHandler, None] ``` ### `start_distributor` ```python start_distributor() -> None ``` Starts the distributor consumer as a global background task ### `stop_distributor` ```python stop_distributor() -> None ``` Stops the distributor consumer global background task ### `run_distributor` ```python run_distributor(started: asyncio.Event) -> NoReturn ``` Runs the distributor consumer forever until it is cancelled ## Classes ### `Distributor` **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> dict[str, str] ``` #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> None ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service # triggers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-events-triggers # `prefect.server.events.triggers` The triggers consumer watches events streaming in from the event bus and decides whether to act on them based on the automations that users have set up. ## Functions ### `evaluate` ```python evaluate(session: AsyncSession, trigger: EventTrigger, bucket: 'ORMAutomationBucket', now: prefect.types._datetime.DateTime, triggering_event: Optional[ReceivedEvent]) -> 'ORMAutomationBucket | None' ``` Evaluates an Automation, either triggered by a specific event or proactively on a time interval. Evaluating an Automation updates the associated counters for each automation, and will fire the associated action if it has met the threshold.
### `fire` ```python fire(session: AsyncSession, firing: Firing) -> None ``` ### `evaluate_composite_trigger` ```python evaluate_composite_trigger(session: AsyncSession, firing: Firing) -> None ``` ### `act` ```python act(firing: Firing) -> None ``` Given an Automation that has been triggered, the triggering labels and event (if there was one), publish an action for the `actions` service to process. ### `update_events_clock` ```python update_events_clock(event: ReceivedEvent) -> None ``` ### `get_events_clock` ```python get_events_clock() -> Optional[float] ``` ### `get_events_clock_offset` ```python get_events_clock_offset() -> float ``` Calculate the current clock offset. This takes into account both the `occurred` of the last event, as well as the time we *saw* the last event. This helps to ensure that in low volume environments, we don't end up getting huge offsets. ### `reset_events_clock` ```python reset_events_clock() -> None ``` ### `reactive_evaluation` ```python reactive_evaluation(event: ReceivedEvent, depth: int = 0) -> None ``` Evaluate all automations that may apply to this event. **Args:** * `event`: The event to evaluate. This object contains all the necessary information about the event, including its type, associated resources, and metadata. * `depth`: The current recursion depth. This is used to prevent infinite recursion due to cyclic event dependencies. Defaults to 0 and is incremented with each recursive call. ### `get_lost_followers` ```python get_lost_followers() -> List[ReceivedEvent] ``` Get followers that have been sitting around longer than our lookback ### `periodic_evaluation` ```python periodic_evaluation(now: prefect.types._datetime.DateTime) -> None ``` Periodic tasks that should be run regularly, but not as often as every event ### `evaluate_periodically` ```python evaluate_periodically(periodic_granularity: timedelta) -> None ``` Runs periodic evaluation on the given interval ### `find_interested_triggers` ```python find_interested_triggers(event: ReceivedEvent) -> Collection[EventTrigger] ``` ### `load_automation` ```python load_automation(automation: Optional[Automation]) -> None ``` Loads the given automation into memory so that it is available for evaluations ### `forget_automation` ```python forget_automation(automation_id: UUID) -> None ``` Unloads the given automation from memory ### `automation_changed` ```python automation_changed(automation_id: UUID, event: Literal['automation__created', 'automation__updated', 'automation__deleted']) -> None ``` ### `load_automations` ```python load_automations(db: PrefectDBInterface, session: AsyncSession) ``` Loads all automations for the given set of accounts ### `remove_buckets_exceeding_threshold` ```python remove_buckets_exceeding_threshold(db: PrefectDBInterface, session: AsyncSession, trigger: EventTrigger) ``` Deletes buckets where the count has already exceeded the threshold ### `read_buckets_for_automation` ```python read_buckets_for_automation(db: PrefectDBInterface, session: AsyncSession, trigger: Trigger, batch_size: int = AUTOMATION_BUCKET_BATCH_SIZE) -> AsyncGenerator['ORMAutomationBucket', None] ``` Yields buckets for the given automation and trigger in batches.
### `read_bucket` ```python read_bucket(db: PrefectDBInterface, session: AsyncSession, trigger: Trigger, bucketing_key: Tuple[str, ...]) -> Optional['ORMAutomationBucket'] ``` Gets the bucket this event would fall into for the given Automation, if there is one currently ### `read_bucket_by_trigger_id` ```python read_bucket_by_trigger_id(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID, trigger_id: UUID, bucketing_key: Tuple[str, ...]) -> 'ORMAutomationBucket | None' ``` Gets the bucket this event would fall into for the given Automation, if there is one currently ### `increment_bucket` ```python increment_bucket(db: PrefectDBInterface, session: AsyncSession, bucket: 'ORMAutomationBucket', count: int, last_event: Optional[ReceivedEvent]) -> 'ORMAutomationBucket' ``` Adds the given count to the bucket, returning the new bucket ### `start_new_bucket` ```python start_new_bucket(db: PrefectDBInterface, session: AsyncSession, trigger: EventTrigger, bucketing_key: Tuple[str, ...], start: prefect.types._datetime.DateTime, end: prefect.types._datetime.DateTime, count: int, triggered_at: Optional[prefect.types._datetime.DateTime] = None, last_event: Optional[ReceivedEvent] = None) -> 'ORMAutomationBucket' ``` Ensures that a bucket with the given start and end exists with the given count, returning the new bucket ### `ensure_bucket` ```python ensure_bucket(db: PrefectDBInterface, session: AsyncSession, trigger: EventTrigger, bucketing_key: Tuple[str, ...], start: prefect.types._datetime.DateTime, end: prefect.types._datetime.DateTime, last_event: Optional[ReceivedEvent], initial_count: int = 0) -> 'ORMAutomationBucket' ``` Ensures that a bucket has been started for the given automation and key, returning the current bucket. Will not modify the existing bucket. ### `remove_bucket` ```python remove_bucket(db: PrefectDBInterface, session: AsyncSession, bucket: 'ORMAutomationBucket') ``` Removes the given bucket from the database ### `sweep_closed_buckets` ```python sweep_closed_buckets(db: PrefectDBInterface, session: AsyncSession, older_than: prefect.types._datetime.DateTime) -> None ``` ### `reset` ```python reset() -> None ``` Resets the in-memory state of the service ### `listen_for_automation_changes` ```python listen_for_automation_changes() -> None ``` Listens for any changes to automations via PostgreSQL NOTIFY/LISTEN, and applies those changes to the set of loaded automations. ### `consumer` ```python consumer(periodic_granularity: timedelta = timedelta(seconds=5)) -> AsyncGenerator[MessageHandler, None] ``` The `triggers.consumer` processes all Events arriving on the event bus to determine if they meet the automation criteria, queuing up a corresponding `TriggeredAction` for the `actions` service if the automation criteria is met. ### `proactive_evaluation` ```python proactive_evaluation(trigger: EventTrigger, as_of: prefect.types._datetime.DateTime) -> prefect.types._datetime.DateTime ``` The core proactive evaluation operation for a single Automation ### `evaluate_proactive_triggers` ```python evaluate_proactive_triggers() -> None ``` # exceptions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-exceptions # `prefect.server.exceptions` ## Classes ### `ObjectNotFoundError` Error raised by the Prefect REST API when a requested object is not found. If thrown during a request, this exception will be caught and a 404 response will be returned. 
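For instance, a model-layer helper can surface a missing record by raising this exception during request handling, and the API translates it into a 404 response as described above. The in-memory lookup below is purely hypothetical and stands in for a real database read:

```python
from prefect.server.exceptions import ObjectNotFoundError

# Hypothetical in-memory store standing in for a database lookup.
_WIDGETS = {"abc": "a widget"}


def get_widget_or_404(widget_id: str) -> str:
    widget = _WIDGETS.get(widget_id)
    if widget is None:
        # When raised while serving a request, the API converts this into a 404 response.
        raise ObjectNotFoundError
    return widget


try:
    get_widget_or_404("missing")
except ObjectNotFoundError:
    print("would be returned to the client as a 404")
```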
### `OrchestrationError` An error raised while orchestrating a state transition ### `MissingVariableError` An error raised by the Prefect REST API when attempting to create or update a deployment with missing required variables. ### `FlowRunGraphTooLarge` Raised to indicate that a flow run's graph has more nodes than the configured maximum # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-logs-__init__ # `prefect.server.logs` *This module is empty or contains only private/internal implementations.* # messaging Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-logs-messaging # `prefect.server.logs.messaging` Log messaging for streaming logs through the messaging system. ## Functions ### `create_log_publisher` ```python create_log_publisher() -> AsyncGenerator[messaging.Publisher, None] ``` Creates a publisher for sending logs to the messaging system. **Returns:** * A messaging publisher configured for the "logs" topic ### `publish_logs` ```python publish_logs(logs: list[Log]) -> None ``` Publishes logs to the messaging system. **Args:** * `logs`: The logs to publish # stream Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-logs-stream # `prefect.server.logs.stream` Log streaming for live log distribution via websockets. ## Functions ### `subscribed` ```python subscribed(filter: LogFilter) -> AsyncGenerator['Queue[Log]', None] ``` Subscribe to a stream of logs matching the given filter. **Args:** * `filter`: The log filter to apply ### `logs` ```python logs(filter: LogFilter) -> AsyncGenerator[AsyncIterable[Log | None], None] ``` Create a stream of logs matching the given filter. **Args:** * `filter`: The log filter to apply ### `log_matches_filter` ```python log_matches_filter(log: Log, filter: LogFilter) -> bool ``` Check if a log matches the given filter criteria. **Args:** * `log`: The log to check * `filter`: The filter to apply **Returns:** * True if the log matches the filter, False otherwise ### `distributor` ```python distributor() -> AsyncGenerator[messaging.MessageHandler, None] ``` Create a message handler that distributes logs to subscribed clients. ### `start_distributor` ```python start_distributor() -> None ``` Starts the distributor consumer as a global background task ### `stop_distributor` ```python stop_distributor() -> None ``` Stops the distributor consumer global background task ### `run_distributor` ```python run_distributor(started: asyncio.Event) -> NoReturn ``` Runs the distributor consumer forever until it is cancelled ## Classes ### `LogDistributor` Service for distributing logs to websocket subscribers **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit.
#### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-__init__ # `prefect.server.models` *This module is empty or contains only private/internal implementations.* # artifacts Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-artifacts # `prefect.server.models.artifacts` ## Functions ### `create_artifact` ```python create_artifact(session: AsyncSession, artifact: Artifact) -> orm_models.Artifact ``` ### `read_latest_artifact` ```python read_latest_artifact(db: PrefectDBInterface, session: AsyncSession, key: str) -> Union[orm_models.ArtifactCollection, None] ``` Reads the latest artifact by key. **Args:** * `session`: A database session * `key`: The artifact key **Returns:** * Artifact: The latest artifact ### `read_artifact` ```python read_artifact(db: PrefectDBInterface, session: AsyncSession, artifact_id: UUID) -> Union[orm_models.Artifact, None] ``` Reads an artifact by id. ### `read_artifacts` ```python read_artifacts(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, artifact_filter: Optional[filters.ArtifactFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None, sort: sorting.ArtifactSort = sorting.ArtifactSort.ID_DESC) -> Sequence[orm_models.Artifact] ``` Reads artifacts. **Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit * `artifact_filter`: Only select artifacts matching this filter * `flow_run_filter`: Only select artifacts whose flow runs match this filter * `task_run_filter`: Only select artifacts whose task runs match this filter * `deployment_filter`: Only select artifacts whose flow runs belong to deployments matching this filter * `flow_filter`: Only select artifacts whose flow runs belong to flows matching this filter * `work_pool_filter`: Only select artifacts whose flow runs belong to work pools matching this filter ### `read_latest_artifacts` ```python read_latest_artifacts(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, artifact_filter: Optional[filters.ArtifactCollectionFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None, sort: sorting.ArtifactCollectionSort = sorting.ArtifactCollectionSort.ID_DESC) -> Sequence[orm_models.ArtifactCollection] ``` Reads artifacts.
**Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit * `artifact_filter`: Only select artifacts matching this filter * `flow_run_filter`: Only select artifacts whose flow runs match this filter * `task_run_filter`: Only select artifacts whose task runs match this filter * `deployment_filter`: Only select artifacts whose flow runs belong to deployments matching this filter * `flow_filter`: Only select artifacts whose flow runs belong to flows matching this filter * `work_pool_filter`: Only select artifacts whose flow runs belong to work pools matching this filter ### `count_artifacts` ```python count_artifacts(db: PrefectDBInterface, session: AsyncSession, artifact_filter: Optional[filters.ArtifactFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None) -> int ``` Counts artifacts. **Args:** * `session`: A database session * `artifact_filter`: Only select artifacts matching this filter * `flow_run_filter`: Only select artifacts whose flow runs match this filter * `task_run_filter`: Only select artifacts whose task runs match this filter ### `count_latest_artifacts` ```python count_latest_artifacts(db: PrefectDBInterface, session: AsyncSession, artifact_filter: Optional[filters.ArtifactCollectionFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None) -> int ``` Counts artifacts. **Args:** * `session`: A database session * `artifact_filter`: Only select artifacts matching this filter * `flow_run_filter`: Only select artifacts whose flow runs match this filter * `task_run_filter`: Only select artifacts whose task runs match this filter ### `update_artifact` ```python update_artifact(db: PrefectDBInterface, session: AsyncSession, artifact_id: UUID, artifact: actions.ArtifactUpdate) -> bool ``` Updates an artifact by id. **Args:** * `session`: A database session * `artifact_id`: The artifact id to update * `artifact`: An artifact model **Returns:** * True if the update was successful, False otherwise ### `delete_artifact` ```python delete_artifact(db: PrefectDBInterface, session: AsyncSession, artifact_id: UUID) -> bool ``` Deletes an artifact by id. The ArtifactCollection table is used to track the latest version of an artifact by key. If we are deleting the latest version of an artifact from the Artifact table, we need to first update the latest version referenced in ArtifactCollection so that it points to the next latest version of the artifact. Example: If we have the following artifacts in Artifact: * key: "foo", id: 1, created: 2020-01-01 * key: "foo", id: 2, created: 2020-01-02 * key: "foo", id: 3, created: 2020-01-03 the ArtifactCollection table has the following entry: * key: "foo", latest\_id: 3 If we delete the artifact with id 3, we need to update the latest version of the artifact with key "foo" to be the artifact with id 2. **Args:** * `session`: A database session * `artifact_id`: The artifact id to delete **Returns:** * True if the delete was successful, False otherwise # block_documents Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-block_documents # `prefect.server.models.block_documents` Functions for interacting with block document ORM objects.
Intended for internal use by the Prefect REST API. ## Functions ### `create_block_document` ```python create_block_document(db: PrefectDBInterface, session: AsyncSession, block_document: schemas.actions.BlockDocumentCreate) -> BlockDocument ``` ### `block_document_with_unique_values_exists` ```python block_document_with_unique_values_exists(db: PrefectDBInterface, session: AsyncSession, block_type_id: UUID, name: str) -> bool ``` ### `read_block_document_by_id` ```python read_block_document_by_id(session: AsyncSession, block_document_id: UUID, include_secrets: bool = False) -> Union[BlockDocument, None] ``` ### `read_block_document_by_name` ```python read_block_document_by_name(session: AsyncSession, name: str, block_type_slug: str, include_secrets: bool = False) -> Union[BlockDocument, None] ``` Read a block document with the given name and block type slug. ### `read_block_documents` ```python read_block_documents(db: PrefectDBInterface, session: AsyncSession, block_document_filter: Optional[schemas.filters.BlockDocumentFilter] = None, block_type_filter: Optional[schemas.filters.BlockTypeFilter] = None, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None, include_secrets: bool = False, sort: schemas.sorting.BlockDocumentSort = schemas.sorting.BlockDocumentSort.NAME_ASC, offset: Optional[int] = None, limit: Optional[int] = None) -> List[BlockDocument] ``` Read block documents with an optional limit and offset ### `count_block_documents` ```python count_block_documents(db: PrefectDBInterface, session: AsyncSession, block_document_filter: Optional[schemas.filters.BlockDocumentFilter] = None, block_type_filter: Optional[schemas.filters.BlockTypeFilter] = None, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None) -> int ``` Count block documents that match the filters. ### `delete_block_document` ```python delete_block_document(db: PrefectDBInterface, session: AsyncSession, block_document_id: UUID) -> bool ``` ### `update_block_document` ```python update_block_document(db: PrefectDBInterface, session: AsyncSession, block_document_id: UUID, block_document: schemas.actions.BlockDocumentUpdate) -> bool ``` ### `create_block_document_reference` ```python create_block_document_reference(db: PrefectDBInterface, session: AsyncSession, block_document_reference: schemas.actions.BlockDocumentReferenceCreate) -> Union[orm_models.BlockDocumentReference, None] ``` ### `delete_block_document_reference` ```python delete_block_document_reference(db: PrefectDBInterface, session: AsyncSession, block_document_reference_id: UUID) -> bool ``` # block_registration Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-block_registration # `prefect.server.models.block_registration` ## Functions ### `register_block_schema` ```python register_block_schema(session: AsyncSession, block_schema: Union[schemas.core.BlockSchema, 'ClientBlockSchema']) -> UUID ``` Stores the provided block schema in the Prefect REST API database. If a block schema with a matching checksum and version is already saved, then the ID of the existing block schema will be returned. **Args:** * `session`: A database session. * `block_schema`: A block schema object. **Returns:** * The ID of the registered block schema. ### `register_block_type` ```python register_block_type(session: AsyncSession, block_type: Union[schemas.core.BlockType, 'ClientBlockType']) -> UUID ``` Stores the provided block type in the Prefect REST API database. 
If a block type with a matching slug is already saved, then the block type will be updated to match the passed-in block type. **Args:** * `session`: A database session. * `block_type`: A block type object. **Returns:** * The ID of the registered block type. ### `run_block_auto_registration` ```python run_block_auto_registration(session: AsyncSession) -> None ``` Registers all blocks in the client block registry and any blocks from Prefect Collections that are configured for auto-registration. **Args:** * `session`: A database session. # block_schemas Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-block_schemas # `prefect.server.models.block_schemas` Functions for interacting with block schema ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_block_schema` ```python create_block_schema(db: PrefectDBInterface, session: AsyncSession, block_schema: Union[schemas.actions.BlockSchemaCreate, schemas.core.BlockSchema, 'ClientBlockSchemaCreate', 'ClientBlockSchema'], override: bool = False, definitions: Optional[dict[str, Any]] = None) -> Union[BlockSchema, orm_models.BlockSchema] ``` Create a new block schema. **Args:** * `session`: A database session * `block_schema`: a block schema object * `definitions`: Definitions of fields from block schema fields attribute. Used when recursively creating nested block schemas **Returns:** * an ORM block schema model ### `delete_block_schema` ```python delete_block_schema(db: PrefectDBInterface, session: AsyncSession, block_schema_id: UUID) -> bool ``` Delete a block schema by id. **Args:** * `session`: A database session * `block_schema_id`: a block schema id **Returns:** * whether or not the block schema was deleted ### `read_block_schema` ```python read_block_schema(db: PrefectDBInterface, session: AsyncSession, block_schema_id: UUID) -> Union[BlockSchema, None] ``` Reads a block schema by id. Will reconstruct the block schema's fields attribute to include block schema references. **Args:** * `session`: A database session * `block_schema_id`: a block\_schema id **Returns:** * orm\_models.BlockSchema: the block\_schema ### `read_block_schemas` ```python read_block_schemas(db: PrefectDBInterface, session: AsyncSession, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> List[BlockSchema] ``` Reads block schemas, optionally filtered by type or name. **Args:** * `session`: A database session * `block_schema_filter`: a block schema filter object * `limit`: query limit * `offset`: query offset **Returns:** * List\[orm\_models.BlockSchema]: the block\_schemas ### `read_block_schema_by_checksum` ```python read_block_schema_by_checksum(db: PrefectDBInterface, session: AsyncSession, checksum: str, version: Optional[str] = None) -> Optional[BlockSchema] ``` Reads a block\_schema by checksum. Will reconstruct the block schema's fields attribute to include block schema references. **Args:** * `session`: A database session * `checksum`: a block\_schema checksum * `version`: A block\_schema version **Returns:** * orm\_models.BlockSchema: the block\_schema ### `read_available_block_capabilities` ```python read_available_block_capabilities(db: PrefectDBInterface, session: AsyncSession) -> List[str] ``` Retrieves a list of all available block capabilities. **Args:** * `session`: A database session. **Returns:** * List\[str]: List of all available block capabilities.
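A rough sketch of calling this model function from server-side code follows. The `db` parameter in the signature above is normally injected, so callers typically supply only a session; the way the session is obtained here (`provide_database_interface` plus `session_context`) is an assumed pattern rather than something this page documents.

```python
from prefect.server.database import provide_database_interface
from prefect.server.models.block_schemas import read_available_block_capabilities


async def list_capabilities() -> list[str]:
    # Assumed session-acquisition pattern for server-side model calls.
    db = provide_database_interface()
    async with db.session_context() as session:
        # `db` is injected by the server, so only the session is passed here.
        return await read_available_block_capabilities(session=session)
```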
### `create_block_schema_reference` ```python create_block_schema_reference(db: PrefectDBInterface, session: AsyncSession, block_schema_reference: schemas.core.BlockSchemaReference) -> Union[orm_models.BlockSchemaReference, None] ``` Creates a block schema reference. **Args:** * `session`: A database session. * `block_schema_reference`: A block schema reference object. **Returns:** * orm\_models.BlockSchemaReference: The created BlockSchemaReference ## Classes ### `MissingBlockTypeException` Raised when the block type corresponding to a block schema cannot be found # block_types Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-block_types # `prefect.server.models.block_types` Functions for interacting with block type ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_block_type` ```python create_block_type(db: PrefectDBInterface, session: AsyncSession, block_type: Union[schemas.core.BlockType, 'ClientBlockType'], override: bool = False) -> Union[BlockType, None] ``` Create a new block type. **Args:** * `session`: A database session * `block_type`: a block type object **Returns:** * an ORM block type model ### `read_block_type` ```python read_block_type(db: PrefectDBInterface, session: AsyncSession, block_type_id: UUID) -> Union[BlockType, None] ``` Reads a block type by id. **Args:** * `session`: A database session * `block_type_id`: a block\_type id **Returns:** * an ORM block type model ### `read_block_type_by_slug` ```python read_block_type_by_slug(db: PrefectDBInterface, session: AsyncSession, block_type_slug: str) -> Union[BlockType, None] ``` Reads a block type by slug. **Args:** * `session`: A database session * `block_type_slug`: a block type slug **Returns:** * an ORM block type model ### `read_block_types` ```python read_block_types(db: PrefectDBInterface, session: AsyncSession, block_type_filter: Optional[schemas.filters.BlockTypeFilter] = None, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence[BlockType] ``` Reads block types with an optional limit and offset. **Returns:** * List\[BlockType]: List of block types ### `update_block_type` ```python update_block_type(db: PrefectDBInterface, session: AsyncSession, block_type_id: Union[str, UUID], block_type: Union[schemas.actions.BlockTypeUpdate, schemas.core.BlockType, 'ClientBlockTypeUpdate', 'ClientBlockType']) -> bool ``` Update a block type by id. **Args:** * `session`: A database session * `block_type_id`: A block type id * `block_type`: Data to update the block type with **Returns:** * True if the block type was updated ### `delete_block_type` ```python delete_block_type(db: PrefectDBInterface, session: AsyncSession, block_type_id: str) -> bool ``` Delete a block type by id. **Args:** * `session`: A database session * `block_type_id`: A block type id **Returns:** * True if the block type was deleted # concurrency_limits Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-concurrency_limits # `prefect.server.models.concurrency_limits` Functions for interacting with concurrency limit ORM objects. Intended for internal use by the Prefect REST API.
## Functions ### `create_concurrency_limit` ```python create_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit: schemas.core.ConcurrencyLimit) -> orm_models.ConcurrencyLimit ``` ### `read_concurrency_limit` ```python read_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: UUID) -> Union[orm_models.ConcurrencyLimit, None] ``` Reads a concurrency limit by id. If used for orchestration, simultaneous read race conditions might allow the concurrency limit to be temporarily exceeded. ### `read_concurrency_limit_by_tag` ```python read_concurrency_limit_by_tag(db: PrefectDBInterface, session: AsyncSession, tag: str) -> Union[orm_models.ConcurrencyLimit, None] ``` Reads a concurrency limit by tag. If used for orchestration, simultaneous read race conditions might allow the concurrency limit to be temporarily exceeded. ### `reset_concurrency_limit_by_tag` ```python reset_concurrency_limit_by_tag(db: PrefectDBInterface, session: AsyncSession, tag: str, slot_override: Optional[List[UUID]] = None) -> Union[orm_models.ConcurrencyLimit, None] ``` Resets a concurrency limit by tag. ### `filter_concurrency_limits_for_orchestration` ```python filter_concurrency_limits_for_orchestration(db: PrefectDBInterface, session: AsyncSession, tags: List[str]) -> Sequence[orm_models.ConcurrencyLimit] ``` Filters concurrency limits by tag. This will apply a "select for update" lock on these rows to prevent simultaneous read race conditions from allowing the concurrency limit on these tags to be temporarily exceeded. ### `delete_concurrency_limit` ```python delete_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: UUID) -> bool ``` ### `delete_concurrency_limit_by_tag` ```python delete_concurrency_limit_by_tag(db: PrefectDBInterface, session: AsyncSession, tag: str) -> bool ``` ### `read_concurrency_limits` ```python read_concurrency_limits(db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence[orm_models.ConcurrencyLimit] ``` Reads concurrency limits. If used for orchestration, simultaneous read race conditions might allow the concurrency limit to be temporarily exceeded.
**Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.ConcurrencyLimit]: concurrency limits # concurrency_limits_v2 Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-concurrency_limits_v2 # `prefect.server.models.concurrency_limits_v2` ## Functions ### `active_slots_after_decay` ```python active_slots_after_decay(db: PrefectDBInterface) -> ColumnElement[float] ``` ### `denied_slots_after_decay` ```python denied_slots_after_decay(db: PrefectDBInterface) -> ColumnElement[float] ``` ### `create_concurrency_limit` ```python create_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit: Union[schemas.actions.ConcurrencyLimitV2Create, schemas.core.ConcurrencyLimitV2]) -> orm_models.ConcurrencyLimitV2 ``` ### `read_concurrency_limit` ```python read_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: Optional[UUID] = None, name: Optional[str] = None) -> Union[orm_models.ConcurrencyLimitV2, None] ``` ### `read_all_concurrency_limits` ```python read_all_concurrency_limits(db: PrefectDBInterface, session: AsyncSession, limit: int, offset: int) -> Sequence[orm_models.ConcurrencyLimitV2] ``` ### `update_concurrency_limit` ```python update_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit: schemas.actions.ConcurrencyLimitV2Update, concurrency_limit_id: Optional[UUID] = None, name: Optional[str] = None) -> bool ``` ### `delete_concurrency_limit` ```python delete_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: Optional[UUID] = None, name: Optional[str] = None) -> bool ``` ### `bulk_read_concurrency_limits` ```python bulk_read_concurrency_limits(db: PrefectDBInterface, session: AsyncSession, names: List[str]) -> List[orm_models.ConcurrencyLimitV2] ``` ### `bulk_increment_active_slots` ```python bulk_increment_active_slots(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_ids: List[UUID], slots: int) -> bool ``` ### `bulk_decrement_active_slots` ```python bulk_decrement_active_slots(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_ids: List[UUID], slots: int, occupancy_seconds: Optional[float] = None) -> bool ``` ### `bulk_update_denied_slots` ```python bulk_update_denied_slots(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_ids: List[UUID], slots: int) -> bool ``` # configuration Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-configuration # `prefect.server.models.configuration` ## Functions ### `write_configuration` ```python write_configuration(db: PrefectDBInterface, session: AsyncSession, configuration: schemas.core.Configuration) -> orm_models.Configuration ``` ### `read_configuration` ```python read_configuration(db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[schemas.core.Configuration] ``` # csrf_token Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-csrf_token # `prefect.server.models.csrf_token` ## Functions ### `create_or_update_csrf_token` ```python create_or_update_csrf_token(db: PrefectDBInterface, session: AsyncSession, client: str) -> core.CsrfToken ``` Create or update a CSRF token for a client. If the client already has a token, it will be updated. 
**Args:** * `session`: The database session * `client`: The client identifier **Returns:** * core.CsrfToken: The CSRF token ### `read_token_for_client` ```python read_token_for_client(db: PrefectDBInterface, session: AsyncSession, client: str) -> Optional[core.CsrfToken] ``` Read a CSRF token for a client. **Args:** * `session`: The database session * `client`: The client identifier **Returns:** * Optional\[core.CsrfToken]: The CSRF token, if it exists and is not expired. ### `delete_expired_tokens` ```python delete_expired_tokens(db: PrefectDBInterface, session: AsyncSession) -> int ``` Delete expired CSRF tokens. **Args:** * `session`: The database session **Returns:** * The number of tokens deleted # deployments Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-deployments # `prefect.server.models.deployments` Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_deployment` ```python create_deployment(db: PrefectDBInterface, session: AsyncSession, deployment: schemas.core.Deployment | schemas.actions.DeploymentCreate) -> Optional[orm_models.Deployment] ``` Upserts a deployment. **Args:** * `session`: a database session * `deployment`: a deployment model **Returns:** * orm\_models.Deployment: the newly-created or updated deployment ### `update_deployment` ```python update_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, deployment: schemas.actions.DeploymentUpdate) -> bool ``` Updates a deployment. **Args:** * `session`: a database session * `deployment_id`: the ID of the deployment to modify * `deployment`: changes to a deployment model **Returns:** * whether the deployment was updated ### `read_deployment` ```python read_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> Optional[orm_models.Deployment] ``` Reads a deployment by id. **Args:** * `session`: A database session * `deployment_id`: a deployment id **Returns:** * orm\_models.Deployment: the deployment ### `read_deployment_by_name` ```python read_deployment_by_name(db: PrefectDBInterface, session: AsyncSession, name: str, flow_name: str) -> Optional[orm_models.Deployment] ``` Reads a deployment by name. **Args:** * `session`: A database session * `name`: a deployment name * `flow_name`: the name of the flow the deployment belongs to **Returns:** * orm\_models.Deployment: the deployment ### `read_deployments` ```python read_deployments(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None, sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC) -> Sequence[orm_models.Deployment] ``` Read deployments. 
**Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit * `flow_filter`: only select deployments whose flows match these criteria * `flow_run_filter`: only select deployments whose flow runs match these criteria * `task_run_filter`: only select deployments whose task runs match these criteria * `deployment_filter`: only select deployments that match these filters * `work_pool_filter`: only select deployments whose work pools match these criteria * `work_queue_filter`: only select deployments whose work pool queues match these criteria * `sort`: the sort criteria for selected deployments. Defaults to `name` ASC. **Returns:** * list\[orm\_models.Deployment]: deployments ### `count_deployments` ```python count_deployments(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> int ``` Count deployments. **Args:** * `session`: A database session * `flow_filter`: only count deployments whose flows match these criteria * `flow_run_filter`: only count deployments whose flow runs match these criteria * `task_run_filter`: only count deployments whose task runs match these criteria * `deployment_filter`: only count deployments that match these filters * `work_pool_filter`: only count deployments that match these work pool filters * `work_queue_filter`: only count deployments that match these work pool queue filters **Returns:** * the number of deployments matching filters ### `delete_deployment` ```python delete_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> bool ``` Delete a deployment by id. **Args:** * `session`: A database session * `deployment_id`: a deployment id **Returns:** * whether or not the deployment was deleted ### `schedule_runs` ```python schedule_runs(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, min_time: Optional[datetime.timedelta] = None, min_runs: Optional[int] = None, max_runs: Optional[int] = None, auto_scheduled: bool = True) -> Sequence[UUID] ``` Schedule flow runs for a deployment **Args:** * `session`: a database session * `deployment_id`: the id of the deployment to schedule * `start_time`: the time from which to start scheduling runs * `end_time`: runs will be scheduled until at most this time * `min_time`: runs will be scheduled until at least this far in the future * `min_runs`: a minimum number of runs to schedule * `max_runs`: a maximum number of runs to schedule This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected.
* Runs will be generated starting on or after the `start_time` * No more than `max_runs` runs will be generated * No runs will be generated after `end_time` is reached * At least `min_runs` runs will be generated * Runs will be generated until at least `start_time` + `min_time` is reached **Returns:** * a list of flow run ids scheduled for the deployment ### `check_work_queues_for_deployment` ```python check_work_queues_for_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> Sequence[orm_models.WorkQueue] ``` Get work queues that can pick up the specified deployment. Work queues will pick up a deployment when all of the following are met. * The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags). * The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs. * The work queue's specified flow runners match the deployment's flow runner or the work queue does NOT have a specified flow runner. Notes on the query: * Our database currently allows either "null" and empty lists as null values in filters, so we need to catch both cases with "or". * `A.contains(B)` should be interpreted as "True if A contains B". **Returns:** * List\[orm\_models.WorkQueue]: WorkQueues ### `create_deployment_schedules` ```python create_deployment_schedules(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, schedules: list[schemas.actions.DeploymentScheduleCreate]) -> list[schemas.core.DeploymentSchedule] ``` Creates a deployment's schedules. **Args:** * `session`: A database session * `deployment_id`: a deployment id * `schedules`: a list of deployment schedule create actions ### `read_deployment_schedules` ```python read_deployment_schedules(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, deployment_schedule_filter: Optional[schemas.filters.DeploymentScheduleFilter] = None) -> list[schemas.core.DeploymentSchedule] ``` Reads a deployment's schedules. **Args:** * `session`: A database session * `deployment_id`: a deployment id **Returns:** * list\[schemas.core.DeploymentSchedule]: the deployment's schedules ### `update_deployment_schedule` ```python update_deployment_schedule(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, schedule: schemas.actions.DeploymentScheduleUpdate, deployment_schedule_id: UUID | None = None, deployment_schedule_slug: str | None = None) -> bool ``` Updates a deployment's schedules. **Args:** * `session`: A database session * `deployment_schedule_id`: a deployment schedule id * `schedule`: a deployment schedule update action ### `delete_schedules_for_deployment` ```python delete_schedules_for_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> bool ``` Deletes a deployment schedule. **Args:** * `session`: A database session * `deployment_id`: a deployment id ### `delete_deployment_schedule` ```python delete_deployment_schedule(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, deployment_schedule_id: UUID) -> bool ``` Deletes a deployment schedule. 
**Args:** * `session`: A database session * `deployment_schedule_id`: a deployment schedule id ### `mark_deployments_ready` ```python mark_deployments_ready(db: PrefectDBInterface, deployment_ids: Optional[Iterable[UUID]] = None, work_queue_ids: Optional[Iterable[UUID]] = None) -> None ``` ### `mark_deployments_not_ready` ```python mark_deployments_not_ready(db: PrefectDBInterface, deployment_ids: Optional[Iterable[UUID]] = None, work_queue_ids: Optional[Iterable[UUID]] = None) -> None ``` ### `with_system_labels_for_deployment` ```python with_system_labels_for_deployment(session: AsyncSession, deployment: schemas.core.Deployment) -> schemas.core.KeyValueLabels ``` Augment user supplied labels with system default labels for a deployment. ### `with_system_labels_for_deployment_flow_run` ```python with_system_labels_for_deployment_flow_run(session: AsyncSession, deployment: orm_models.Deployment, user_supplied_labels: Optional[schemas.core.KeyValueLabels] = None) -> schemas.core.KeyValueLabels ``` Generate system labels for a flow run created from a deployment. **Args:** * `session`: Database session * `deployment`: The deployment the flow run is created from * `user_supplied_labels`: Optional user-supplied labels to include **Returns:** * Complete set of labels for the flow run # events Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-events # `prefect.server.models.events` ## Functions ### `flow_run_state_change_event` ```python flow_run_state_change_event(session: AsyncSession, occurred: datetime, flow_run: ORMFlowRun, initial_state_id: Optional[UUID], initial_state: Optional[schemas.states.State], validated_state_id: Optional[UUID], validated_state: schemas.states.State) -> Event ``` ### `state_payload` ```python state_payload(state: Optional[schemas.states.State]) -> Optional[Dict[str, str]] ``` Given a State, return the essential string parts of it for use in an event payload ### `deployment_status_event` ```python deployment_status_event(session: AsyncSession, deployment_id: UUID, status: DeploymentStatus, occurred: DateTime) -> Event ``` ### `work_queue_status_event` ```python work_queue_status_event(session: AsyncSession, work_queue: 'ORMWorkQueue', occurred: DateTime) -> Event ``` ### `work_pool_status_event` ```python work_pool_status_event(event_id: UUID, occurred: DateTime, pre_update_work_pool: Optional['ORMWorkPool'], work_pool: 'ORMWorkPool') -> Event ``` # flow_run_input Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-flow_run_input # `prefect.server.models.flow_run_input` ## Functions ### `create_flow_run_input` ```python create_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_input: schemas.core.FlowRunInput) -> schemas.core.FlowRunInput ``` ### `filter_flow_run_input` ```python filter_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_id: uuid.UUID, prefix: str, limit: int, exclude_keys: List[str]) -> List[schemas.core.FlowRunInput] ``` ### `read_flow_run_input` ```python read_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_id: uuid.UUID, key: str) -> Optional[schemas.core.FlowRunInput] ``` ### `delete_flow_run_input` ```python delete_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_id: uuid.UUID, key: str) -> bool ``` # flow_run_states Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-flow_run_states # `prefect.server.models.flow_run_states` Functions for interacting with flow run state ORM objects. 
Intended for internal use by the Prefect REST API.

## Functions

### `read_flow_run_state`

```python
read_flow_run_state(db: PrefectDBInterface, session: AsyncSession, flow_run_state_id: UUID) -> Union[orm_models.FlowRunState, None]
```

Reads a flow run state by id.

**Args:**

* `session`: A database session
* `flow_run_state_id`: a flow run state id

**Returns:**

* orm\_models.FlowRunState: the flow run state

### `read_flow_run_states`

```python
read_flow_run_states(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID) -> Sequence[orm_models.FlowRunState]
```

Reads flow run states for a flow run.

**Args:**

* `session`: A database session
* `flow_run_id`: the flow run id

**Returns:**

* List\[orm\_models.FlowRunState]: the flow run states

### `delete_flow_run_state`

```python
delete_flow_run_state(db: PrefectDBInterface, session: AsyncSession, flow_run_state_id: UUID) -> bool
```

Delete a flow run state by id.

**Args:**

* `session`: A database session
* `flow_run_state_id`: a flow run state id

**Returns:**

* whether or not the flow run state was deleted

# flow_runs

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-flow_runs

# `prefect.server.models.flow_runs`

Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API.

## Functions

### `create_flow_run`

```python
create_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run: schemas.core.FlowRun, orchestration_parameters: Optional[dict[str, Any]] = None) -> orm_models.FlowRun
```

Creates a new flow run.

If the provided flow run has a state attached, it will also be created.

**Args:**

* `session`: a database session
* `flow_run`: a flow run model

**Returns:**

* orm\_models.FlowRun: the newly-created flow run

### `update_flow_run`

```python
update_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, flow_run: schemas.actions.FlowRunUpdate) -> bool
```

Updates a flow run.

**Args:**

* `session`: a database session
* `flow_run_id`: the flow run id to update
* `flow_run`: a flow run model

**Returns:**

* whether or not matching rows were found to update

### `read_flow_run`

```python
read_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, for_update: bool = False) -> Optional[orm_models.FlowRun]
```

Reads a flow run by id.

**Args:**

* `session`: A database session
* `flow_run_id`: a flow run id

**Returns:**

* orm\_models.FlowRun: the flow run

### `read_flow_runs`

```python
read_flow_runs(db: PrefectDBInterface, session: AsyncSession, columns: Optional[list[str]] = None, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None, sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC) -> Sequence[orm_models.FlowRun]
```

Read flow runs.
**Args:**

* `session`: a database session
* `columns`: a list of the flow run ORM columns to load, for performance
* `flow_filter`: only select flow runs whose flows match these filters
* `flow_run_filter`: only select flow runs that match these filters
* `task_run_filter`: only select flow runs whose task runs match these filters
* `deployment_filter`: only select flow runs whose deployments match these filters
* `offset`: Query offset
* `limit`: Query limit
* `sort`: Query sort

**Returns:**

* List\[orm\_models.FlowRun]: flow runs

### `cleanup_flow_run_concurrency_slots`

```python
cleanup_flow_run_concurrency_slots(session: AsyncSession, flow_run: orm_models.FlowRun) -> None
```

Clean up flow run related resources, such as releasing concurrency slots. All operations should be idempotent and safe to call multiple times.

IMPORTANT: This run may no longer exist in the database when this operation occurs.

### `read_task_run_dependencies`

```python
read_task_run_dependencies(session: AsyncSession, flow_run_id: UUID) -> List[DependencyResult]
```

Get a task run dependency map for a given flow run.

### `count_flow_runs`

```python
count_flow_runs(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> int
```

Count flow runs.

**Args:**

* `session`: a database session
* `flow_filter`: only count flow runs whose flows match these filters
* `flow_run_filter`: only count flow runs that match these filters
* `task_run_filter`: only count flow runs whose task runs match these filters
* `deployment_filter`: only count flow runs whose deployments match these filters

**Returns:**

* count of flow runs

### `delete_flow_run`

```python
delete_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID) -> bool
```

Delete a flow run by flow\_run\_id, handling concurrency limits if applicable.

**Args:**

* `session`: A database session
* `flow_run_id`: a flow run id

**Returns:**

* whether or not the flow run was deleted

### `set_flow_run_state`

```python
set_flow_run_state(session: AsyncSession, flow_run_id: UUID, state: schemas.states.State, force: bool = False, flow_policy: Optional[Type[FlowRunOrchestrationPolicy]] = None, orchestration_parameters: Optional[Dict[str, Any]] = None, client_version: Optional[str] = None) -> OrchestrationResult
```

Creates a new orchestrated flow run state.

Setting a new state on a run is one of the principal actions that is governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead trigger orchestration rules to govern the proposed `state` input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A `force` flag is supplied to bypass a subset of orchestration logic.

**Args:**

* `session`: a database session
* `flow_run_id`: the flow run id
* `state`: a flow run state model
* `force`: if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

**Returns:**

* OrchestrationResult object
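To make the flow of an orchestrated transition concrete, here is a minimal sketch of proposing a new state with `set_flow_run_state`. It assumes an `AsyncSession` is already available from the surrounding server code; the `cancel_flow_run` helper name is illustrative, not part of Prefect.

```python
# Minimal sketch: propose a Cancelling state for a flow run and let the
# orchestration rules decide whether the transition is accepted, rejected,
# delayed, or aborted. `session` is assumed to come from the server's own
# session management.
from uuid import UUID

from prefect.server.models import flow_runs
from prefect.server.schemas import states


async def cancel_flow_run(session, flow_run_id: UUID):
    result = await flow_runs.set_flow_run_state(
        session=session,
        flow_run_id=flow_run_id,
        state=states.Cancelling(),
    )
    # The result records the orchestration outcome and the state (if any)
    # that was actually written to the database.
    return result
```

Because orchestration rules may substitute a different state (or none at all), callers should inspect the returned `OrchestrationResult` rather than assume the proposed state was written.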
### `read_flow_run_graph`

```python
read_flow_run_graph(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: datetime.datetime = earliest_possible_datetime()) -> Graph
```

Given a flow run, return the graph of its task and subflow runs. If a `since` datetime is provided, only return items that may have changed since that time.

### `with_system_labels_for_flow_run`

```python
with_system_labels_for_flow_run(session: AsyncSession, flow_run: Union[schemas.core.FlowRun, schemas.actions.FlowRunCreate]) -> schemas.core.KeyValueLabels
```

Augment user supplied labels with system default labels for a flow run.

### `update_flow_run_labels`

```python
update_flow_run_labels(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, labels: KeyValueLabels) -> bool
```

Update flow run labels by patching existing labels with new values.

**Args:**

* `session`: A database session
* `flow_run_id`: the flow run id to update
* `labels`: the new labels to patch into existing labels

**Returns:**

* bool: whether the update was successful

## Classes

### `DependencyResult`

**Methods:**

#### `model_validate_list`

```python
model_validate_list(cls, obj: Any) -> list[Self]
```

#### `reset_fields`

```python
reset_fields(self: Self) -> Self
```

Reset the fields of the model that are in the `_reset_fields` set.

**Returns:**

* A new instance of the model with the reset fields.

# flows

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-flows

# `prefect.server.models.flows`

Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API.

## Functions

### `create_flow`

```python
create_flow(db: PrefectDBInterface, session: AsyncSession, flow: schemas.core.Flow) -> orm_models.Flow
```

Creates a new flow. If a flow with the same name already exists, the existing flow is returned.

**Args:**

* `session`: a database session
* `flow`: a flow model

**Returns:**

* orm\_models.Flow: the newly-created or existing flow

### `update_flow`

```python
update_flow(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID, flow: schemas.actions.FlowUpdate) -> bool
```

Updates a flow.

**Args:**

* `session`: a database session
* `flow_id`: the flow id to update
* `flow`: a flow update model

**Returns:**

* whether or not matching rows were found to update

### `read_flow`

```python
read_flow(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID) -> Optional[orm_models.Flow]
```

Reads a flow by id.

**Args:**

* `session`: A database session
* `flow_id`: a flow id

**Returns:**

* orm\_models.Flow: the flow

### `read_flow_by_name`

```python
read_flow_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Optional[orm_models.Flow]
```

Reads a flow by name.

**Args:**

* `session`: A database session
* `name`: a flow name

**Returns:**

* orm\_models.Flow: the flow

### `read_flows`

```python
read_flows(db: PrefectDBInterface, session: AsyncSession, flow_filter: Union[schemas.filters.FlowFilter, None] = None, flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None, task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None, deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None, work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None, sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC, offset: Union[int, None] = None, limit: Union[int, None] = None) -> Sequence[orm_models.Flow]
```

Read multiple flows.
**Args:** * `session`: A database session * `flow_filter`: only select flows that match these filters * `flow_run_filter`: only select flows whose flow runs match these filters * `task_run_filter`: only select flows whose task runs match these filters * `deployment_filter`: only select flows whose deployments match these filters * `work_pool_filter`: only select flows whose work pools match these filters * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.Flow]: flows ### `count_flows` ```python count_flows(db: PrefectDBInterface, session: AsyncSession, flow_filter: Union[schemas.filters.FlowFilter, None] = None, flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None, task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None, deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None, work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None) -> int ``` Count flows. **Args:** * `session`: A database session * `flow_filter`: only count flows that match these filters * `flow_run_filter`: only count flows whose flow runs match these filters * `task_run_filter`: only count flows whose task runs match these filters * `deployment_filter`: only count flows whose deployments match these filters * `work_pool_filter`: only count flows whose work pools match these filters **Returns:** * count of flows ### `delete_flow` ```python delete_flow(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID) -> bool ``` Delete a flow by id. **Args:** * `session`: A database session * `flow_id`: a flow id **Returns:** * whether or not the flow was deleted ### `read_flow_labels` ```python read_flow_labels(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID) -> Union[schemas.core.KeyValueLabels, None] ``` # logs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-logs # `prefect.server.models.logs` Functions for interacting with log ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `split_logs_into_batches` ```python split_logs_into_batches(logs: Sequence[schemas.actions.LogCreate]) -> Generator[Tuple[LogCreate, ...], None, None] ``` ### `create_logs` ```python create_logs(db: PrefectDBInterface, session: AsyncSession, logs: Sequence[LogCreate]) -> None ``` Creates new logs **Args:** * `session`: a database session * `logs`: a list of log schemas **Returns:** * None ### `read_logs` ```python read_logs(db: PrefectDBInterface, session: AsyncSession, log_filter: Optional[schemas.filters.LogFilter], offset: Optional[int] = None, limit: Optional[int] = None, sort: schemas.sorting.LogSort = schemas.sorting.LogSort.TIMESTAMP_ASC) -> Sequence[orm_models.Log] ``` Read logs. **Args:** * `session`: a database session * `db`: the database interface * `log_filter`: only select logs that match these filters * `offset`: Query offset * `limit`: Query limit * `sort`: Query sort **Returns:** * List\[orm\_models.Log]: the matching logs ### `delete_logs` ```python delete_logs(db: PrefectDBInterface, session: AsyncSession, log_filter: schemas.filters.LogFilter) -> int ``` # saved_searches Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-saved_searches # `prefect.server.models.saved_searches` Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API. 
## Functions

### `create_saved_search`

```python
create_saved_search(db: PrefectDBInterface, session: AsyncSession, saved_search: schemas.core.SavedSearch) -> orm_models.SavedSearch
```

Upserts a SavedSearch.

If a SavedSearch with the same name exists, all properties will be updated.

**Args:**

* `session`: a database session
* `saved_search`: a SavedSearch model

**Returns:**

* orm\_models.SavedSearch: the newly-created or updated SavedSearch

### `read_saved_search`

```python
read_saved_search(db: PrefectDBInterface, session: AsyncSession, saved_search_id: UUID) -> Union[orm_models.SavedSearch, None]
```

Reads a SavedSearch by id.

**Args:**

* `session`: A database session
* `saved_search_id`: a SavedSearch id

**Returns:**

* orm\_models.SavedSearch: the SavedSearch

### `read_saved_search_by_name`

```python
read_saved_search_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Union[orm_models.SavedSearch, None]
```

Reads a SavedSearch by name.

**Args:**

* `session`: A database session
* `name`: a SavedSearch name

**Returns:**

* orm\_models.SavedSearch: the SavedSearch

### `read_saved_searches`

```python
read_saved_searches(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.SavedSearch]
```

Read SavedSearches.

**Args:**

* `session`: A database session
* `offset`: Query offset
* `limit`: Query limit

**Returns:**

* List\[orm\_models.SavedSearch]: SavedSearches

### `delete_saved_search`

```python
delete_saved_search(db: PrefectDBInterface, session: AsyncSession, saved_search_id: UUID) -> bool
```

Delete a SavedSearch by id.

**Args:**

* `session`: A database session
* `saved_search_id`: a SavedSearch id

**Returns:**

* whether or not the SavedSearch was deleted

# task_run_states

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-task_run_states

# `prefect.server.models.task_run_states`

Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API.

## Functions

### `read_task_run_state`

```python
read_task_run_state(db: PrefectDBInterface, session: AsyncSession, task_run_state_id: UUID) -> Union[orm_models.TaskRunState, None]
```

Reads a task run state by id.

**Args:**

* `session`: A database session
* `task_run_state_id`: a task run state id

**Returns:**

* orm\_models.TaskRunState: the task run state

### `read_task_run_states`

```python
read_task_run_states(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> Sequence[orm_models.TaskRunState]
```

Reads task run states for a task run.

**Args:**

* `session`: A database session
* `task_run_id`: the task run id

**Returns:**

* List\[orm\_models.TaskRunState]: the task run states

### `delete_task_run_state`

```python
delete_task_run_state(db: PrefectDBInterface, session: AsyncSession, task_run_state_id: UUID) -> bool
```

Delete a task run state by id.

**Args:**

* `session`: A database session
* `task_run_state_id`: a task run state id

**Returns:**

* whether or not the task run state was deleted

# task_runs

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-task_runs

# `prefect.server.models.task_runs`

Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API.

## Functions

### `create_task_run`

```python
create_task_run(db: PrefectDBInterface, session: AsyncSession, task_run: schemas.core.TaskRun, orchestration_parameters: Optional[Dict[str, Any]] = None) -> orm_models.TaskRun
```

Creates a new task run.
If a task run with the same flow\_run\_id, task\_key, and dynamic\_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created. **Args:** * `session`: a database session * `task_run`: a task run model **Returns:** * orm\_models.TaskRun: the newly-created or existing task run ### `update_task_run` ```python update_task_run(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID, task_run: schemas.actions.TaskRunUpdate) -> bool ``` Updates a task run. **Args:** * `session`: a database session * `task_run_id`: the task run id to update * `task_run`: a task run model **Returns:** * whether or not matching rows were found to update ### `read_task_run` ```python read_task_run(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> Union[orm_models.TaskRun, None] ``` Read a task run by id. **Args:** * `session`: a database session * `task_run_id`: the task run id **Returns:** * orm\_models.TaskRun: the task run ### `read_task_run_with_flow_run_name` ```python read_task_run_with_flow_run_name(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> Union[orm_models.TaskRun, None] ``` Read a task run by id. **Args:** * `session`: a database session * `task_run_id`: the task run id **Returns:** * orm\_models.TaskRun: the task run with the flow run name ### `read_task_runs` ```python read_task_runs(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None, sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC) -> Sequence[orm_models.TaskRun] ``` Read task runs. **Args:** * `session`: a database session * `flow_filter`: only select task runs whose flows match these filters * `flow_run_filter`: only select task runs whose flow runs match these filters * `task_run_filter`: only select task runs that match these filters * `deployment_filter`: only select task runs whose deployments match these filters * `offset`: Query offset * `limit`: Query limit * `sort`: Query sort **Returns:** * List\[orm\_models.TaskRun]: the task runs ### `count_task_runs` ```python count_task_runs(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None) -> int ``` Count task runs. 
**Args:**

* `session`: a database session
* `flow_filter`: only count task runs whose flows match these filters
* `flow_run_filter`: only count task runs whose flow runs match these filters
* `task_run_filter`: only count task runs that match these filters
* `deployment_filter`: only count task runs whose deployments match these filters

**Returns:**

* count of task runs

### `count_task_runs_by_state`

```python
count_task_runs_by_state(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None) -> schemas.states.CountByState
```

Count task runs by state.

**Args:**

* `session`: a database session
* `flow_filter`: only count task runs whose flows match these filters
* `flow_run_filter`: only count task runs whose flow runs match these filters
* `task_run_filter`: only count task runs that match these filters
* `deployment_filter`: only count task runs whose deployments match these filters

**Returns:**

* schemas.states.CountByState: count of task runs by state

### `delete_task_run`

```python
delete_task_run(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> bool
```

Delete a task run by id.

**Args:**

* `session`: a database session
* `task_run_id`: the task run id to delete

**Returns:**

* whether or not the task run was deleted

### `set_task_run_state`

```python
set_task_run_state(session: AsyncSession, task_run_id: UUID, state: schemas.states.State, force: bool = False, task_policy: Optional[Type[TaskRunOrchestrationPolicy]] = None, orchestration_parameters: Optional[Dict[str, Any]] = None) -> OrchestrationResult
```

Creates a new orchestrated task run state.

Setting a new state on a run is one of the principal actions that is governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead trigger orchestration rules to govern the proposed `state` input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A `force` flag is supplied to bypass a subset of orchestration logic.

**Args:**

* `session`: a database session
* `task_run_id`: the task run id
* `state`: a task run state model
* `force`: if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.

**Returns:**

* OrchestrationResult object

### `with_system_labels_for_task_run`

```python
with_system_labels_for_task_run(session: AsyncSession, task_run: schemas.core.TaskRun) -> schemas.core.KeyValueLabels
```

Augment user supplied labels with system default labels for a task run.
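As a concrete illustration of the behavior described above, the hypothetical sketch below proposes a terminal state for a task run and returns the state that orchestration actually wrote; with retries configured, a proposed `Failed` state may be rejected in favor of a scheduled retry. `session` is assumed to be an `AsyncSession` supplied by the surrounding server code, and `fail_task_run` is an illustrative name rather than a Prefect API.

```python
# Minimal sketch: propose a Failed state for a task run and inspect what
# orchestration decided. The written state can differ from the proposal
# (for example, a retry may be scheduled instead).
from uuid import UUID

from prefect.server.models import task_runs
from prefect.server.schemas import states


async def fail_task_run(session, task_run_id: UUID):
    result = await task_runs.set_task_run_state(
        session=session,
        task_run_id=task_run_id,
        state=states.Failed(message="simulated failure"),
    )
    return result.state
```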
# task_workers

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-task_workers

# `prefect.server.models.task_workers`

## Functions

### `observe_worker`

```python
observe_worker(task_keys: List[TaskKey], worker_id: WorkerId) -> None
```

### `forget_worker`

```python
forget_worker(worker_id: WorkerId) -> None
```

### `get_workers_for_task_keys`

```python
get_workers_for_task_keys(task_keys: List[TaskKey]) -> List[TaskWorkerResponse]
```

### `get_all_workers`

```python
get_all_workers() -> List[TaskWorkerResponse]
```

## Classes

### `TaskWorkerResponse`

### `InMemoryTaskWorkerTracker`

**Methods:**

#### `forget_worker`

```python
forget_worker(self, worker_id: WorkerId) -> None
```

#### `get_all_workers`

```python
get_all_workers(self) -> List[TaskWorkerResponse]
```

#### `get_workers_for_task_keys`

```python
get_workers_for_task_keys(self, task_keys: List[TaskKey]) -> List[TaskWorkerResponse]
```

#### `observe_worker`

```python
observe_worker(self, task_keys: List[TaskKey], worker_id: WorkerId) -> None
```

#### `reset`

```python
reset(self) -> None
```

Testing utility to reset the state of the task worker tracker.

# variables

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-variables

# `prefect.server.models.variables`

## Functions

### `create_variable`

```python
create_variable(db: PrefectDBInterface, session: AsyncSession, variable: VariableCreate) -> orm_models.Variable
```

Create a variable.

**Args:**

* `session`: async database session
* `variable`: variable to create

**Returns:**

* orm\_models.Variable

### `read_variable`

```python
read_variable(db: PrefectDBInterface, session: AsyncSession, variable_id: UUID) -> Optional[orm_models.Variable]
```

Reads a variable by id.

### `read_variable_by_name`

```python
read_variable_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Optional[orm_models.Variable]
```

Reads a variable by name.

### `read_variables`

```python
read_variables(db: PrefectDBInterface, session: AsyncSession, variable_filter: Optional[filters.VariableFilter] = None, sort: sorting.VariableSort = sorting.VariableSort.NAME_ASC, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.Variable]
```

Read variables, applying filters.

### `count_variables`

```python
count_variables(db: PrefectDBInterface, session: AsyncSession, variable_filter: Optional[filters.VariableFilter] = None) -> int
```

Count variables, applying filters.

### `update_variable`

```python
update_variable(db: PrefectDBInterface, session: AsyncSession, variable_id: UUID, variable: VariableUpdate) -> bool
```

Updates a variable by id.

### `update_variable_by_name`

```python
update_variable_by_name(db: PrefectDBInterface, session: AsyncSession, name: str, variable: VariableUpdate) -> bool
```

Updates a variable by name.

### `delete_variable`

```python
delete_variable(db: PrefectDBInterface, session: AsyncSession, variable_id: UUID) -> bool
```

Delete a variable by id.

### `delete_variable_by_name`

```python
delete_variable_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> bool
```

Delete a variable by name.

# work_queues

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-work_queues

# `prefect.server.models.work_queues`

Functions for interacting with work queue ORM objects. Intended for internal use by the Prefect REST API.
## Functions

### `create_work_queue`

```python
create_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue: Union[schemas.core.WorkQueue, schemas.actions.WorkQueueCreate]) -> orm_models.WorkQueue
```

Inserts a WorkQueue. If a WorkQueue with the same name exists, an error will be thrown.

**Args:**

* `session`: a database session
* `work_queue`: a WorkQueue model

**Returns:**

* orm\_models.WorkQueue: the newly-created WorkQueue

### `read_work_queue`

```python
read_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: Union[UUID, PrefectUUID]) -> Optional[orm_models.WorkQueue]
```

Reads a WorkQueue by id.

**Args:**

* `session`: A database session
* `work_queue_id`: a WorkQueue id

**Returns:**

* orm\_models.WorkQueue: the WorkQueue

### `read_work_queue_by_name`

```python
read_work_queue_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Optional[orm_models.WorkQueue]
```

Reads a WorkQueue by name.

**Args:**

* `session`: A database session
* `name`: a WorkQueue name

**Returns:**

* orm\_models.WorkQueue: the WorkQueue

### `read_work_queues`

```python
read_work_queues(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> Sequence[orm_models.WorkQueue]
```

Read WorkQueues.

**Args:**

* `session`: A database session
* `offset`: Query offset
* `limit`: Query limit
* `work_queue_filter`: only select work queues matching these filters

**Returns:**

* Sequence\[orm\_models.WorkQueue]: WorkQueues

### `is_last_polled_recent`

```python
is_last_polled_recent(last_polled: Optional[DateTime]) -> bool
```

### `update_work_queue`

```python
update_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, work_queue: schemas.actions.WorkQueueUpdate, emit_status_change: Optional[Callable[[orm_models.WorkQueue], Awaitable[None]]] = None) -> bool
```

Update a WorkQueue by id.

**Args:**

* `session`: A database session
* `work_queue`: the work queue data
* `work_queue_id`: a WorkQueue id

**Returns:**

* whether or not the WorkQueue was updated

### `delete_work_queue`

```python
delete_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> bool
```

Delete a WorkQueue by id.

**Args:**

* `session`: A database session
* `work_queue_id`: a WorkQueue id

**Returns:**

* whether or not the WorkQueue was deleted

### `get_runs_in_work_queue`

```python
get_runs_in_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, limit: Optional[int] = None, scheduled_before: Optional[datetime.datetime] = None) -> Tuple[orm_models.WorkQueue, Sequence[orm_models.FlowRun]]
```

Get runs from a work queue.

**Args:**

* `session`: A database session
* `work_queue_id`: The work queue id
* `scheduled_before`: Only return runs scheduled to start before this time.
* `limit`: An optional limit for the number of runs to return from the queue. This limit applies to the request only. It does not affect the work queue's concurrency limit. If `limit` exceeds the work queue's concurrency limit, it will be ignored.

### `ensure_work_queue_exists`

```python
ensure_work_queue_exists(session: AsyncSession, name: str) -> orm_models.WorkQueue
```

Checks if a work queue exists and creates it if it does not.

Useful when working with deployments, agents, and flow runs that automatically create work queues.

Will also create a work pool queue in the default agent pool to facilitate migration to work pools.
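As a small example of how these helpers compose, the sketch below ensures a queue exists and then reads its status with `read_work_queue_status` (documented just below). `session` is assumed to be an `AsyncSession` provided by the surrounding server code, and `queue_status` is an illustrative helper name.

```python
# Minimal sketch: make sure a work queue exists, then inspect its status.
# Both helpers take only a session, which the surrounding server code is
# assumed to provide.
from prefect.server.models import work_queues


async def queue_status(session, name: str):
    queue = await work_queues.ensure_work_queue_exists(session=session, name=name)
    return await work_queues.read_work_queue_status(
        session=session, work_queue_id=queue.id
    )
```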
### `read_work_queue_status`

```python
read_work_queue_status(session: AsyncSession, work_queue_id: UUID) -> schemas.core.WorkQueueStatusDetail
```

Get work queue status by id.

**Args:**

* `session`: A database session
* `work_queue_id`: a WorkQueue id

**Returns:**

* Information about the status of the work queue.

### `record_work_queue_polls`

```python
record_work_queue_polls(db: PrefectDBInterface, session: AsyncSession, polled_work_queue_ids: Sequence[UUID], ready_work_queue_ids: Sequence[UUID]) -> None
```

Record that the given work queues were polled, and also update the given ready\_work\_queue\_ids to READY.

### `mark_work_queues_ready`

```python
mark_work_queues_ready(db: PrefectDBInterface, polled_work_queue_ids: Sequence[UUID], ready_work_queue_ids: Sequence[UUID]) -> None
```

### `mark_work_queues_not_ready`

```python
mark_work_queues_not_ready(db: PrefectDBInterface, work_queue_ids: Iterable[UUID]) -> None
```

### `emit_work_queue_status_event`

```python
emit_work_queue_status_event(db: PrefectDBInterface, work_queue: orm_models.WorkQueue) -> None
```

# workers

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-models-workers

# `prefect.server.models.workers`

Functions for interacting with worker ORM objects. Intended for internal use by the Prefect REST API.

## Functions

### `create_work_pool`

```python
create_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool: Union[schemas.core.WorkPool, schemas.actions.WorkPoolCreate]) -> orm_models.WorkPool
```

Creates a work pool. If a WorkPool with the same name exists, an error will be thrown.

**Args:**

* `session`: a database session
* `work_pool`: a WorkPool model

**Returns:**

* orm\_models.WorkPool: the newly-created WorkPool

### `read_work_pool`

```python
read_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> Optional[orm_models.WorkPool]
```

Reads a WorkPool by id.

**Args:**

* `session`: A database session
* `work_pool_id`: a WorkPool id

**Returns:**

* orm\_models.WorkPool: the WorkPool

### `read_work_pool_by_name`

```python
read_work_pool_by_name(db: PrefectDBInterface, session: AsyncSession, work_pool_name: str) -> Optional[orm_models.WorkPool]
```

Reads a WorkPool by name.

**Args:**

* `session`: A database session
* `work_pool_name`: a WorkPool name

**Returns:**

* orm\_models.WorkPool: the WorkPool

### `read_work_pools`

```python
read_work_pools(db: PrefectDBInterface, session: AsyncSession, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.WorkPool]
```

Read work pools.

**Args:**

* `session`: A database session
* `offset`: Query offset
* `limit`: Query limit

**Returns:**

* List\[orm\_models.WorkPool]: work pools

### `count_work_pools`

```python
count_work_pools(db: PrefectDBInterface, session: AsyncSession, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None) -> int
```

Count work pools.

**Args:**

* `session`: A database session
* `work_pool_filter`: filter criteria to apply to the count

**Returns:**

* int: the count of work pools matching the criteria

### `update_work_pool`

```python
update_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_pool: schemas.actions.WorkPoolUpdate, emit_status_change: Optional[Callable[[UUID, DateTime, orm_models.WorkPool, orm_models.WorkPool], Awaitable[None]]] = None) -> bool
```

Update a WorkPool by id.
**Args:**

* `session`: A database session
* `work_pool_id`: a WorkPool id
* `work_pool`: the work pool data
* `emit_status_change`: function to call when work pool status is changed

**Returns:**

* whether or not the WorkPool was updated

### `delete_work_pool`

```python
delete_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> bool
```

Delete a WorkPool by id.

**Args:**

* `session`: A database session
* `work_pool_id`: a work pool id

**Returns:**

* whether or not the WorkPool was deleted

### `get_scheduled_flow_runs`

```python
get_scheduled_flow_runs(db: PrefectDBInterface, session: AsyncSession, work_pool_ids: Optional[List[UUID]] = None, work_queue_ids: Optional[List[UUID]] = None, scheduled_before: Optional[datetime.datetime] = None, scheduled_after: Optional[datetime.datetime] = None, limit: Optional[int] = None, respect_queue_priorities: Optional[bool] = None) -> Sequence[schemas.responses.WorkerFlowRunResponse]
```

Get runs from queues in a specific work pool.

**Args:**

* `session`: a database session
* `work_pool_ids`: a list of work pool ids
* `work_queue_ids`: a list of work pool queue ids
* `scheduled_before`: a datetime to filter runs scheduled before
* `scheduled_after`: a datetime to filter runs scheduled after
* `respect_queue_priorities`: whether or not to respect queue priorities
* `limit`: the maximum number of runs to return
* `db`: a database interface

**Returns:**

* List\[WorkerFlowRunResponse]: the runs, as well as related work pool details

### `create_work_queue`

```python
create_work_queue(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_queue: schemas.actions.WorkQueueCreate) -> orm_models.WorkQueue
```

Creates a work pool queue.

**Args:**

* `session`: a database session
* `work_pool_id`: a work pool id
* `work_queue`: a WorkQueue action model

**Returns:**

* orm\_models.WorkQueue: the newly-created WorkQueue

### `bulk_update_work_queue_priorities`

```python
bulk_update_work_queue_priorities(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, new_priorities: Dict[UUID, int]) -> None
```

This is a brute force update of all work pool queue priorities for a given work pool. It loads all queues fully into memory, sorts them, and flushes the updates to the database.

The algorithm ensures that priorities are unique integers > 0, and makes the minimum number of changes required to satisfy the provided `new_priorities`. For example, if no queues currently have the provided `new_priorities`, then they are assigned without affecting other queues. If they are held by other queues, then those queues' priorities are incremented as necessary.

Updating queue priorities is not a common operation (it happens on the same scale as queue modification, which is significantly less frequent than reading from queues), so while this implementation is slow, it may suffice and make up for that with extreme simplicity.

### `read_work_queues`

```python
read_work_queues(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.WorkQueue]
```

Read all work pool queues for a work pool. Results are ordered by ascending priority.
**Args:** * `session`: a database session * `work_pool_id`: a work pool id * `work_queue_filter`: Filter criteria for work pool queues * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.WorkQueue]: the WorkQueues ### `read_work_queue` ```python read_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: Union[UUID, PrefectUUID]) -> Optional[orm_models.WorkQueue] ``` Read a specific work pool queue. **Args:** * `session`: a database session * `work_queue_id`: a work pool queue id **Returns:** * orm\_models.WorkQueue: the WorkQueue ### `read_work_queue_by_name` ```python read_work_queue_by_name(db: PrefectDBInterface, session: AsyncSession, work_pool_name: str, work_queue_name: str) -> Optional[orm_models.WorkQueue] ``` Reads a WorkQueue by name. **Args:** * `session`: A database session * `work_pool_name`: a WorkPool name * `work_queue_name`: a WorkQueue name **Returns:** * orm\_models.WorkQueue: the WorkQueue ### `update_work_queue` ```python update_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, work_queue: schemas.actions.WorkQueueUpdate, emit_status_change: Optional[Callable[[orm_models.WorkQueue], Awaitable[None]]] = None, default_status: WorkQueueStatus = WorkQueueStatus.NOT_READY) -> bool ``` Update a work pool queue. **Args:** * `session`: a database session * `work_queue_id`: a work pool queue ID * `work_queue`: a WorkQueue model * `emit_status_change`: function to call when work queue status is changed **Returns:** * whether or not the WorkQueue was updated ### `delete_work_queue` ```python delete_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> bool ``` Delete a work pool queue. **Args:** * `session`: a database session * `work_queue_id`: a work pool queue ID **Returns:** * whether or not the WorkQueue was deleted ### `read_workers` ```python read_workers(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, worker_filter: Optional[schemas.filters.WorkerFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence[orm_models.Worker] ``` ### `worker_heartbeat` ```python worker_heartbeat(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, worker_name: str, heartbeat_interval_seconds: Optional[int] = None) -> bool ``` Record a worker process heartbeat. **Args:** * `session`: a database session * `work_pool_id`: a work pool ID * `worker_name`: a worker name **Returns:** * whether or not the worker was updated ### `delete_worker` ```python delete_worker(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, worker_name: str) -> bool ``` Delete a work pool's worker. **Args:** * `session`: a database session * `work_pool_id`: a work pool ID * `worker_name`: a worker name **Returns:** * whether or not the Worker was deleted ### `emit_work_pool_status_event` ```python emit_work_pool_status_event(event_id: UUID, occurred: DateTime, pre_update_work_pool: Optional[orm_models.WorkPool], work_pool: orm_models.WorkPool) -> None ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-__init__ # `prefect.server.orchestration` *This module is empty or contains only private/internal implementations.* # core_policy Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-core_policy # `prefect.server.orchestration.core_policy` Orchestration logic that fires on state transitions. 
`CoreFlowPolicy` and `CoreTaskPolicy` contain all default orchestration rules that Prefect enforces on a state transition. ## Classes ### `CoreFlowPolicy` Orchestration rules that run against flow-run-state transitions in priority order. **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `CoreTaskPolicy` Orchestration rules that run against task-run-state transitions in priority order. **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `ClientSideTaskOrchestrationPolicy` Orchestration rules that run against task-run-state transitions in priority order, specifically for clients doing client-side orchestration. **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `BackgroundTaskPolicy` Orchestration rules that run against task-run-state transitions in priority order. **Methods:** #### `priority` ```python priority() -> list[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]] | type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]] ``` ### `MinimalFlowPolicy` **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `MarkLateRunsPolicy` **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `MinimalTaskPolicy` **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `SecureTaskConcurrencySlots` Checks relevant concurrency slots are available before entering a Running state. This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for the duration specified by the "PREFECT\_TASK\_RUN\_TAG\_CONCURRENCY\_SLOT\_WAIT\_SECONDS" setting before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` #### `cleanup` ```python cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `ReleaseTaskConcurrencySlots` Releases any concurrency slots held by a run upon exiting a Running or Cancelling state. 
**Methods:** #### `after_transition` ```python after_transition(self, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `SecureFlowConcurrencySlots` Enforce deployment concurrency limits. This rule enforces concurrency limits on deployments. If a deployment has a concurrency limit, this rule will prevent more than that number of flow runs from being submitted concurrently based on the concurrency limit behavior configured for the deployment. We use the PENDING state as the target transition because this allows workers to secure a slot before provisioning dynamic infrastructure to run a flow. If a slot isn't available, the worker won't provision infrastructure. A lease is created for the concurrency limit. The client will be responsible for maintaining the lease. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: FlowOrchestrationContext) -> None ``` #### `cleanup` ```python cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: FlowOrchestrationContext) -> None ``` ### `CopyDeploymentConcurrencyLeaseID` Copies the deployment concurrency lease ID to the proposed state. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `RemoveDeploymentConcurrencyLeaseForOldClientVersions` Removes a deployment concurrency lease if the client version is less than the minimum version for leasing. **Methods:** #### `after_transition` ```python after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `ReleaseFlowConcurrencySlots` Releases deployment concurrency slots held by a flow run. This rule releases a concurrency slot for a deployment when a flow run transitions out of the Running or Cancelling state. **Methods:** #### `after_transition` ```python after_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `CacheInsertion` Caches completed states with cache keys after they are validated. **Methods:** #### `after_transition` ```python after_transition(self, db: PrefectDBInterface, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `CacheRetrieval` Rejects running states if a completed state has been cached. This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead. 
**Methods:** #### `before_transition` ```python before_transition(self, db: PrefectDBInterface, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `RetryFailedFlows` Rejects failed states and schedules a retry if the retry limit has not been reached. This rule rejects transitions into a failed state if `retries` has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `RetryFailedTasks` Rejects failed states and schedules a retry if the retry limit has not been reached. This rule rejects transitions into a failed state if `retries` has been set, the run count has not reached the specified limit, and the client asserts it is a retriable task run. The client will be instructed to transition into a scheduled state to retry task execution. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `EnqueueScheduledTasks` Enqueues background task runs when they are scheduled **Methods:** #### `after_transition` ```python after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `RenameReruns` Name the states if they have run more than once. In the special case where the initial state is an "AwaitingRetry" scheduled state, the proposed state will be renamed to "Retrying" instead. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, core.TaskRunPolicy | core.FlowRunPolicy]) -> None ``` ### `CopyScheduledTime` Ensures scheduled time is copied from scheduled states to pending states. If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, core.TaskRunPolicy | core.FlowRunPolicy]) -> None ``` ### `WaitForScheduledTime` Prevents transitions to running states from happening too early. This rule enforces that all scheduled states will only start with the machine clock used by the Prefect REST API instance. This rule will identify transitions from scheduled states that are too early and nullify them. Instead, no state will be written to the database and the client will be sent an instruction to wait for `delay_seconds` before attempting the transition again. 
**Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, core.TaskRunPolicy | core.FlowRunPolicy]) -> None ``` ### `CopyTaskParametersID` Ensures a task's parameters ID is copied from Scheduled to Pending and from Pending to Running states. If a parameters ID has been included on the proposed state, the parameters ID on the initial state will be ignored. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `HandlePausingFlows` Governs runs attempting to enter a Paused/Suspended state **Methods:** #### `after_transition` ```python after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `HandleResumingPausedFlows` Governs runs attempting to leave a Paused state **Methods:** #### `after_transition` ```python after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `UpdateFlowRunTrackerOnTasks` Tracks the flow run attempt a task run state is associated with. **Methods:** #### `after_transition` ```python after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `HandleTaskTerminalStateTransitions` We do not allow tasks to leave terminal states if: * The task is completed and has a persisted result * The task is going to CANCELLING / PAUSED / CRASHED We reset the run count when a task leaves a terminal state for a non-terminal state which resets task run retries; this is particularly relevant for flow run retries. **Methods:** #### `before_transition` ```python before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` #### `cleanup` ```python cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `HandleFlowTerminalStateTransitions` We do not allow flows to leave terminal states if: * The flow is completed and has a persisted result * The flow is going to CANCELLING / PAUSED / CRASHED * The flow is going to scheduled and has no deployment We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries. 
**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None
```

#### `cleanup`

```python
cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None
```

### `PreventPendingTransitions`

Prevents transitions to PENDING.

This rule is only used for flow runs.

This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run.

Similarly, if the execution environment starts quickly the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state a worker should not attempt to submit it again unless it has moved into a terminal state.

CANCELLING and CANCELLED runs should not be allowed to transition to PENDING.

For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING.

**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, Union[core.FlowRunPolicy, core.TaskRunPolicy]]) -> None
```

### `EnsureOnlyScheduledFlowsMarkedLate`

**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None
```

### `PreventRunningTasksFromStoppedFlows`

Prevents running tasks from stopped flows.

A running state implies execution, but also the converse. This rule ensures that a flow's tasks cannot be run unless the flow is also running.

**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None
```

### `EnforceCancellingToCancelledTransition`

Rejects transitions from Cancelling to any terminal state except for Cancelled.

**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None
```

### `BypassCancellingFlowRunsWithNoInfra`

Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID. Also rejects transitions from Paused to Cancelling if the Paused state's details indicate the flow run has been suspended, exiting the flow and tearing down infra.

The `Cancelling` state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to `Cancelled`. Runs that are `Resuming` are in a `Scheduled` state, were previously `Suspended`, and do not yet have infrastructure. Runs that are `AwaitingRetry` are in a `Scheduled` state and may have associated infrastructure.
**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None
```

### `PreventDuplicateTransitions`

Prevent duplicate transitions from being made right after one another.

This rule allows clients to set an optional transition\_id on a state. If the run's next transition has the same transition\_id, the transition will be rejected and the existing state will be returned.

This allows clients to make state transition requests without worrying about the following case:

* A client making a state transition request
* The server accepts the transition and commits it
* The client is unable to receive the response and retries the request

**Methods:**

#### `before_transition`

```python
before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None
```

# dependencies

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-dependencies

# `prefect.server.orchestration.dependencies`

Injected orchestration dependencies.

## Functions

### `provide_task_policy`

```python
provide_task_policy() -> type[TaskRunOrchestrationPolicy]
```

### `provide_flow_policy`

```python
provide_flow_policy() -> type[FlowRunOrchestrationPolicy]
```

### `provide_task_orchestration_parameters`

```python
provide_task_orchestration_parameters() -> dict[str, Any]
```

### `provide_flow_orchestration_parameters`

```python
provide_flow_orchestration_parameters() -> dict[str, Any]
```

### `temporary_task_policy`

```python
temporary_task_policy(tmp_task_policy: type[TaskRunOrchestrationPolicy])
```

### `temporary_flow_policy`

```python
temporary_flow_policy(tmp_flow_policy: type[FlowRunOrchestrationPolicy])
```

### `temporary_task_orchestration_parameters`

```python
temporary_task_orchestration_parameters(tmp_orchestration_parameters: dict[str, Any])
```

### `temporary_flow_orchestration_parameters`

```python
temporary_flow_orchestration_parameters(tmp_orchestration_parameters: dict[str, Any])
```

## Classes

### `OrchestrationDependencies`

# global_policy

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-global_policy

# `prefect.server.orchestration.global_policy`

Bookkeeping logic that fires on every state transition.

For clarity, `GlobalFlowPolicy` and `GlobalTaskPolicy` contain all transition logic implemented using `BaseUniversalTransform`. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop.

## Classes

### `GlobalFlowPolicy`

Global transforms that run against flow-run-state transitions in priority order.

These transforms are intended to run immediately before and after a state transition is validated.
**Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `GlobalTaskPolicy` Global transforms that run against task-run-state transitions in priority order. These transforms are intended to run immediately before and after a state transition is validated. **Methods:** #### `priority` ```python priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `SetRunStateType` Updates the state type of a run on a state transition. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetRunStateName` Updates the state name of a run on a state transition. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetStartTime` Records the time a run enters a running state for the first time. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetRunStateTimestamp` Records the time a run changes states. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetEndTime` Records the time a run enters a terminal state. With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. While leaving a terminal state, the end time will be unset. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `IncrementRunTime` Records the amount of time a run spends in the running state. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `IncrementFlowRunCount` Records the number of times a run enters a running state. For use with retries. **Methods:** #### `before_transition` ```python before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `RemoveResumingIndicator` Removes the indicator on a flow run that marks it as resuming. **Methods:** #### `before_transition` ```python before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `IncrementTaskRunCount` Records the number of times a run enters a running state. For use with retries. **Methods:** #### `before_transition` ```python before_transition(self, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `SetExpectedStartTime` Estimates the time a state is expected to start running if not set. For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetNextScheduledStartTime` Records the scheduled time on a run. When a run enters a scheduled state, `run.next_scheduled_start_time` is set to the state's scheduled time. When leaving a scheduled state, `run.next_scheduled_start_time` is unset. 
**Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `UpdateSubflowParentTask` Whenever a subflow changes state, it must update its parent task run's state. **Methods:** #### `after_transition` ```python after_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `UpdateSubflowStateDetails` Update a child subflow state's references to a corresponding tracking task run id in the parent flow run. **Methods:** #### `before_transition` ```python before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `UpdateStateDetails` Update a state's references to a corresponding flow- or task- run. **Methods:** #### `before_transition` ```python before_transition(self, context: GenericOrchestrationContext) -> None ``` # instrumentation_policies Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-instrumentation_policies # `prefect.server.orchestration.instrumentation_policies` Orchestration rules related to instrumenting the orchestration engine for Prefect Observability ## Classes ### `InstrumentFlowRunStateTransitions` When a Flow Run changes states, fire a Prefect Event for the state change. **Methods:** #### `after_transition` ```python after_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` # policies Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-policies # `prefect.server.orchestration.policies` Policies are collections of orchestration rules and transforms. Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic, both to provide an ordering mechanism and to provide observability into the orchestration process. While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition. ## Classes ### `BaseOrchestrationPolicy` An abstract base class used to organize orchestration rules in priority order. Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic. **Methods:** #### `compile_transition_rules` ```python compile_transition_rules(cls, from_state: states.StateType | None = None, to_state: states.StateType | None = None) -> list[type[BaseUniversalTransform[T, RP] | BaseOrchestrationRule[T, RP]]] ``` Returns rules in the policy that are valid for the specified state transition. #### `priority` ```python priority() -> list[type[BaseUniversalTransform[T, RP] | BaseOrchestrationRule[T, RP]]] ``` A list of orchestration rules in priority order. ### `TaskRunOrchestrationPolicy` ### `FlowRunOrchestrationPolicy` ### `GenericOrchestrationPolicy` # rules Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-orchestration-rules # `prefect.server.orchestration.rules` Prefect's flow and task-run orchestration machinery. This module contains all the core concepts necessary to implement Prefect's state orchestration engine.
These states correspond to intuitive descriptions of all the points that a Prefect flow or task can observe executing user code and intervene, if necessary. A detailed description of states can be found in our concept [documentation](https://docs.prefect.io/v3/concepts/states). Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. With all attempts to run user code being checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code. ## Classes ### `OrchestrationContext` A container for a state transition, governed by orchestration rules. When a flow- or task- run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an `OrchestrationContext`, which is subsequently governed by nested orchestration rules implemented using the `BaseOrchestrationRule` ABC. `OrchestrationContext` introduces the concept of a state being `None` in the context of an intended state transition. An initial state can be `None` if a run is attempting to set a state for the first time. The proposed state might be `None` if a rule governing the transition determines that no state change should occur at all and nothing is written to the database. **Args:** * `session`: a SQLAlchemy database session * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into **Methods:** #### `entry_context` ```python entry_context(self) -> tuple[Optional[states.State], Optional[states.State], Self] ``` A convenience method that generates input parameters for orchestration rules. An `OrchestrationContext` defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method. #### `exit_context` ```python exit_context(self) -> tuple[Optional[states.State], Optional[states.State], Self] ``` A convenience method that generates input parameters for orchestration rules. An `OrchestrationContext` defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method. #### `flow_run` ```python flow_run(self) -> orm_models.FlowRun | None ``` #### `initial_state_type` ```python initial_state_type(self) -> Optional[states.StateType] ``` The state type of `self.initial_state` if it exists. #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `proposed_state_type` ```python proposed_state_type(self) -> Optional[states.StateType] ``` The state type of `self.proposed_state` if it exists. #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
#### `run_settings` ```python run_settings(self) -> RP ``` Run-level settings used to orchestrate the state transition. #### `safe_copy` ```python safe_copy(self) -> Self ``` Creates a mostly-mutation-safe copy for use in orchestration rules. Orchestration rules govern state transitions using information stored in an `OrchestrationContext`. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, `self.safe_copy` can be used to pass information to orchestration rules without risking mutation. **Returns:** * A mutation-safe copy of the `OrchestrationContext` #### `validated_state_type` ```python validated_state_type(self) -> Optional[states.StateType] ``` The state type of `self.validated_state` if it exists. ### `FlowOrchestrationContext` A container for a flow run state transition, governed by orchestration rules. When a flow- run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an `OrchestrationContext`, which is subsequently governed by nested orchestration rules implemented using the `BaseOrchestrationRule` ABC. `FlowOrchestrationContext` introduces the concept of a state being `None` in the context of an intended state transition. An initial state can be `None` if a run is attempting to set a state for the first time. The proposed state might be `None` if a rule governing the transition determines that no state change should occur at all and nothing is written to the database. **Args:** * `session`: a SQLAlchemy database session * `run`: the flow run attempting to change state * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into **Methods:** #### `flow_run` ```python flow_run(self) -> orm_models.FlowRun ``` #### `run_settings` ```python run_settings(self) -> core.FlowRunPolicy ``` Run-level settings used to orchestrate the state transition. #### `safe_copy` ```python safe_copy(self) -> Self ``` Creates a mostly-mutation-safe copy for use in orchestration rules. Orchestration rules govern state transitions using information stored in an `OrchestrationContext`. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, `self.safe_copy` can be used to pass information to orchestration rules without risking mutation. **Returns:** * A mutation-safe copy of `FlowOrchestrationContext` #### `task_run` ```python task_run(self) -> None ``` #### `validate_proposed_state` ```python validate_proposed_state(self, db: PrefectDBInterface) ``` Validates a proposed state by committing it to the database. After the `FlowOrchestrationContext` is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. `self.validated_state` is set to the flushed state. The state on the run is set to the validated state as well. If the proposed state is `None` when this method is called, no state will be written and `self.validated_state` will be set to the run's current state. **Returns:** * None ### `TaskOrchestrationContext` A container for a task run state transition, governed by orchestration rules. When a task- run attempts to change state, the Prefect REST API has an opportunity to decide whether this transition can proceed.
All the relevant information associated with the state transition is stored in an `OrchestrationContext`, which is subsequently governed by nested orchestration rules implemented using the `BaseOrchestrationRule` ABC. `TaskOrchestrationContext` introduces the concept of a state being `None` in the context of an intended state transition. An initial state can be `None` if a run is attempting to set a state for the first time. The proposed state might be `None` if a rule governing the transition determines that no state change should occur at all and nothing is written to the database. **Args:** * `session`: a SQLAlchemy database session * `run`: the task run attempting to change state * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into **Methods:** #### `flow_run` ```python flow_run(self) -> orm_models.FlowRun | None ``` #### `run_settings` ```python run_settings(self) -> core.TaskRunPolicy ``` Run-level settings used to orchestrate the state transition. #### `safe_copy` ```python safe_copy(self) -> Self ``` Creates a mostly-mutation-safe copy for use in orchestration rules. Orchestration rules govern state transitions using information stored in an `OrchestrationContext`. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, `self.safe_copy` can be used to pass information to orchestration rules without risking mutation. **Returns:** * A mutation-safe copy of `TaskOrchestrationContext` #### `task_run` ```python task_run(self) -> orm_models.TaskRun ``` #### `validate_proposed_state` ```python validate_proposed_state(self, db: PrefectDBInterface) ``` Validates a proposed state by committing it to the database. After the `TaskOrchestrationContext` is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. `self.validated_state` is set to the flushed state. The state on the run is set to the validated state as well. If the proposed state is `None` when this method is called, no state will be written and `self.validated_state` will be set to the run's current state. **Returns:** * None ### `BaseOrchestrationRule` An abstract base class used to implement a discrete piece of orchestration logic. An `OrchestrationRule` is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an `OrchestrationContext` that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered "valid" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect. A state transition occurs whenever a flow- or task- run changes state, prompting the Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the "initial state", and the state a run is attempting to transition into is the "proposed state". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules.
After using rules to enter a runtime context, the `OrchestrationContext` will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database. The validated state will be set on the context as `context.validated_state`, and rules will call the `self.after_transition` hook upon exiting the managed context. Examples: Create a rule:

```python
class BasicRule(BaseOrchestrationRule):
    # allowed initial state types
    FROM_STATES = [StateType.RUNNING]
    # allowed proposed state types
    TO_STATES = [StateType.COMPLETED, StateType.FAILED]

    async def before_transition(self, initial_state, proposed_state, ctx):
        # side effects and proposed state mutation can happen here
        ...

    async def after_transition(self, initial_state, validated_state, ctx):
        # operations on states that have been validated can happen here
        ...

    async def cleanup(self, initial_state, validated_state, ctx):
        # reverts side effects generated by `before_transition` if necessary
        ...
```

Use a rule:

```python
intended_transition = (StateType.RUNNING, StateType.COMPLETED)
async with BasicRule(context, *intended_transition):
    # context.proposed_state has been governed by BasicRule
    ...
```

Use multiple rules:

```python
import contextlib

rules = [BasicRule, BasicRule]
intended_transition = (StateType.RUNNING, StateType.COMPLETED)
async with contextlib.AsyncExitStack() as stack:
    for rule in rules:
        await stack.enter_async_context(rule(context, *intended_transition))

    # context.proposed_state has been governed by all rules
    ...
```

**Args:** * `context`: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is passed between rules * `from_state_type`: The state type of the initial state of a run; if this state type is not contained in `FROM_STATES`, no hooks will fire * `to_state_type`: The state type of the proposed state before orchestration; if this state type is not contained in `TO_STATES`, no hooks will fire **Methods:** #### `abort_transition` ```python abort_transition(self, reason: str) -> None ``` Aborts a proposed transition before the transition is validated. This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to `None`, signaling to the `OrchestrationContext` that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing. **Args:** * `reason`: The reason for aborting the transition #### `after_transition` ```python after_transition(self, initial_state: Optional[states.State], validated_state: Optional[states.State], context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire after a state is committed to the database. **Args:** * `initial_state`: The initial state of a transition * `validated_state`: The governed state that has been committed to the database * `context`: A safe copy of the `OrchestrationContext`; with the exception of `context.run`, mutating this context will have no effect on the broader orchestration environment. **Returns:** * None #### `before_transition` ```python before_transition(self, initial_state: Optional[states.State], proposed_state: Optional[states.State], context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire before a state is committed to the database.
This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: `self.reject_transition`, `self.delay_transition`, `self.abort_transition`, and `self.rename_state`. **Args:** * `initial_state`: The initial state of a transition * `proposed_state`: The proposed state of a transition * `context`: A safe copy of the `OrchestrationContext`; with the exception of `context.run`, mutating this context will have no effect on the broader orchestration environment. **Returns:** * None #### `cleanup` ```python cleanup(self, initial_state: Optional[states.State], validated_state: Optional[states.State], context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire after a state is committed to the database. The intended use of this method is to revert side-effects produced by `self.before_transition` when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition. **Args:** * `initial_state`: The initial state of a transition * `validated_state`: The governed state that has been committed to the database * `context`: A safe copy of the `OrchestrationContext`; with the exception of `context.run`, mutating this context will have no effect on the broader orchestration environment. **Returns:** * None #### `delay_transition` ```python delay_transition(self, delay_seconds: int, reason: str) -> None ``` Delays a proposed transition before the transition is validated. This method will delay a proposed transition, setting the proposed state to `None`, signaling to the `OrchestrationContext` that no state should be written to the database. The number of seconds a transition should be delayed is passed to the `OrchestrationContext`. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing. **Args:** * `delay_seconds`: The number of seconds the transition should be delayed * `reason`: The reason for delaying the transition #### `fizzled` ```python fizzled(self) -> bool ``` Determines if a rule is fizzled and side-effects need to be reverted. Rules are fizzled if the transition was valid on entry (thus firing `self.before_transition`) but is invalid upon exiting the governed context, most likely caused by another rule mutating the transition. **Returns:** * True if the rule is fizzled, False otherwise. #### `invalid` ```python invalid(self) -> bool ``` Determines if a rule is invalid. Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition state types are not contained in `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing a transition that differs from the transition the rule was instantiated with. **Returns:** * True if the rule is invalid, False otherwise. #### `invalid_transition` ```python invalid_transition(self) -> bool ``` Determines if the transition proposed by the `OrchestrationContext` is invalid. If the `OrchestrationContext` is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either "invalid" or "fizzled". **Returns:** * True if the transition is invalid, False otherwise.
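As an illustration of the mutation helpers above, here is a minimal, hypothetical sketch of a rule that uses `delay_transition`. `HoldBeforeRunning` is not a built-in rule, and the sketch assumes the helper methods are awaitable, as they are in Prefect's built-in rules:

```python
from typing import Optional

from prefect.server.orchestration.rules import BaseOrchestrationRule, OrchestrationContext
from prefect.server.schemas import states


class HoldBeforeRunning(BaseOrchestrationRule):
    """Hypothetical rule: ask the client to retry PENDING -> RUNNING after 10 seconds."""

    # hooks only fire for transitions whose state types match these lists
    FROM_STATES = [states.StateType.PENDING]
    TO_STATES = [states.StateType.RUNNING]

    async def before_transition(
        self,
        initial_state: Optional[states.State],
        proposed_state: Optional[states.State],
        context: OrchestrationContext,
    ) -> None:
        # sets the proposed state to `None` and reports a retry delay back to the client
        await self.delay_transition(delay_seconds=10, reason="Holding the run briefly")
```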
#### `reject_transition` ```python reject_transition(self, state: Optional[states.State], reason: str) -> None ``` Rejects a proposed transition before the transition is validated. This method will reject a proposed transition, mutating the proposed state to the provided `state`. A reason for rejecting the transition is also passed on to the `OrchestrationContext`. Rules that reject the transition will not fizzle, despite the proposed state type changing. **Args:** * `state`: The new proposed state. If `None`, the current run state will be returned in the result instead. * `reason`: The reason for rejecting the transition #### `rename_state` ```python rename_state(self, state_name: str) -> None ``` Sets the "name" attribute on a proposed state. The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition. #### `update_context_parameters` ```python update_context_parameters(self, key: str, value: Any) -> None ``` Updates the "parameters" dictionary attribute with the specified key-value pair. This mechanism streamlines the process of passing messages and information between orchestration rules if necessary and is simpler and more ephemeral than message-passing via the database or some other side-effect. This mechanism can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent and the order in which they appear in the orchestration policy priority will matter. ### `FlowRunOrchestrationRule` ### `TaskRunOrchestrationRule` ### `GenericOrchestrationRule` ### `BaseUniversalTransform` An abstract base class used to implement privileged bookkeeping logic. Unlike the orchestration rules implemented with the `BaseOrchestrationRule` ABC, universal transforms are not stateful, and fire their before- and after- transition hooks on every state transition unless the proposed state is `None`, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should only be used with care. **Args:** * `context`: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is passed between transforms **Methods:** #### `after_transition` ```python after_transition(self, context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire after a state is committed to the database. **Args:** * `context`: the `OrchestrationContext` that contains transition details **Returns:** * None #### `before_transition` ```python before_transition(self, context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that fires before a state is committed to the database. **Args:** * `context`: the `OrchestrationContext` that contains transition details **Returns:** * None #### `exception_in_transition` ```python exception_in_transition(self) -> bool ``` Determines if the transition has encountered an exception. **Returns:** * True if the transition has encountered an exception, False otherwise. #### `nullified_transition` ```python nullified_transition(self) -> bool ``` Determines if the transition has been nullified. Transitions are nullified if the proposed state is `None`, indicating that nothing should be written to the database.
**Returns:** * True if the transition is nullified, False otherwise. ### `TaskRunUniversalTransform` ### `FlowRunUniversalTransform` ### `GenericUniversalTransform` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-__init__ # `prefect.server.schemas` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-actions # `prefect.server.schemas.actions` Reduced schemas for accepting API actions. ## Functions ### `validate_base_job_template` ```python validate_base_job_template(v: dict[str, Any]) -> dict[str, Any] ``` ## Classes ### `ActionBaseModel` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowCreate` Data used by the Prefect REST API to create a flow. ### `FlowUpdate` Data used by the Prefect REST API to update a flow. ### `DeploymentScheduleCreate` **Methods:** #### `validate_max_scheduled_runs` ```python validate_max_scheduled_runs(cls, v: PositiveInteger | None) -> PositiveInteger | None ``` ### `DeploymentScheduleUpdate` **Methods:** #### `validate_max_scheduled_runs` ```python validate_max_scheduled_runs(cls, v: PositiveInteger | None) -> PositiveInteger | None ``` ### `DeploymentCreate` Data used by the Prefect REST API to create a deployment. **Methods:** #### `check_valid_configuration` ```python check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the specified schema. NOTE: This method does not hydrate block references in default values within the base job template to validate them. Failing to do this can cause user-facing errors. Instead of this method, use `validate_job_variables_for_deployment` function from `prefect_cloud.orion.api.validation`. #### `remove_old_fields` ```python remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `DeploymentUpdate` Data used by the Prefect REST API to update a deployment. **Methods:** #### `check_valid_configuration` ```python check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the schema specified in the base\_job\_template. NOTE: This method does not hydrate block references in default values within the base job template to validate them. Failing to do this can cause user-facing errors. Instead of this method, use `validate_job_variables_for_deployment` function from `prefect_cloud.orion.api.validation`. #### `remove_old_fields` ```python remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `FlowRunUpdate` Data used by the Prefect REST API to update a flow run. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` ### `StateCreate` Data used by the Prefect REST API to create a new state. 
**Methods:** #### `default_name_from_type` ```python default_name_from_type(self) ``` If a name is not provided, use the type. #### `default_scheduled_start_time` ```python default_scheduled_start_time(self) ``` ### `TaskRunCreate` Data used by the Prefect REST API to create a task run. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` #### `validate_cache_key` ```python validate_cache_key(cls, cache_key: str | None) -> str | None ``` ### `TaskRunUpdate` Data used by the Prefect REST API to update a task run. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` ### `FlowRunCreate` Data used by the Prefect REST API to create a flow run. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` ### `DeploymentFlowRunCreate` Data used by the Prefect REST API to create a flow run from a deployment. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` ### `SavedSearchCreate` Data used by the Prefect REST API to create a saved search. ### `ConcurrencyLimitCreate` Data used by the Prefect REST API to create a concurrency limit. ### `ConcurrencyLimitV2Create` Data used by the Prefect REST API to create a v2 concurrency limit. ### `ConcurrencyLimitV2Update` Data used by the Prefect REST API to update a v2 concurrency limit. ### `BlockTypeCreate` Data used by the Prefect REST API to create a block type. ### `BlockTypeUpdate` Data used by the Prefect REST API to update a block type. **Methods:** #### `updatable_fields` ```python updatable_fields(cls) -> set[str] ``` ### `BlockSchemaCreate` Data used by the Prefect REST API to create a block schema. ### `BlockDocumentCreate` Data used by the Prefect REST API to create a block document. **Methods:** #### `validate_name_is_present_if_not_anonymous` ```python validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `BlockDocumentUpdate` Data used by the Prefect REST API to update a block document. ### `BlockDocumentReferenceCreate` Data used to create a block document reference. **Methods:** #### `validate_parent_and_ref_are_different` ```python validate_parent_and_ref_are_different(cls, values) ``` ### `LogCreate` Data used by the Prefect REST API to create a log. ### `WorkPoolCreate` Data used by the Prefect REST API to create a work pool. ### `WorkPoolUpdate` Data used by the Prefect REST API to update a work pool. ### `WorkQueueCreate` Data used by the Prefect REST API to create a work queue. ### `WorkQueueUpdate` Data used by the Prefect REST API to update a work queue. ### `ArtifactCreate` Data used by the Prefect REST API to create an artifact. **Methods:** #### `from_result` ```python from_result(cls, data: Any | dict[str, Any]) -> 'ArtifactCreate' ``` ### `ArtifactUpdate` Data used by the Prefect REST API to update an artifact. ### `VariableCreate` Data used by the Prefect REST API to create a Variable. ### `VariableUpdate` Data used by the Prefect REST API to update a Variable. # core Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-core # `prefect.server.schemas.core` Full schemas of Prefect REST API objects. ## Classes ### `Flow` An ORM representation of flow data. ### `FlowRunPolicy` Defines how a flow run should retry.
**Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python populate_deprecated_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `CreatedBy` ### `UpdatedBy` ### `ConcurrencyLimitStrategy` Enumeration of concurrency collision strategies. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`. ### `ConcurrencyOptions` Class for storing the concurrency config in the database. ### `FlowRun` An ORM representation of flow run data. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` ### `TaskRunPolicy` Defines how a task run should retry. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python populate_deprecated_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_configured_retry_delays` ```python validate_configured_retry_delays(cls, v: int | float | list[int] | list[float] | None) -> int | float | list[int] | list[float] | None ``` #### `validate_jitter_factor` ```python validate_jitter_factor(cls, v: float | None) -> float | None ``` ### `RunInput` Base class for classes that represent inputs to runs, which could include constants, parameters, task runs, or flow runs. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunResult` Represents a task run result input to another task run. ### `FlowRunResult` ### `Parameter` Represents a parameter input to a task run. ### `Constant` Represents a constant input value to a task run. ### `TaskRun` An ORM representation of task run data. **Methods:** #### `set_name` ```python set_name(cls, name: str) -> str ``` #### `validate_cache_key` ```python validate_cache_key(cls, cache_key: str) -> str ``` ### `DeploymentSchedule` **Methods:** #### `validate_max_scheduled_runs` ```python validate_max_scheduled_runs(cls, v: int) -> int ``` ### `VersionInfo` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Deployment` An ORM representation of deployment data. ### `ConcurrencyLimit` An ORM representation of a concurrency limit. ### `ConcurrencyLimitV2` An ORM representation of a v2 concurrency limit. ### `BlockType` An ORM representation of a block type. ### `BlockSchema` An ORM representation of a block schema. ### `BlockSchemaReference` An ORM representation of a block schema reference. ### `BlockDocument` An ORM representation of a block document.
**Methods:** #### `from_orm_model` ```python from_orm_model(cls: type[Self], session: AsyncSession, orm_block_document: 'orm_models.ORMBlockDocument', include_secrets: bool = False) -> Self ``` #### `validate_name_is_present_if_not_anonymous` ```python validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `BlockDocumentReference` An ORM representation of a block document reference. **Methods:** #### `validate_parent_and_ref_are_different` ```python validate_parent_and_ref_are_different(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `Configuration` An ORM representation of account info. ### `SavedSearchFilter` A filter for a saved search model. Intended for use by the Prefect UI. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `SavedSearch` An ORM representation of saved search data. Represents a set of filter criteria. ### `Log` An ORM representation of log data. ### `QueueFilter` Filter criteria definition for a work queue. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueue` An ORM representation of a work queue. ### `WorkQueueHealthPolicy` **Methods:** #### `evaluate_health_status` ```python evaluate_health_status(self, late_runs_count: int, last_polled: Optional[DateTime] = None) -> bool ``` Given empirical information about the state of the work queue, evaluate its health status. **Args:** * `late_runs_count`: the count of late runs for the work queue. * `last_polled`: the last time the work queue was polled, if available. **Returns:** * whether or not the work queue is healthy. #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueStatusDetail` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Agent` An ORM representation of an agent. ### `WorkPoolStorageConfiguration` A representation of a work pool's storage configuration. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
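As an illustration of the `WorkQueueHealthPolicy.evaluate_health_status` method documented earlier in this module, here is a minimal sketch; the `maximum_late_runs` and `maximum_seconds_since_last_polled` field names are assumptions, since the policy's fields are not listed in this reference:

```python
from datetime import datetime, timezone

from prefect.server.schemas.core import WorkQueueHealthPolicy

# Assumed field names: consider the queue unhealthy if any runs are late or
# it has not been polled within the last five minutes.
policy = WorkQueueHealthPolicy(
    maximum_late_runs=0,
    maximum_seconds_since_last_polled=300,
)

is_healthy = policy.evaluate_health_status(
    late_runs_count=0,
    last_polled=datetime.now(timezone.utc),
)
print(is_healthy)  # True when both thresholds are satisfied
```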
### `WorkPool` An ORM representation of a work pool. **Methods:** #### `helpful_error_for_missing_default_queue_id` ```python helpful_error_for_missing_default_queue_id(cls, v: UUID | None) -> UUID ``` #### `model_validate` ```python model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `Worker` An ORM representation of a worker. ### `Artifact` **Methods:** #### `from_result` ```python from_result(cls, data: Any | dict[str, Any]) -> 'Artifact' ``` #### `validate_metadata_length` ```python validate_metadata_length(cls, v: dict[str, str]) -> dict[str, str] ``` ### `ArtifactCollection` ### `Variable` ### `FlowRunInput` ### `CsrfToken` # filters Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-filters # `prefect.server.schemas.filters` Schemas that define Prefect REST API filtering operations. Each filter schema includes logic for transforming itself into a SQL `where` clause. ## Classes ### `Operator` Operators for combining filter criteria. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum`. ### `PrefectFilterBaseModel` Base model for Prefect filters. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `PrefectOperatorFilterBaseModel` Base model for Prefect filters that combines criteria with a user-provided operator. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowFilterId` Filter by `Flow.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowFilterDeployment` Filter flows by deployment. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowFilterName` Filter by `Flow.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowFilterTags` Filter by `Flow.tags`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowFilter` Filter for flows. Only flows matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterId` Filter by `FlowRun.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter.
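To show how these filter schemas compose, here is a minimal sketch that combines flow-run criteria into the composite `FlowRunFilter` documented later in this module; the nested field names (`state`, `tags`, `type`, `any_`, `all_`) are assumptions, since field listings are omitted from this reference:

```python
from prefect.server.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
    FlowRunFilterTags,
)
from prefect.server.schemas.states import StateType

# Assumed field names: failed or crashed flow runs tagged "prod".
flow_run_filter = FlowRunFilter(
    state=FlowRunFilterState(
        type=FlowRunFilterStateType(any_=[StateType.FAILED, StateType.CRASHED])
    ),
    tags=FlowRunFilterTags(all_=["prod"]),
)

# On the server, `flow_run_filter.as_sql_filter(db)` renders this as a single
# SQLAlchemy boolean clause suitable for a WHERE condition.
```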
### `FlowRunFilterName` Filter by `FlowRun.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterTags` Filter by `FlowRun.tags`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterDeploymentId` Filter by `FlowRun.deployment_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterWorkQueueName` Filter by `FlowRun.work_queue_name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterStateType` Filter by `FlowRun.state_type`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterStateName` Filter by `FlowRun.state_name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterState` Filter by `FlowRun.state_type` and `FlowRun.state_name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterFlowVersion` Filter by `FlowRun.flow_version`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterStartTime` Filter by `FlowRun.start_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterEndTime` Filter by `FlowRun.end_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterExpectedStartTime` Filter by `FlowRun.expected_start_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterNextScheduledStartTime` Filter by `FlowRun.next_scheduled_start_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilterParentFlowRunId` Filter for subflows of a given flow run **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterParentTaskRunId` Filter by `FlowRun.parent_task_run_id`. 
**Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FlowRunFilterIdempotencyKey` Filter by `FlowRun.idempotency_key`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `FlowRunFilter` Filter flow runs. Only flow runs matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` #### `only_filters_on_id` ```python only_filters_on_id(self) -> bool ``` ### `TaskRunFilterFlowRunId` Filter by `TaskRun.flow_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `TaskRunFilterId` Filter by `TaskRun.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilterName` Filter by `TaskRun.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilterTags` Filter by `TaskRun.tags`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `TaskRunFilterStateType` Filter by `TaskRun.state_type`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilterStateName` Filter by `TaskRun.state_name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilterState` Filter by `TaskRun.state_type` and `TaskRun.state_name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `TaskRunFilterSubFlowRuns` Filter by `TaskRun.subflow_run`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilterStartTime` Filter by `TaskRun.start_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilterExpectedStartTime` Filter by `TaskRun.expected_start_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `TaskRunFilter` Filter task runs.
Only task runs matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `DeploymentFilterId` Filter by `Deployment.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentFilterName` Filter by `Deployment.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentOrFlowNameFilter` Filter by `Deployment.name` or `Flow.name` with a single input string for ilike filtering. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentFilterPaused` Filter by `Deployment.paused`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentFilterWorkQueueName` Filter by `Deployment.work_queue_name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentFilterConcurrencyLimit` DEPRECATED: Prefer `Deployment.concurrency_limit_id` over `Deployment.concurrency_limit`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentFilterTags` Filter by `Deployment.tags`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `DeploymentFilter` Filter for deployments. Only deployments matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `DeploymentScheduleFilterActive` Filter by `DeploymentSchedule.active`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `DeploymentScheduleFilter` Filter for deployment schedules. Only deployment schedules matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `LogFilterName` Filter by `Log.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `LogFilterLevel` Filter by `Log.level`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters.
If no filters parameters are available, return a TRUE filter. ### `LogFilterTimestamp` Filter by `Log.timestamp`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `LogFilterFlowRunId` Filter by `Log.flow_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `LogFilterTaskRunId` Filter by `Log.task_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `LogFilterTextSearch` Filter by text search across log content. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. #### `includes` ```python includes(self, log: 'Log') -> bool ``` Check if this text filter includes the given log. ### `LogFilter` Filter logs. Only logs matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `FilterSet` A collection of filters for common objects **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilterName` Filter by `BlockType.name` **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `BlockTypeFilterSlug` Filter by `BlockType.slug` **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `BlockTypeFilter` Filter BlockTypes **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `BlockSchemaFilterBlockTypeId` Filter by `BlockSchema.block_type_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `BlockSchemaFilterId` Filter by BlockSchema.id **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. 
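The Log filters documented above compose the same way as the run filters; a minimal sketch, again assuming field names (`level`, `flow_run_id`, `ge_`, `any_`) that are not listed in this reference:

```python
import logging
from uuid import UUID

from prefect.server.schemas.filters import (
    LogFilter,
    LogFilterFlowRunId,
    LogFilterLevel,
)

# Assumed field names: warning-or-higher logs for a single flow run.
log_filter = LogFilter(
    level=LogFilterLevel(ge_=logging.WARNING),
    flow_run_id=LogFilterFlowRunId(
        any_=[UUID("00000000-0000-0000-0000-000000000000")]
    ),
)

# As with the other filters, `log_filter.as_sql_filter(db)` yields the
# corresponding SQLAlchemy clause.
```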
### `BlockSchemaFilterCapabilities` Filter by `BlockSchema.capabilities` **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilterVersion` Filter by `BlockSchema.version` **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilter` Filter BlockSchemas **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `BlockDocumentFilterIsAnonymous` Filter by `BlockDocument.is_anonymous`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilterBlockTypeId` Filter by `BlockDocument.block_type_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilterId` Filter by `BlockDocument.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilterName` Filter by `BlockDocument.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilter` Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `WorkQueueFilterId` Filter by `WorkQueue.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkQueueFilterName` Filter by `WorkQueue.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkQueueFilter` Filter work queues. Only work queues matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `WorkPoolFilterId` Filter by `WorkPool.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkPoolFilterName` Filter by `WorkPool.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters.
If no filter parameters are available, return a TRUE filter. ### `WorkPoolFilterType` Filter by `WorkPool.type`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkPoolFilter` Filter work pools. Only work pools matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `WorkerFilterWorkPoolId` Filter by `Worker.worker_config_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkerFilterStatus` Filter by `Worker.status`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkerFilterLastHeartbeatTime` Filter by `Worker.last_heartbeat_time`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkerFilter` Filter workers. Only workers matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `ArtifactFilterId` Filter by `Artifact.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterKey` Filter by `Artifact.key`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterFlowRunId` Filter by `Artifact.flow_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterTaskRunId` Filter by `Artifact.task_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterType` Filter by `Artifact.type`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilter` Filter artifacts. Only artifacts matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `ArtifactCollectionFilterLatestId` Filter by `ArtifactCollection.latest_id`.
**Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterKey` Filter by `ArtifactCollection.key`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterFlowRunId` Filter by `ArtifactCollection.flow_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterTaskRunId` Filter by `ArtifactCollection.task_run_id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterType` Filter by `ArtifactCollection.type`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `ArtifactCollectionFilter` Filter artifact collections. Only artifact collections matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `VariableFilterId` Filter by `Variable.id`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `VariableFilterName` Filter by `Variable.name`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filters parameters are available, return a TRUE filter. ### `VariableFilterTags` Filter by `Variable.tags`. **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` ### `VariableFilter` Filter variables. Only variables matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python as_sql_filter(self, db: 'PrefectDBInterface') -> sa.ColumnElement[bool] ``` # graph Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-graph # `prefect.server.schemas.graph` ## Classes ### `GraphState` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `GraphArtifact` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
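All of the graph schemas share the `model_validate_list` and `reset_fields` helpers shown above. Below is a minimal sketch using the `Edge` schema (documented just below); the payload shape, a bare `id` field, is an assumption made for illustration.

```python
from uuid import uuid4

from prefect.server.schemas.graph import Edge

# `model_validate_list` validates a list of raw payloads into model
# instances in one call. The `{"id": ...}` shape is an assumption here.
raw_edges = [{"id": str(uuid4())}, {"id": str(uuid4())}]
edges = Edge.model_validate_list(raw_edges)
assert all(isinstance(edge, Edge) for edge in edges)

# `reset_fields` returns a copy with any fields listed in `_reset_fields`
# restored to their defaults.
fresh = edges[0].reset_fields()
```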
### `Edge` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Node` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Graph` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # internal Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-internal # `prefect.server.schemas.internal` Schemas for *internal* use within the Prefect server, but that would not be appropriate for use on the API itself. ## Classes ### `InternalWorkPoolUpdate` # responses Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-responses # `prefect.server.schemas.responses` Schemas for special responses from the Prefect REST API. ## Classes ### `SetStateStatus` Enumerates return statuses for setting run states. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StateAcceptDetails` Details associated with an ACCEPT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateRejectDetails` Details associated with a REJECT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateAbortDetails` Details associated with an ABORT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateWaitDetails` Details associated with a WAIT state transition. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponseState` Represents a single state's history over an interval. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
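Orchestration responses report their outcome through `SetStateStatus`, with the transition detail classes above carrying the context for each outcome. A small sketch follows; the specific member names are implied by the detail classes (accept, reject, abort, wait) but should be treated as assumptions here.

```python
from prefect.server.schemas.responses import SetStateStatus

# SetStateStatus is an AutoEnum; iterate its members rather than
# hard-coding names.
for status in SetStateStatus:
    print(status.name, status.value)

# Typical check against an orchestration result (member name assumed):
#   if result.status == SetStateStatus.ACCEPT:
#       ...
```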
### `HistoryResponse` Represents a history of aggregation states over an interval **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_timestamps` ```python validate_timestamps(cls, values: dict) -> dict ``` ### `OrchestrationResult` A container for the output of state orchestration. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFlowRunResponse` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunResponse` **Methods:** #### `model_validate` ```python model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `TaskRunResponse` ### `DeploymentResponse` **Methods:** #### `model_validate` ```python model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `WorkQueueResponse` **Methods:** #### `model_validate` ```python model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `WorkQueueWithStatus` Combines a work queue and its status details into a single object **Methods:** #### `model_validate` ```python model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `WorkerResponse` **Methods:** #### `model_validate` ```python model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `GlobalConcurrencyLimitResponse` A response object for global concurrency limits. ### `FlowPaginationResponse` ### `FlowRunPaginationResponse` ### `TaskRunPaginationResponse` ### `DeploymentPaginationResponse` ### `SchemaValuePropertyError` ### `SchemaValueIndexError` ### `SchemaValuesValidationResponse` # schedules Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-schedules # `prefect.server.schemas.schedules` Schedule schemas ## Classes ### `IntervalSchedule` A schedule formed by adding `interval` increments to an `anchor_date`. If no `anchor_date` is supplied, the current UTC time is used. If a timezone-naive datetime is provided for `anchor_date`, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a `timezone` can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date. NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that *appear* to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. 
This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone. **Args:** * `interval`: an interval to schedule on. * `anchor_date`: an anchor date to schedule increments against; if not provided, the current timestamp will be used. * `timezone`: a valid timezone string. **Methods:** #### `get_dates` ```python get_dates(self, n: Optional[int] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None) -> List[DateTime] ``` Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date. **Args:** * `n`: The number of dates to generate * `start`: The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. * `end`: The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. **Returns:** * List\[DateTime]: A list of dates #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_timezone` ```python validate_timezone(self) ``` ### `CronSchedule` Cron schedule NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire *the first time* 1am is reached and *the first time* 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST. **Args:** * `cron`: a valid cron string * `timezone`: a valid timezone string in IANA tzdata format (for example, America/New\_York). * `day_or`: Control how croniter handles `day` and `day_of_week` entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to e.g. define a job that executes each 2nd friday of a month by setting the days of month and the weekday. **Methods:** #### `get_dates` ```python get_dates(self, n: Optional[int] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None) -> List[DateTime] ``` Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date. **Args:** * `n`: The number of dates to generate * `start`: The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. * `end`: The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. **Returns:** * List\[DateTime]: A list of dates #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
#### `valid_cron_string` ```python valid_cron_string(cls, v: str) -> str ``` #### `validate_timezone` ```python validate_timezone(self) ``` ### `RRuleSchedule` RRule schedule, based on the iCalendar standard ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as implemented in `dateutil.rrule`. RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more. Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time. **Args:** * `rrule`: a valid RRule string * `timezone`: a valid timezone string **Methods:** #### `from_rrule` ```python from_rrule(cls, rrule: dateutil.rrule.rrule | dateutil.rrule.rruleset) -> 'RRuleSchedule' ``` #### `get_dates` ```python get_dates(self, n: Optional[int] = None, start: datetime.datetime = None, end: datetime.datetime = None) -> List[DateTime] ``` Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date. **Args:** * `n`: The number of dates to generate * `start`: The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. * `end`: The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. **Returns:** * List\[DateTime]: A list of dates #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `to_rrule` ```python to_rrule(self) -> dateutil.rrule.rrule ``` Since rrule doesn't properly serialize/deserialize timezones, we localize dates here #### `validate_rrule_str` ```python validate_rrule_str(cls, v: str) -> str ``` # sorting Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-sorting # `prefect.server.schemas.sorting` Schemas for sorting Prefect REST API objects. ## Classes ### `FlowRunSort` Defines flow run sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort flow runs #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TaskRunSort` Defines task run sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort task runs #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `LogSort` Defines log sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort logs #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `FlowSort` Defines flow sorting options.
**Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort flows #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentSort` Defines deployment sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort deployments #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactSort` Defines artifact sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort artifacts #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactCollectionSort` Defines artifact collection sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort artifact collections #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `VariableSort` Defines variables sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort variables #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `BlockDocumentSort` Defines block document sorting options. **Methods:** #### `as_sql_sort` ```python as_sql_sort(self, db: 'PrefectDBInterface') -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort block documents #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # states Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-states # `prefect.server.schemas.states` State schemas. ## Functions ### `Scheduled` ```python Scheduled(scheduled_time: Optional[DateTime] = None, cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Scheduled` states. **Returns:** * a Scheduled state ### `Completed` ```python Completed(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Completed` states. **Returns:** * a Completed state ### `Running` ```python Running(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Running` states. **Returns:** * a Running state ### `Failed` ```python Failed(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Failed` states. **Returns:** * a Failed state ### `Crashed` ```python Crashed(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Crashed` states. **Returns:** * a Crashed state ### `Cancelling` ```python Cancelling(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Cancelling` states. **Returns:** * a Cancelling state ### `Cancelled` ```python Cancelled(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Cancelled` states.
**Returns:** * a Cancelled state ### `Pending` ```python Pending(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Pending` states. **Returns:** * a Pending state ### `Paused` ```python Paused(cls: type[_State] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[DateTime] = None, reschedule: bool = False, pause_key: Optional[str] = None, **kwargs: Any) -> _State ``` Convenience function for creating `Paused` states. **Returns:** * a Paused state ### `Suspended` ```python Suspended(cls: type[_State] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[DateTime] = None, pause_key: Optional[str] = None, **kwargs: Any) -> _State ``` Convenience function for creating `Suspended` states. **Returns:** * a Suspended state ### `AwaitingRetry` ```python AwaitingRetry(cls: type[_State] = State, scheduled_time: Optional[DateTime] = None, **kwargs: Any) -> _State ``` Convenience function for creating `AwaitingRetry` states. **Returns:** * an AwaitingRetry state ### `AwaitingConcurrencySlot` ```python AwaitingConcurrencySlot(cls: type[_State] = State, scheduled_time: Optional[DateTime] = None, **kwargs: Any) -> _State ``` Convenience function for creating `AwaitingConcurrencySlot` states. **Returns:** * an AwaitingConcurrencySlot state ### `Retrying` ```python Retrying(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Retrying` states. **Returns:** * a Retrying state ### `Late` ```python Late(cls: type[_State] = State, scheduled_time: Optional[DateTime] = None, **kwargs: Any) -> _State ``` Convenience function for creating `Late` states. **Returns:** * a Late state ## Classes ### `StateType` Enumeration of state types. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `CountByState` **Methods:** #### `check_key` ```python check_key(cls, value: Optional[Any], info: ValidationInfo) -> Optional[Any] ``` #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateDetails` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateBaseModel` **Methods:** #### `orm_dict` ```python orm_dict(self, *args: Any, **kwargs: Any) -> dict[str, Any] ``` This method is used as a convenience method for constructing fixtues by first building a `State` schema object and converting it into an ORM-compatible format. Because the `data` field is not writable on ORM states, this method omits the `data` field entirely for the purposes of constructing an ORM model. If state data is required, an artifact must be created separately. ### `State` Represents the state of a run. **Methods:** #### `default_name_from_type` ```python default_name_from_type(self) -> Self ``` If a name is not provided, use the type #### `default_scheduled_start_time` ```python default_scheduled_start_time(self) -> Self ``` #### `fresh_copy` ```python fresh_copy(self, **kwargs: Any) -> Self ``` Return a fresh copy of the state with a new ID. 
#### `from_orm_without_result` ```python from_orm_without_result(cls, orm_state: Union['ORMFlowRunState', 'ORMTaskRunState'], with_data: Optional[Any] = None) -> Self ``` During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the `data` field will not be eagerly loaded. In these cases, sqlalchemy will attempt to lazily load the relationship, which will fail when called within a synchronous pydantic method. This method will construct a `State` object from an ORM model without a loaded artifact and attach data passed using the `with_data` argument to the `data` field. #### `is_cancelled` ```python is_cancelled(self) -> bool ``` #### `is_cancelling` ```python is_cancelling(self) -> bool ``` #### `is_completed` ```python is_completed(self) -> bool ``` #### `is_crashed` ```python is_crashed(self) -> bool ``` #### `is_failed` ```python is_failed(self) -> bool ``` #### `is_final` ```python is_final(self) -> bool ``` #### `is_paused` ```python is_paused(self) -> bool ``` #### `is_pending` ```python is_pending(self) -> bool ``` #### `is_running` ```python is_running(self) -> bool ``` #### `is_scheduled` ```python is_scheduled(self) -> bool ``` #### `orm_dict` ```python orm_dict(self, *args: Any, **kwargs: Any) -> dict[str, Any] ``` This method is used as a convenience method for constructing fixtures by first building a `State` schema object and converting it into an ORM-compatible format. Because the `data` field is not writable on ORM states, this method omits the `data` field entirely for the purposes of constructing an ORM model. If state data is required, an artifact must be created separately. #### `result` ```python result(self, raise_on_failure: Literal[True] = ...) -> Any ``` #### `result` ```python result(self, raise_on_failure: Literal[False] = False) -> Union[Any, Exception] ``` #### `result` ```python result(self, raise_on_failure: bool = ...) -> Union[Any, Exception] ``` #### `result` ```python result(self, raise_on_failure: bool = True) -> Union[Any, Exception] ``` #### `to_state_create` ```python to_state_create(self) -> 'StateCreate' ``` # statuses Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-statuses # `prefect.server.schemas.statuses` ## Classes ### `WorkPoolStatus` Enumeration of work pool statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `in_kebab_case` ```python in_kebab_case(self) -> str ``` ### `WorkerStatus` Enumeration of worker statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentStatus` Enumeration of deployment statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `in_kebab_case` ```python in_kebab_case(self) -> str ``` ### `WorkQueueStatus` Enumeration of work queue statuses. **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `in_kebab_case` ```python in_kebab_case(self) -> str ``` # ui Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-schemas-ui # `prefect.server.schemas.ui` Schemas for UI endpoints. ## Classes ### `UITaskRun` A task run with additional details for display in the UI.
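The state schemas above are usually built through the convenience constructors (`Scheduled`, `Completed`, `Failed`, and so on) rather than by instantiating `State` directly. A minimal sketch using only the helpers documented above:

```python
from datetime import datetime, timedelta, timezone

from prefect.server.schemas.states import Completed, Scheduled

# Convenience constructors return fully formed State objects.
done = Completed()
assert done.is_completed() and done.is_final()

# Scheduled states carry their target time in the state details.
later = Scheduled(scheduled_time=datetime.now(timezone.utc) + timedelta(hours=1))
assert later.is_scheduled() and not later.is_final()

# Convert to the creation schema when submitting a state through the API.
state_create = done.to_state_create()
```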
# __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-__init__ # `prefect.server.services` *This module is empty or contains only private/internal implementations.* # base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-base # `prefect.server.services.base` ## Functions ### `run_multiple_services` ```python run_multiple_services(loop_services: List[LoopService]) -> NoReturn ``` Only one signal handler can be active at a time, so this function takes a list of loop services and runs all of them with a global signal handler. ## Classes ### `Service` **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` Stop the service ### `RunInEphemeralServers` A marker class for services that should run even when running an ephemeral server **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` Stop the service ### `RunInWebservers` A marker class for services that should run when running a webserver **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. 
#### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self) -> None ``` Stop the service ### `LoopService` Loop services are relatively lightweight maintenance routines that need to run periodically. This class makes it straightforward to design and integrate them. Users only need to define the `run_once` coroutine to describe the behavior of the service on each loop. **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `start` ```python start(self, loops: int) -> None ``` Run the service `loops` time. **Args:** * `loops`: the number of loops to run before exiting. #### `start` ```python start(self, loops: int | None = None) -> None | NoReturn ``` Run the service `loops` time. Pass loops=None to run forever. **Args:** * `loops`: the number of loops to run before exiting. #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. #### `stop` ```python stop(self) -> None ``` Stop the service # cancellation_cleanup Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-cancellation_cleanup # `prefect.server.services.cancellation_cleanup` The CancellationCleanup service. Responsible for cancelling tasks and subflows that haven't finished. 
## Classes ### `CancellationCleanup` Cancels tasks and subflows of flow runs that have been cancelled **Methods:** #### `clean_up_cancelled_flow_run_task_runs` ```python clean_up_cancelled_flow_run_task_runs(self, db: PrefectDBInterface) -> None ``` #### `clean_up_cancelled_subflow_runs` ```python clean_up_cancelled_subflow_runs(self, db: PrefectDBInterface) -> None ``` #### `run_once` ```python run_once(self, db: PrefectDBInterface) -> None ``` * cancels active tasks belonging to recently cancelled flow runs * cancels any active subflow that belongs to a cancelled flow #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. # foreman Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-foreman # `prefect.server.services.foreman` Foreman is a loop service designed to monitor workers. ## Classes ### `Foreman` Monitors the status of workers and their associated work pools **Methods:** #### `run_once` ```python run_once(self, db: PrefectDBInterface) -> None ``` Iterate over workers currently marked as online. Mark workers as offline if they have an old last\_heartbeat\_time. Marks work pools as not ready if they do not have any online workers and are currently marked as ready. Mark deployments as not ready if they have a last\_polled time that is older than the configured deployment last polled timeout. #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. # late_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-late_runs # `prefect.server.services.late_runs` The MarkLateRuns service. Responsible for putting flow runs in a Late state if they are not started on time. The threshold for a late run can be configured by changing `PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS`. ## Classes ### `MarkLateRuns` Finds flow runs that are later than their scheduled start time. A flow run is defined as "late" if it has not been scheduled within a certain amount of time after its scheduled start time. The exact amount is configurable in Prefect REST API Settings.
**Methods:** #### `run_once` ```python run_once(self, db: PrefectDBInterface) -> None ``` Mark flow runs as late by: * Querying for flow runs in a scheduled state that are Scheduled to start in the past * For any runs past the "late" threshold, setting the flow run state to a new `Late` state #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. # pause_expirations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-pause_expirations # `prefect.server.services.pause_expirations` The FailExpiredPauses service. Responsible for putting Paused flow runs in a Failed state if they are not resumed on time. ## Classes ### `FailExpiredPauses` Fails flow runs that have been paused and never resumed **Methods:** #### `run_once` ```python run_once(self, db: PrefectDBInterface) -> None ``` Mark flow runs as failed by: * Querying for flow runs in a Paused state that have timed out * For any runs past the "expiration" threshold, setting the flow run state to a new `Failed` state #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. # repossessor Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-repossessor # `prefect.server.services.repossessor` ## Classes ### `Repossessor` Handles the reconciliation of expired leases; no tow truck dependency. **Methods:** #### `run_once` ```python run_once(self) -> None ``` #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. 
**Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. # scheduler Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-scheduler # `prefect.server.services.scheduler` The Scheduler service. ## Classes ### `TryAgain` Internal control-flow exception used to retry the Scheduler's main loop ### `Scheduler` Schedules flow runs from deployments. **Methods:** #### `run_once` ```python run_once(self, db: PrefectDBInterface) -> None ``` Schedule flow runs by: * Querying for deployments with active schedules * Generating the next set of flow runs based on each deployment's schedule * Inserting all scheduled flow runs into the database All inserted flow runs are committed to the database at the termination of the loop. #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. **Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. ### `RecentDeploymentsScheduler` Schedules deployments that were updated very recently. This scheduler can run on a tight loop and ensure that runs from newly-created or updated deployments are rapidly scheduled without having to wait for the "main" scheduler to complete its loop. Note that scheduling is idempotent, so it's ok for this scheduler to attempt to schedule the same deployments as the main scheduler. Its purpose is to accelerate scheduling for any deployments that users are interacting with. **Methods:** #### `run_once` ```python run_once(self, db: PrefectDBInterface) -> None ``` Schedule flow runs by: * Querying for deployments with active schedules * Generating the next set of flow runs based on each deployment's schedule * Inserting all scheduled flow runs into the database All inserted flow runs are committed to the database at the termination of the loop.
#### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` # task_run_recorder Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-task_run_recorder # `prefect.server.services.task_run_recorder` ## Functions ### `task_run_from_event` ```python task_run_from_event(event: ReceivedEvent) -> TaskRun ``` ### `record_task_run_event` ```python record_task_run_event(event: ReceivedEvent, depth: int = 0) -> None ``` ### `record_lost_follower_task_run_events` ```python record_lost_follower_task_run_events() -> None ``` ### `periodically_process_followers` ```python periodically_process_followers(periodic_granularity: timedelta) -> NoReturn ``` Periodically process followers that are waiting on a leader event that never arrived ### `consumer` ```python consumer() -> AsyncGenerator[MessageHandler, None] ``` ## Classes ### `TaskRunRecorder` Constructs task runs and states from client-emitted events **Methods:** #### `all_services` ```python all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_services` ```python run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python start(self) -> NoReturn ``` #### `start` ```python start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `started_event` ```python started_event(self) -> asyncio.Event ``` #### `started_event` ```python started_event(self, value: asyncio.Event) -> None ``` #### `stop` ```python stop(self) -> None ``` #### `stop` ```python stop(self) -> None ``` Stop the service # telemetry Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-services-telemetry # `prefect.server.services.telemetry` The Telemetry service. ## Classes ### `Telemetry` Sends anonymous data to Prefect to help us improve It can be toggled off with the PREFECT\_SERVER\_ANALYTICS\_ENABLED setting. **Methods:** #### `enabled` ```python enabled(cls) -> bool ``` #### `environment_variable_name` ```python environment_variable_name(cls) -> str ``` #### `run_once` ```python run_once(self) -> None ``` Sends a heartbeat to the sens-o-matic #### `run_once` ```python run_once(self) -> None ``` Represents one loop of the service. Subclasses must override this method. To actually run the service once, call `LoopService().start(loops=1)` instead of `LoopService().run_once()`, because this method will not invoke setup and teardown methods properly. #### `service_settings` ```python service_settings(cls) -> ServicesBaseSetting ``` #### `start` ```python start(self, loops: None = None) -> NoReturn ``` Run the service indefinitely. #### `stop` ```python stop(self, block: bool = True) -> None ``` Gracefully stops a running LoopService and optionally blocks until the service stops. 
**Args:** * `block`: if True, blocks until the service is finished running. Otherwise it requests a stop and returns but the service may still be running a final loop. # task_queue Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-task_queue # `prefect.server.task_queue` Implements an in-memory task queue for delivering background task runs to TaskWorkers. ## Classes ### `TaskQueue` **Methods:** #### `configure_task_key` ```python configure_task_key(cls, task_key: str, scheduled_size: Optional[int] = None, retry_size: Optional[int] = None) -> None ``` #### `enqueue` ```python enqueue(cls, task_run: schemas.core.TaskRun) -> None ``` #### `for_key` ```python for_key(cls, task_key: str) -> Self ``` #### `get` ```python get(self) -> schemas.core.TaskRun ``` #### `get_nowait` ```python get_nowait(self) -> schemas.core.TaskRun ``` #### `put` ```python put(self, task_run: schemas.core.TaskRun) -> None ``` #### `reset` ```python reset(cls) -> None ``` A unit testing utility to reset the state of the task queues subsystem #### `retry` ```python retry(self, task_run: schemas.core.TaskRun) -> None ``` ### `MultiQueue` A queue that can pull tasks from from any of a number of task queues **Methods:** #### `get` ```python get(self) -> schemas.core.TaskRun ``` Gets the next task\_run from any of the given queues # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-__init__ # `prefect.server.utilities` *This module is empty or contains only private/internal implementations.* # database Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-database # `prefect.server.utilities.database` Utilities for interacting with Prefect REST API database and ORM layer. Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two. ## Functions ### `db_injector` ```python db_injector(func: Union[_DBMethod[T, P, R], _DBFunction[P, R]]) -> Union[_Method[T, P, R], _Function[P, R]] ``` ### `generate_uuid_postgresql` ```python generate_uuid_postgresql(element: GenerateUUID, compiler: SQLCompiler, **kwargs: Any) -> str ``` Generates a random UUID in Postgres; requires the pgcrypto extension. ### `generate_uuid_sqlite` ```python generate_uuid_sqlite(element: GenerateUUID, compiler: SQLCompiler, **kwargs: Any) -> str ``` Generates a random UUID in other databases (SQLite) by concatenating bytes in a way that approximates a UUID hex representation. This is sufficient for our purposes of having a random client-generated ID that is compatible with a UUID spec. 
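The `GenerateUUID` construct that these two compiler hooks target is what lets a single column definition emit a server-side random UUID on both Postgres and SQLite. Below is a sketch of the idea using a hypothetical table rather than anything from Prefect's ORM; the exact DDL output depends on the dialect.

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql, sqlite

from prefect.server.utilities.database import UUID, GenerateUUID, Timestamp

metadata = sa.MetaData()

# Hypothetical table built from the platform-independent types in this module.
example = sa.Table(
    "example",
    metadata,
    sa.Column("id", UUID(), primary_key=True, server_default=GenerateUUID()),
    sa.Column("created", Timestamp(), server_default=sa.func.now()),
)

# The @compiles hooks above decide how GenerateUUID renders per dialect.
print(sa.schema.CreateTable(example).compile(dialect=sqlite.dialect()))
print(sa.schema.CreateTable(example).compile(dialect=postgresql.dialect()))
```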
### `bindparams_from_clause` ```python bindparams_from_clause(query: sa.ClauseElement) -> dict[str, sa.BindParameter[Any]] ``` Retrieve all non-anonymous bind parameters defined in a SQL clause ### `datetime_or_interval_add_postgresql` ```python datetime_or_interval_add_postgresql(element: Union[date_add, interval_add, date_diff], compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `date_diff_seconds_postgresql` ```python date_diff_seconds_postgresql(element: date_diff_seconds, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `current_timestamp_sqlite` ```python current_timestamp_sqlite(element: functions.now, compiler: SQLCompiler, **kwargs: Any) -> str ``` Generates the current timestamp for SQLite ### `date_add_sqlite` ```python date_add_sqlite(element: date_add, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `interval_add_sqlite` ```python interval_add_sqlite(element: interval_add, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `date_diff_sqlite` ```python date_diff_sqlite(element: date_diff, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `date_diff_seconds_sqlite` ```python date_diff_seconds_sqlite(element: date_diff_seconds, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `sqlite_json_operators` ```python sqlite_json_operators(element: sa.BinaryExpression[Any], compiler: SQLCompiler, override_operator: Optional[OperatorType] = None, **kwargs: Any) -> str ``` Intercept the PostgreSQL-only JSON / JSONB operators and translate them to SQLite ### `sqlite_greatest_as_max` ```python sqlite_greatest_as_max(element: greatest[Any], compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `get_dialect` ```python get_dialect(obj: Union[str, Session, sa.Engine]) -> type[sa.Dialect] ``` Get the dialect of a session, engine, or connection url. Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres. ## Classes ### `GenerateUUID` Platform-independent UUID default generator. Note the actual functionality for this class is specified in the `compiles`-decorated functions below ### `Timestamp` TypeDecorator that ensures that timestamps have a timezone. For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC. **Methods:** #### `load_dialect_impl` ```python load_dialect_impl(self, dialect: sa.Dialect) -> TypeEngine[Any] ``` #### `process_bind_param` ```python process_bind_param(self, value: Optional[datetime.datetime], dialect: sa.Dialect) -> Optional[datetime.datetime] ``` #### `process_result_value` ```python process_result_value(self, value: Optional[datetime.datetime], dialect: sa.Dialect) -> Optional[datetime.datetime] ``` ### `UUID` Platform-independent UUID type. Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens. **Methods:** #### `load_dialect_impl` ```python load_dialect_impl(self, dialect: sa.Dialect) -> TypeEngine[Any] ``` #### `process_bind_param` ```python process_bind_param(self, value: Optional[Union[str, uuid.UUID]], dialect: sa.Dialect) -> Optional[str] ``` #### `process_result_value` ```python process_result_value(self, value: Optional[Union[str, uuid.UUID]], dialect: sa.Dialect) -> Optional[uuid.UUID] ``` ### `JSON` JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise. 
The "base" type is postgresql.JSONB to expose useful methods prior to SQL compilation **Methods:** #### `load_dialect_impl` ```python load_dialect_impl(self, dialect: sa.Dialect) -> TypeEngine[Any] ``` #### `process_bind_param` ```python process_bind_param(self, value: Optional[Any], dialect: sa.Dialect) -> Optional[Any] ``` Prepares the given value to be used as a JSON field in a parameter binding ### `Pydantic` A pydantic type that converts inserted parameters to json and converts read values to the pydantic type. **Methods:** #### `process_bind_param` ```python process_bind_param(self, value: Optional[T], dialect: sa.Dialect) -> Optional[str] ``` #### `process_result_value` ```python process_result_value(self, value: Optional[Any], dialect: sa.Dialect) -> Optional[T] ``` ### `date_add` Platform-independent way to add a timestamp and an interval ### `interval_add` Platform-independent way to add two intervals. ### `date_diff` Platform-independent difference of two timestamps. Computes d1 - d2. ### `date_diff_seconds` Platform-independent calculation of the number of seconds between two timestamps or from 'now' ### `greatest` # encryption Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-encryption # `prefect.server.utilities.encryption` Encryption utilities ## Functions ### `encrypt_fernet` ```python encrypt_fernet(session: AsyncSession, data: Mapping[str, Any]) -> str ``` ### `decrypt_fernet` ```python decrypt_fernet(session: AsyncSession, data: str) -> dict[str, Any] ``` # http Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-http # `prefect.server.utilities.http` ## Functions ### `should_redact_header` ```python should_redact_header(key: str) -> bool ``` Indicates whether an HTTP header is sensitive or noisy and should be redacted from events and templates. # leasing Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-leasing # `prefect.server.utilities.leasing` ## Classes ### `ResourceLease` ### `LeaseStorage` **Methods:** #### `create_lease` ```python create_lease(self, resource_ids: list[UUID], ttl: timedelta, metadata: T | None = None) -> ResourceLease[T] ``` Create a new resource lease. **Args:** * `resource_ids`: The IDs of the resources that the lease is associated with. * `ttl`: How long the lease should initially be held for. * `metadata`: Additional metadata associated with the lease. **Returns:** * A ResourceLease object representing the lease. #### `read_expired_lease_ids` ```python read_expired_lease_ids(self, limit: int = 100) -> list[UUID] ``` Read the IDs of expired leases. **Args:** * `limit`: The maximum number of expired leases to read. **Returns:** * A list of UUIDs representing the expired leases. #### `read_lease` ```python read_lease(self, lease_id: UUID) -> ResourceLease[T] | None ``` Read a resource lease. **Args:** * `lease_id`: The ID of the lease to read. **Returns:** * A ResourceLease object representing the lease, or None if not found. #### `renew_lease` ```python renew_lease(self, lease_id: UUID, ttl: timedelta) -> None ``` Renew a resource lease. **Args:** * `lease_id`: The ID of the lease to renew. * `ttl`: The new amount of time the lease should be held for. #### `revoke_lease` ```python revoke_lease(self, lease_id: UUID) -> None ``` Release a resource lease by removing it from list of active leases. **Args:** * `lease_id`: The ID of the lease to release. 
# __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-messaging-__init__ # `prefect.server.utilities.messaging` ## Functions ### `create_cache` ```python create_cache() -> Cache ``` Creates a new cache with the application's default settings. **Returns:** * a new Cache instance ### `create_publisher` ```python create_publisher(topic: str, cache: Optional[Cache] = None, deduplicate_by: Optional[str] = None) -> Publisher ``` Creates a new publisher with the application's default settings. **Args:** * `topic`: the topic to publish to **Returns:** * a new Publisher instance ### `ephemeral_subscription` ```python ephemeral_subscription(topic: str) -> AsyncGenerator[Mapping[str, Any], Any] ``` Creates an ephemeral subscription to the given source, removing it when the context exits. ### `create_consumer` ```python create_consumer(topic: str, **kwargs: Any) -> Consumer ``` Creates a new consumer with the application's default settings. **Args:** * `topic`: the topic to consume from **Returns:** * a new Consumer instance ## Classes ### `Message` A protocol representing a message sent to a message broker. **Methods:** #### `attributes` ```python attributes(self) -> Mapping[str, Any] ``` #### `data` ```python data(self) -> Union[str, bytes] ``` ### `Cache` **Methods:** #### `clear_recently_seen_messages` ```python clear_recently_seen_messages(self) -> None ``` #### `forget_duplicates` ```python forget_duplicates(self, attribute: str, messages: Iterable[Message]) -> None ``` #### `without_duplicates` ```python without_duplicates(self, attribute: str, messages: Iterable[M]) -> list[M] ``` ### `Publisher` **Methods:** #### `publish_data` ```python publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` ### `CapturedMessage` ### `CapturingPublisher` **Methods:** #### `publish_data` ```python publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` ### `StopConsumer` Exception to raise to stop a consumer. ### `Consumer` Abstract base class for consumers that receive messages from a message broker and call a handler function for each message received. **Methods:** #### `run` ```python run(self, handler: MessageHandler) -> None ``` Runs the consumer (indefinitely) ### `CacheModule` ### `BrokerModule` # memory Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-messaging-memory # `prefect.server.utilities.messaging.memory` ## Functions ### `log_metrics_periodically` ```python log_metrics_periodically(interval: float = 2.0) -> None ``` ### `update_metric` ```python update_metric(topic: str, key: str, amount: int = 1) -> None ``` ### `break_topic` ```python break_topic() ``` ### `ephemeral_subscription` ```python ephemeral_subscription(topic: str) -> AsyncGenerator[Mapping[str, Any], None] ``` ## Classes ### `MemoryMessage` ### `Subscription` A subscription to a topic. Messages are delivered to the subscription's queue and retried up to a maximum number of times. If a message cannot be delivered after the maximum number of retries it is moved to the dead letter queue. The dead letter queue is a directory of JSON files containing the serialized message. Messages remain in the dead letter queue until they are removed manually. **Methods:** #### `deliver` ```python deliver(self, message: MemoryMessage) -> None ``` Deliver a message to the subscription's queue. **Args:** * `message`: The message to deliver.
#### `get` ```python get(self) -> MemoryMessage ``` Get a message from the subscription's queue. #### `retry` ```python retry(self, message: MemoryMessage) -> None ``` Place a message back on the retry queue. If the message has retried more than the maximum number of times it is moved to the dead letter queue. **Args:** * `message`: The message to retry. #### `send_to_dead_letter_queue` ```python send_to_dead_letter_queue(self, message: MemoryMessage) -> None ``` Send a message to the dead letter queue. The dead letter queue is a directory of JSON files containing the serialized messages. **Args:** * `message`: The message to send to the dead letter queue. ### `Topic` **Methods:** #### `by_name` ```python by_name(cls, name: str) -> 'Topic' ``` #### `clear` ```python clear(self) -> None ``` #### `clear_all` ```python clear_all(cls) -> None ``` #### `publish` ```python publish(self, message: MemoryMessage) -> None ``` #### `subscribe` ```python subscribe(self, **subscription_kwargs: Any) -> Subscription ``` #### `unsubscribe` ```python unsubscribe(self, subscription: Subscription) -> None ``` ### `Cache` **Methods:** #### `clear_recently_seen_messages` ```python clear_recently_seen_messages(self) -> None ``` #### `forget_duplicates` ```python forget_duplicates(self, attribute: str, messages: Iterable[M]) -> None ``` #### `without_duplicates` ```python without_duplicates(self, attribute: str, messages: Iterable[M]) -> list[M] ``` ### `Publisher` **Methods:** #### `publish_data` ```python publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` ### `Consumer` **Methods:** #### `run` ```python run(self, handler: MessageHandler) -> None ``` # names Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-names # `prefect.server.utilities.names` This module is deprecated. Use `prefect.utilities.names` instead. # postgres_listener Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-postgres_listener # `prefect.server.utilities.postgres_listener` ## Functions ### `get_pg_notify_connection` ```python get_pg_notify_connection() -> Connection | None ``` Establishes and returns a raw asyncpg connection for LISTEN/NOTIFY. Returns None if not a PostgreSQL connection URL. ### `pg_listen` ```python pg_listen(connection: Connection, channel_name: str, heartbeat_interval: float = 5.0) -> AsyncGenerator[str, None] ``` Listens to a specific Postgres channel and yields payloads. Manages adding and removing the listener on the given connection. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-schemas-__init__ # `prefect.server.utilities.schemas` *This module is empty or contains only private/internal implementations.* # bases Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-schemas-bases # `prefect.server.utilities.schemas.bases` ## Functions ### `get_class_fields_only` ```python get_class_fields_only(model: type[BaseModel]) -> set[str] ``` Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included. ## Classes ### `PrefectDescriptorBase` A base class for descriptor objects used with PrefectBaseModel Pydantic needs to be told about any kind of non-standard descriptor objects used on a model, in order for these not to be treated as a field type instead. This base class is registered as an ignored type with PrefectBaseModel and any classes that inherit from it will also be ignored. 
This allows such descriptors to be used as properties, methods or other bound descriptor use cases. ### `PrefectBaseModel` A base pydantic.BaseModel for all Prefect schemas and pydantic models. As the basis for most Prefect schemas, this base model ignores extra fields that are passed to it at instantiation. Because adding new fields to API payloads is not considered a breaking change, this ensures that any Prefect client loading data from a server running a possibly-newer version of Prefect will be able to process those new fields gracefully. **Methods:** #### `model_dump_for_orm` ```python model_dump_for_orm(self) -> dict[str, Any] ``` Prefect extension to `BaseModel.model_dump`. Generate a Python dictionary representation of the model suitable for passing to SQLAlchemy model constructors, `INSERT` statements, etc. The critical difference here is that this method will return any nested BaseModel objects as `BaseModel` instances, rather than serialized Python dictionaries. Accepts the standard Pydantic `model_dump` arguments, except for `mode` (which is always "python"), `round_trip`, and `warnings`. Usage docs: [https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel\_dump](https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump) **Args:** * `include`: A list of fields to include in the output. * `exclude`: A list of fields to exclude from the output. * `by_alias`: Whether to use the field's alias in the dictionary key if defined. * `exclude_unset`: Whether to exclude fields that have not been explicitly set. * `exclude_defaults`: Whether to exclude fields that are set to their default value. * `exclude_none`: Whether to exclude fields that have a value of `None`. **Returns:** * A dictionary representation of the model, suitable for passing * to SQLAlchemy model constructors, INSERT statements, etc. #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `IDBaseModel` A PrefectBaseModel with an auto-generated UUID ID value. The ID is reset on copy() and not included in equality comparisons. **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TimeSeriesBaseModel` A PrefectBaseModel with a time-oriented UUIDv7 ID value. Used for models that operate like timeseries, such as runs, states, and logs. ### `ORMBaseModel` A PrefectBaseModel with an auto-generated UUID ID value and created / updated timestamps, intended for compatibility with our standard ORM models. The ID, created, and updated fields are reset on copy() and not included in equality comparisons. ### `ActionBaseModel` **Methods:** #### `model_validate_list` ```python model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # serializers Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-schemas-serializers # `prefect.server.utilities.schemas.serializers` ## Functions ### `orjson_dumps` ```python orjson_dumps(v: Any) -> str ``` Utility for dumping a value to JSON using orjson. 
orjson.dumps returns bytes, to match standard json.dumps we need to decode. ### `orjson_dumps_extra_compatible` ```python orjson_dumps_extra_compatible(v: Any) -> str ``` Utility for dumping a value to JSON using orjson, but allows for 1. non-string keys: this is helpful for situations like pandas dataframes, which can result in non-string keys 2. numpy types: for serializing numpy arrays orjson.dumps returns bytes, to match standard json.dumps we need to decode. # server Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-server # `prefect.server.utilities.server` Utilities for the Prefect REST API server. ## Functions ### `method_paths_from_routes` ```python method_paths_from_routes(routes: Sequence[BaseRoute]) -> set[str] ``` Generate a set of strings describing the given routes in the format `<verb> <path>`. For example, "GET /logs/" ## Classes ### `PrefectAPIRoute` A FastAPIRoute class which attaches an async stack to requests that exits before a response is returned. Requests already have `request.scope['fastapi_astack']` which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at `request.state.response_scoped_stack`. **Methods:** #### `get_route_handler` ```python get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]] ``` ### `PrefectRouter` A base class for Prefect REST API routers. **Methods:** #### `add_api_route` ```python add_api_route(self, path: str, endpoint: Callable[..., Any], **kwargs: Any) -> None ``` Add an API route. For routes that return content and have not specified a `response_model`, use return type annotation to infer the response model. For routes that return No-Content status codes, explicitly set a `response_class` to ensure nothing is returned in the response body. # subscriptions Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-subscriptions # `prefect.server.utilities.subscriptions` ## Functions ### `accept_prefect_socket` ```python accept_prefect_socket(websocket: WebSocket) -> Optional[WebSocket] ``` ### `still_connected` ```python still_connected(websocket: WebSocket) -> bool ``` Checks that a client websocket still seems to be connected during a period where the server is expected to be sending messages.
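As a sketch of the `PrefectRouter.add_api_route` behavior described above, the route below declares no `response_model`, and the router is documented to infer it from the return type annotation. The route path, the `LogSummary` model, and the assumption that `PrefectRouter` accepts the standard `fastapi.APIRouter` constructor arguments are illustrative only.

```python
from fastapi import FastAPI
from pydantic import BaseModel

from prefect.server.utilities.server import PrefectRouter


class LogSummary(BaseModel):
    count: int


# Assumption: PrefectRouter accepts the usual APIRouter constructor arguments
router = PrefectRouter(prefix="/logs", tags=["Logs"])


@router.get("/summary")
async def read_log_summary() -> LogSummary:
    # No response_model is passed, so add_api_route infers it from the
    # LogSummary return annotation, per the docstring above.
    return LogSummary(count=0)


app = FastAPI()
app.include_router(router)
```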
# text_search_parser Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-text_search_parser # `prefect.server.utilities.text_search_parser` Text search query parser. Parses text search queries according to the following syntax: * Space-separated terms → OR logic (include) * Prefix with `-` or `!` → Exclude term * Prefix with `+` → Required term (AND logic, future) * Quote phrases → Match exact phrase * Backslash escapes → Allow quotes within phrases (`\"`) * Case-insensitive, substring matching * 200 character limit ## Functions ### `parse_text_search_query` ```python parse_text_search_query(query: str) -> TextSearchQuery ``` Parse a text search query string into structured components **Args:** * `query`: The query string to parse **Returns:** * TextSearchQuery with parsed include/exclude/required terms ## Classes ### `TextSearchQuery` Parsed text search query structure # user_templates Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-server-utilities-user_templates # `prefect.server.utilities.user_templates` Utilities to support safely rendering user-supplied templates ## Functions ### `register_user_template_filters` ```python register_user_template_filters(filters: dict[str, Any]) -> None ``` Register additional filters that will be available to user templates ### `validate_user_template` ```python validate_user_template(template: str) -> None ``` ### `matching_types_in_templates` ```python matching_types_in_templates(templates: list[str], types: set[str]) -> list[str] ``` ### `maybe_template` ```python maybe_template(possible: str) -> bool ``` ### `render_user_template` ```python render_user_template(template: str, context: dict[str, Any]) -> str ``` ### `render_user_template_sync` ```python render_user_template_sync(template: str, context: dict[str, Any]) -> str ``` ## Classes ### `UserTemplateEnvironment` ### `TemplateSecurityError` Raised when extended validation of a template fails. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-__init__ # `prefect.settings` Prefect settings are defined using `BaseSettings` from `pydantic_settings`. `BaseSettings` can load setting values from system environment variables and each additionally specified `env_file`. The recommended user-facing way to access Prefect settings at this time is to import specific setting objects directly, like `from prefect.settings import PREFECT_API_URL; print(PREFECT_API_URL.value())`. Importantly, we replace the `callback` mechanism for updating settings with an "after" model\_validator that updates dependent settings. After [https://github.com/pydantic/pydantic/issues/9789](https://github.com/pydantic/pydantic/issues/9789) is resolved, we will be able to define context-aware defaults for settings, at which point we will not need to use the "after" model\_validator. # base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-base # `prefect.settings.base` ## Functions ### `build_settings_config` ```python build_settings_config(path: tuple[str, ...]
= tuple(), frozen: bool = False) -> PrefectSettingsConfigDict ``` ## Classes ### `PrefectBaseSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `PrefectSettingsConfigDict` Configuration for the behavior of Prefect settings models. # constants Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-constants # `prefect.settings.constants` *This module is empty or contains only private/internal implementations.* # context Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-context # `prefect.settings.context` ## Functions ### `get_current_settings` ```python get_current_settings() -> Settings ``` Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment. ### `temporary_settings` ```python temporary_settings(updates: Optional[Mapping['Setting', Any]] = None, set_defaults: Optional[Mapping['Setting', Any]] = None, restore_defaults: Optional[Iterable['Setting']] = None) -> Generator[Settings, None, None] ``` Temporarily override the current settings by entering a new profile. See `Settings.copy_with_update` for details on different argument behavior. Examples: ```python from prefect.settings import PREFECT_API_URL with temporary_settings(updates={PREFECT_API_URL: "foo"}): assert PREFECT_API_URL.value() == "foo" with temporary_settings(set_defaults={PREFECT_API_URL: "bar"}): assert PREFECT_API_URL.value() == "foo" with temporary_settings(restore_defaults={PREFECT_API_URL}): assert PREFECT_API_URL.value() is None with temporary_settings(set_defaults={PREFECT_API_URL: "bar"}): assert PREFECT_API_URL.value() == "bar" assert PREFECT_API_URL.value() is None ``` # legacy Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-legacy # `prefect.settings.legacy` ## Classes ### `Setting` Mimics the old Setting object for compatibility with existing code.
**Methods:** #### `default` ```python default(self) -> Any ``` #### `is_secret` ```python is_secret(self) -> bool ``` #### `name` ```python name(self) -> str ``` #### `value` ```python value(self: Self) -> Any ``` #### `value_from` ```python value_from(self: Self, settings: Settings) -> Any ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-__init__ # `prefect.settings.models` *This module is empty or contains only private/internal implementations.* # api Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-api # `prefect.settings.models.api` ## Classes ### `APISettings` Settings for interacting with the Prefect API **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # cli Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-cli # `prefect.settings.models.cli` ## Classes ### `CLISettings` Settings for controlling CLI behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
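A short sketch of how a single settings group such as the `CLISettings` shown above can be rendered back into environment variables with `to_environment_variables`, assuming the model can be constructed from its defaults and the usual sources (environment variables, `.env` files, profile values):

```python
from prefect.settings.models.cli import CLISettings

# Build the CLI settings group from the usual sources and render it back
# as PREFECT_* environment variable pairs.
cli_settings = CLISettings()
for key, value in cli_settings.to_environment_variables(exclude_unset=False).items():
    print(f"{key}={value}")
```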
# client Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-client # `prefect.settings.models.client` ## Classes ### `ClientMetricsSettings` Settings for controlling metrics reporting from the client **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `ClientSettings` Settings for controlling API client behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # cloud Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-cloud # `prefect.settings.models.cloud` ## Functions ### `default_cloud_ui_url` ```python default_cloud_ui_url(settings: 'CloudSettings') -> Optional[str] ``` ## Classes ### `CloudSettings` Settings for interacting with Prefect Cloud **Methods:** #### `post_hoc_settings` ```python post_hoc_settings(self) -> Self ``` refactor on resolution of [https://github.com/pydantic/pydantic/issues/9789](https://github.com/pydantic/pydantic/issues/9789) we should not be modifying **pydantic\_fields\_set** directly, but until we can define dependencies between defaults in a first-class way, we need clean up post-hoc default assignments to keep set/unset fields correct after instantiation. 
#### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # deployments Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-deployments # `prefect.settings.models.deployments` ## Classes ### `DeploymentsSettings` Settings for configuring deployments defaults **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # experiments Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-experiments # `prefect.settings.models.experiments` ## Classes ### `ExperimentsSettings` Settings for configuring experimental features **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # flows Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-flows # `prefect.settings.models.flows` ## Classes ### `FlowsSettings` Settings for controlling flow behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # internal Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-internal # `prefect.settings.models.internal` ## Classes ### `InternalSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # logging Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-logging # `prefect.settings.models.logging` ## Functions ### `max_log_size_smaller_than_batch_size` ```python max_log_size_smaller_than_batch_size(values: dict[str, Any]) -> dict[str, Any] ``` Validator for settings asserting the batch size and max log size are compatible ## Classes ### `LoggingToAPISettings` Settings for controlling logging to the API **Methods:** #### `emit_warnings` ```python emit_warnings(self) -> Self ``` Emits warnings for misconfiguration of logging settings.
#### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `LoggingSettings` Settings for controlling logging behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # results Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-results # `prefect.settings.models.results` ## Classes ### `ResultsSettings` Settings for controlling result storage behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
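The legacy `Setting` accessors and the nested settings models in these sections expose the same values. A hedged sketch, assuming `PREFECT_RESULTS_PERSIST_BY_DEFAULT` is exported from `prefect.settings` and that the corresponding field lives at `results.persist_by_default` on the root settings object:

```python
from prefect.settings import (
    PREFECT_RESULTS_PERSIST_BY_DEFAULT,  # assumed to be exported as a Setting object
    get_current_settings,
    temporary_settings,
)

with temporary_settings(updates={PREFECT_RESULTS_PERSIST_BY_DEFAULT: True}):
    # The legacy accessor and the nested model (attribute path assumed) agree
    assert PREFECT_RESULTS_PERSIST_BY_DEFAULT.value() is True
    assert get_current_settings().results.persist_by_default is True
```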
# root Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-root # `prefect.settings.models.root` ## Functions ### `canonical_environment_prefix` ```python canonical_environment_prefix(settings: 'Settings') -> str ``` ## Classes ### `Settings` Settings for Prefect using Pydantic settings. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings) **Methods:** #### `copy_with_update` ```python copy_with_update(self: Self, updates: Optional[Mapping['Setting', Any]] = None, set_defaults: Optional[Mapping['Setting', Any]] = None, restore_defaults: Optional[Iterable['Setting']] = None) -> Self ``` Create a new Settings object with validation. **Args:** * `updates`: A mapping of settings to new values. Existing values for the given settings will be overridden. * `set_defaults`: A mapping of settings to new default values. Existing values for the given settings will only be overridden if they were not set. * `restore_defaults`: An iterable of settings to restore to their default values. **Returns:** * A new Settings object. #### `emit_warnings` ```python emit_warnings(self) -> Self ``` More post-hoc validation of settings, including warnings for misconfigurations. #### `hash_key` ```python hash_key(self) -> str ``` Return a hash key for the settings object. This is needed since some settings may be unhashable, like lists. #### `post_hoc_settings` ```python post_hoc_settings(self) -> Self ``` Handle remaining complex default assignments that aren't yet migrated to dependent settings. With Pydantic 2.10's dependent settings feature, we've migrated simple path-based defaults to use default\_factory. The remaining items here require access to the full Settings instance or have complex interdependencies that will be migrated in future PRs. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
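A brief sketch of `copy_with_update`, using the `PREFECT_API_URL` setting object and the `Setting.value_from` accessor documented earlier; the URL value is illustrative:

```python
from prefect.settings import PREFECT_API_URL, get_current_settings

settings = get_current_settings()

# copy_with_update returns a new, validated Settings object; the original
# settings instance is left unchanged.
updated = settings.copy_with_update(
    updates={PREFECT_API_URL: "http://localhost:4200/api"}
)
assert PREFECT_API_URL.value_from(updated) == "http://localhost:4200/api"
```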
# runner Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-runner # `prefect.settings.models.runner` ## Classes ### `RunnerServerSettings` Settings for controlling runner server behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `RunnerSettings` Settings for controlling runner behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-__init__ # `prefect.settings.models.server` *This module is empty or contains only private/internal implementations.* # api Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-api # `prefect.settings.models.server.api` ## Classes ### `ServerAPISettings` Settings for controlling API server behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # concurrency Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-concurrency # `prefect.settings.models.server.concurrency` ## Classes ### `ServerConcurrencySettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # database Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-database # `prefect.settings.models.server.database` ## Functions ### `warn_on_database_password_value_without_usage` ```python warn_on_database_password_value_without_usage(settings: ServerDatabaseSettings) -> None ``` Validator for settings warning if the database password is set but not used. ## Classes ### `SQLAlchemyTLSSettings` Settings for controlling SQLAlchemy mTLS context when using a PostgreSQL database. **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `SQLAlchemyConnectArgsSettings` Settings for controlling SQLAlchemy connection behavior; note that these settings only take effect when using a PostgreSQL database. 
**Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `SQLAlchemySettings` Settings for controlling SQLAlchemy behavior; note that these settings only take effect when using a PostgreSQL database. **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `ServerDatabaseSettings` Settings for controlling server database behavior **Methods:** #### `emit_warnings` ```python emit_warnings(self) -> Self ``` More post-hoc validation of settings, including warnings for misconfigurations. #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `set_deprecated_sqlalchemy_settings_on_child_model_and_warn` ```python set_deprecated_sqlalchemy_settings_on_child_model_and_warn(cls, values: dict[str, Any]) -> dict[str, Any] ``` Set deprecated settings on the child model. #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # deployments Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-deployments # `prefect.settings.models.server.deployments` ## Classes ### `ServerDeploymentsSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # ephemeral Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-ephemeral # `prefect.settings.models.server.ephemeral` ## Classes ### `ServerEphemeralSettings` Settings for controlling ephemeral server behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
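To see how the nested server database settings above surface as environment variables, the sketch below renders the current settings with `to_environment_variables` and filters on a prefix; the `PREFECT_SERVER_DATABASE` prefix is an assumption about how these models are namespaced:

```python
from prefect.settings import get_current_settings

settings = get_current_settings()

# Render the full settings tree as environment variables and keep only the
# server database group; the prefix below is an assumption about namespacing.
env = settings.to_environment_variables(include_secrets=False)
for key, value in sorted(env.items()):
    if key.startswith("PREFECT_SERVER_DATABASE"):
        print(f"{key}={value}")
```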
# events Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-events # `prefect.settings.models.server.events` ## Classes ### `ServerEventsSettings` Settings for controlling behavior of the events subsystem **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # flow_run_graph Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-flow_run_graph # `prefect.settings.models.server.flow_run_graph` ## Classes ### `ServerFlowRunGraphSettings` Settings for controlling behavior of the flow run graph **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # logs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-logs # `prefect.settings.models.server.logs` ## Classes ### `ServerLogsSettings` Settings for controlling behavior of the logs subsystem **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # root Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-root # `prefect.settings.models.server.root` ## Classes ### `ServerSettings` Settings for controlling server behavior **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # services Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-services # `prefect.settings.models.server.services` ## Classes ### `ServicesBaseSetting` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
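`ServerSettings` is itself a nested settings model, so the same helper can export just the server-related portion of the configuration. A hedged sketch; the `.server` attribute name is an assumption based on the class layout in this reference:

```python
# Sketch: export only the server-related settings as environment variables.
# Assumes the top-level settings object exposes ServerSettings as `.server`.
from prefect.settings import get_current_settings

server_settings = get_current_settings().server
for key, value in server_settings.to_environment_variables(exclude_unset=True).items():
    print(f"{key}={value}")
```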
### `ServerServicesCancellationCleanupSettings` Settings for controlling the cancellation cleanup service ### `ServerServicesEventPersisterSettings` Settings for controlling the event persister service ### `ServerServicesEventLoggerSettings` Settings for controlling the event logger service ### `ServerServicesForemanSettings` Settings for controlling the foreman service ### `ServerServicesLateRunsSettings` Settings for controlling the late runs service ### `ServerServicesSchedulerSettings` Settings for controlling the scheduler service ### `ServerServicesPauseExpirationsSettings` Settings for controlling the pause expiration service ### `ServerServicesRepossessorSettings` Settings for controlling the repossessor service ### `ServerServicesTaskRunRecorderSettings` Settings for controlling the task run recorder service ### `ServerServicesTriggersSettings` Settings for controlling the triggers service ### `ServerServicesSettings` Settings for controlling server services **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # tasks Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-tasks # `prefect.settings.models.server.tasks` ## Classes ### `ServerTasksSchedulingSettings` Settings for controlling server-side behavior related to task scheduling **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
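The service settings above are grouped under `ServerServicesSettings` and reached through nested attributes on the top-level settings object. A hedged sketch; the attribute path and the `enabled` flag are assumptions inferred from the class names documented above:

```python
# Sketch: inspect nested server service settings via attribute access.
# The `.server.services.scheduler` path and the `enabled` field are
# assumptions based on the class names above, not verified field names.
from prefect.settings import get_current_settings

services = get_current_settings().server.services
print(services.scheduler.enabled)   # is the scheduler service enabled?
print(services.late_runs.enabled)   # is the late-runs service enabled?
```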
### `ServerTasksSettings` Settings for controlling server-side behavior related to tasks **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # ui Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-server-ui # `prefect.settings.models.server.ui` ## Classes ### `ServerUISettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # tasks Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-tasks # `prefect.settings.models.tasks` ## Classes ### `TasksRunnerSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `TasksSchedulingSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `TasksSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # testing Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-testing # `prefect.settings.models.testing` ## Classes ### `TestingSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # worker Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-models-worker # `prefect.settings.models.worker` ## Classes ### `WorkerWebserverSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `WorkerSettings` **Methods:** #### `ser_model` ```python ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `settings_customise_sources` ```python settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # profiles Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-profiles # `prefect.settings.profiles` ## Functions ### `load_profiles` ```python load_profiles(include_defaults: bool = True) -> ProfilesCollection ``` Load profiles from the current profile path. Optionally include profiles from the default profile path. ### `load_current_profile` ```python load_current_profile() -> Profile ``` Load the current profile from the default and current profile paths. This will *not* include settings from the current settings context. Only settings that have been persisted to the profiles file will be saved. ### `save_profiles` ```python save_profiles(profiles: ProfilesCollection) -> None ``` Writes all non-default profiles to the current profiles path. 
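The profile helpers above compose naturally with the `Profile` and `ProfilesCollection` classes documented below. A minimal sketch using only functions and methods from this module; printing `to_dict` is purely illustrative:

```python
# Sketch: list the profiles on disk and inspect the active one.
from prefect.settings.profiles import load_current_profile, load_profiles

profiles = load_profiles()                  # ProfilesCollection, defaults included
print(profiles.to_dict())                   # names and settings currently configured

current = load_current_profile()            # the active Profile
print(current.name)
print(current.to_environment_variables())   # profile settings as PREFECT_* variables
```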
### `load_profile` ```python load_profile(name: str) -> Profile ``` Load a single profile by name. ### `update_current_profile` ```python update_current_profile(settings: dict[str | Setting, Any]) -> Profile ``` Update the persisted data for the profile currently in use. If the profile does not exist in the profiles file, it will be created. Given settings will be merged with the existing settings as described in `ProfilesCollection.update_profile`. **Returns:** * The new profile. ## Classes ### `Profile` A user profile containing settings. **Methods:** #### `to_environment_variables` ```python to_environment_variables(self) -> dict[str, str] ``` Convert the profile settings to a dictionary of environment variables. #### `validate_settings` ```python validate_settings(self) -> None ``` Validate all settings in this profile by creating a partial Settings object with the nested structure properly constructed using accessor paths. ### `ProfilesCollection` A utility class for working with a collection of profiles. Profiles in the collection must have unique names. The collection may store the name of the active profile. **Methods:** #### `active_profile` ```python active_profile(self) -> Profile | None ``` Retrieve the active profile in this collection. #### `add_profile` ```python add_profile(self, profile: Profile) -> None ``` Add a profile to the collection. If the profile name already exists, an exception will be raised. #### `items` ```python items(self) -> list[tuple[str, Profile]] ``` #### `names` ```python names(self) -> set[str] ``` Return a set of profile names in this collection. #### `remove_profile` ```python remove_profile(self, name: str) -> None ``` Remove a profile from the collection. #### `set_active` ```python set_active(self, name: str | None, check: bool = True) -> None ``` Set the active profile name in the collection. A null value may be passed to indicate that this collection does not determine the active profile. #### `to_dict` ```python to_dict(self) -> dict[str, Any] ``` Convert to a dictionary suitable for writing to disk. #### `update_profile` ```python update_profile(self, name: str, settings: dict[Setting, Any], source: Path | None = None) -> Profile ``` Add a profile to the collection or update the existing one if the name is already present in this collection. If updating an existing profile, the settings will be merged. Settings can be dropped from the existing profile by setting them to `None` in the new profile. Returns the new profile object. #### `without_profile_source` ```python without_profile_source(self, path: Path | None) -> 'ProfilesCollection' ``` Remove profiles that were loaded from a given path. Returns a new collection. # sources Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-settings-sources # `prefect.settings.sources` ## Classes ### `EnvFilterSettingsSource` Custom pydantic settings source to filter out specific environment variables. All validation aliases are loaded from environment variables by default. We use `AliasPath` to maintain the ability to set fields via model initialization, but those shouldn't be loaded from environment variables. This loader allows us to specify which environment variables should be ignored. ### `FilteredDotEnvSettingsSource` ### `ProfileSettingsTomlLoader` Custom pydantic settings source to load profile settings from a toml file.
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) **Methods:** #### `get_field_value` ```python get_field_value(self, field: FieldInfo, field_name: str) -> Tuple[Any, str, bool] ``` Concrete implementation to get the field value from the profile settings ### `TomlConfigSettingsSourceBase` **Methods:** #### `get_field_value` ```python get_field_value(self, field: FieldInfo, field_name: str) -> tuple[Any, str, bool] ``` Concrete implementation to get the field value from toml data ### `PrefectTomlConfigSettingsSource` Custom pydantic settings source to load settings from a prefect.toml file **Methods:** #### `get_field_value` ```python get_field_value(self, field: FieldInfo, field_name: str) -> tuple[Any, str, bool] ``` Concrete implementation to get the field value from toml data ### `PyprojectTomlConfigSettingsSource` Custom pydantic settings source to load settings from a pyproject.toml file **Methods:** #### `get_field_value` ```python get_field_value(self, field: FieldInfo, field_name: str) -> tuple[Any, str, bool] ``` Concrete implementation to get the field value from toml data # states Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-states # `prefect.states` ## Functions ### `to_state_create` ```python to_state_create(state: State) -> 'StateCreate' ``` Convert the state to a `StateCreate` type which can be used to set the state of a run in the API. This method will drop this state's `data` if it is not a result type. Only results should be sent to the API. Other data is only available locally. ### `get_state_result` ```python get_state_result(state: 'State[R]', raise_on_failure: bool = True, retry_result_failure: bool = True) -> 'R' ``` Get the result from a state. See `State.result()` ### `format_exception` ```python format_exception(exc: BaseException, tb: TracebackType = None) -> str ``` ### `exception_to_crashed_state` ```python exception_to_crashed_state(exc: BaseException, result_store: Optional['ResultStore'] = None) -> State ``` Takes an exception that occurs *outside* of user code and converts it to a 'Crash' exception with a 'Crashed' state. ### `exception_to_failed_state` ```python exception_to_failed_state(exc: Optional[BaseException] = None, result_store: Optional['ResultStore'] = None, write_result: bool = False, **kwargs: Any) -> State[BaseException] ``` Convenience function for creating `Failed` states from exceptions ### `return_value_to_state` ```python return_value_to_state(retval: 'R', result_store: 'ResultStore', key: Optional[str] = None, expiration: Optional[datetime.datetime] = None, write_result: bool = False) -> 'State[R]' ``` Given a return value from a user's function, create a `State` the run should be placed in. * If data is returned, we create a 'COMPLETED' state with the data * If a single, manually created state is returned, we use that state as given (manual creation is determined by the lack of ids) * If an upstream state or iterable of upstream states is returned, we apply the aggregate rule The aggregate rule says that given multiple states we will determine the final state such that: * If any states are not COMPLETED the final state is FAILED * If all of the states are COMPLETED the final state is COMPLETED * The states will be placed in the final state `data` attribute Callers should resolve all futures into states before passing return values to this function. 
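For tests and state-change hooks it is often handy to build states by hand with the convenience constructors documented below (`Completed`, `Failed`, and friends). A minimal sketch:

```python
# Sketch: construct State objects directly with the convenience constructors
# documented below and inspect them. Keyword arguments such as `data` and
# `message` are forwarded to the underlying State model.
from prefect.states import Completed, Failed

done = Completed(data=42, message="all good")
print(done.type, done.is_completed())    # StateType.COMPLETED True

oops = Failed(message="something went wrong")
print(oops.type, oops.is_failed())       # StateType.FAILED True
```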
### `get_state_exception` ```python get_state_exception(state: State) -> BaseException ``` If not given a FAILED or CRASHED state, this raises a `ValueError`. If the state result is a state, its exception will be returned. If the state result is an iterable of states, the exception of the first failure will be returned. If the state result is a string, a wrapper exception will be returned with the string as the message. If the state result is null, a wrapper exception will be returned with the state message attached. If the state result is not of a known type, a `TypeError` will be returned. When a wrapper exception is returned, the type will be: * `FailedRun` if the state type is FAILED. * `CrashedRun` if the state type is CRASHED. * `CancelledRun` if the state type is CANCELLED. ### `raise_state_exception` ```python raise_state_exception(state: State) -> None ``` Given a FAILED or CRASHED state, raise the contained exception. ### `is_state_iterable` ```python is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]] ``` Check if the given object is an iterable of state types. Supported iterables are: * set * list * tuple Other iterables will return `False` even if they contain states. ### `Scheduled` ```python Scheduled(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Scheduled` states. **Returns:** * a Scheduled state ### `Completed` ```python Completed(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Completed` states. **Returns:** * a Completed state ### `Running` ```python Running(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Running` states. **Returns:** * a Running state ### `Failed` ```python Failed(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Failed` states. **Returns:** * a Failed state ### `Crashed` ```python Crashed(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Crashed` states. **Returns:** * a Crashed state ### `Cancelling` ```python Cancelling(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Cancelling` states. **Returns:** * a Cancelling state ### `Cancelled` ```python Cancelled(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Cancelled` states. **Returns:** * a Cancelled state ### `Pending` ```python Pending(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Pending` states. **Returns:** * a Pending state ### `Paused` ```python Paused(cls: Type['State[R]'] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[datetime.datetime] = None, reschedule: bool = False, pause_key: Optional[str] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Paused` states. **Returns:** * a Paused state ### `Suspended` ```python Suspended(cls: Type['State[R]'] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[datetime.datetime] = None, pause_key: Optional[str] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Suspended` states.
**Returns:** * a Suspended state ### `AwaitingRetry` ```python AwaitingRetry(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `AwaitingRetry` states. **Returns:** * an AwaitingRetry state ### `AwaitingConcurrencySlot` ```python AwaitingConcurrencySlot(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `AwaitingConcurrencySlot` states. **Returns:** * an AwaitingConcurrencySlot state ### `Retrying` ```python Retrying(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Retrying` states. **Returns:** * a Retrying state ### `Late` ```python Late(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Late` states. **Returns:** * a Late state ## Classes ### `StateGroup` **Methods:** #### `all_completed` ```python all_completed(self) -> bool ``` #### `all_final` ```python all_final(self) -> bool ``` #### `any_cancelled` ```python any_cancelled(self) -> bool ``` #### `any_failed` ```python any_failed(self) -> bool ``` #### `any_paused` ```python any_paused(self) -> bool ``` #### `counts_message` ```python counts_message(self) -> str ``` #### `fail_count` ```python fail_count(self) -> int ``` # task_engine Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-task_engine # `prefect.task_engine` ## Functions ### `run_task_sync` ```python run_task_sync(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_task_async` ```python run_task_async(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_generator_task_sync` ```python run_generator_task_sync(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Generator[R, None, None] ``` ### `run_generator_task_async` ```python run_generator_task_async(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> AsyncGenerator[R, None] ``` ### `run_task` ```python run_task(task: 'Task[P, Union[R, Coroutine[Any, Any, R]]]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = 
None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Union[R, State, None, Coroutine[Any, Any, Union[R, State, None]]] ``` Runs the provided task. **Args:** * `task`: The task to run * `task_run_id`: The ID of the task run; if not provided, a new task run will be created * `task_run`: The task run object; if not provided, a new task run will be created * `parameters`: The parameters to pass to the task * `wait_for`: A list of futures to wait for before running the task * `return_type`: The return type to return; either "state" or "result" * `dependencies`: A dictionary of task run inputs to use for dependency tracking * `context`: A dictionary containing the context to use for the task run; only required if the task is running in a remote environment **Returns:** * The result of the task run ## Classes ### `TaskRunTimeoutError` Raised when a task run exceeds its timeout. ### `BaseTaskRunEngine` **Methods:** #### `compute_transaction_key` ```python compute_transaction_key(self) -> Optional[str] ``` #### `handle_rollback` ```python handle_rollback(self, txn: Transaction) -> None ``` #### `is_cancelled` ```python is_cancelled(self) -> bool ``` #### `is_running` ```python is_running(self) -> bool ``` Whether or not the engine is currently running a task. #### `log_finished_message` ```python log_finished_message(self) -> None ``` #### `record_terminal_state_timing` ```python record_terminal_state_timing(self, state: State) -> None ``` #### `state` ```python state(self) -> State ``` ### `SyncTaskRunEngine` **Methods:** #### `asset_context` ```python asset_context(self) ``` #### `begin_run` ```python begin_run(self) -> None ``` #### `call_hooks` ```python call_hooks(self, state: Optional[State] = None) -> None ``` #### `call_task_fn` ```python call_task_fn(self, transaction: Transaction) -> Union[ResultRecord[Any], None, Coroutine[Any, Any, R], R] ``` Convenience method to call the task function. Returns a coroutine if the task is async. #### `can_retry` ```python can_retry(self, exc_or_state: Exception | State[R]) -> bool ``` #### `client` ```python client(self) -> SyncPrefectClient ``` #### `handle_crash` ```python handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python handle_exception(self, exc: Exception) -> None ``` #### `handle_retry` ```python handle_retry(self, exc_or_state: Exception | State[R]) -> bool ``` Handle any task run retries. * If the task has retries left, and the retry condition is met, set the task to retrying and return True. * If the task has a retry delay, place in AwaitingRetry state with a delayed scheduled time. * If the task has no retries left, or the retry condition is not met, return False. #### `handle_success` ```python handle_success(self, result: R, transaction: Transaction) -> Union[ResultRecord[R], None, Coroutine[Any, Any, R], R] ``` #### `handle_timeout` ```python handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python initialize_run(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> Generator[Self, Any, Any] ``` Enters a client context and creates a task run if needed.
#### `result` ```python result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python run_context(self) ``` #### `set_state` ```python set_state(self, state: State[R], force: bool = False) -> State[R] ``` #### `setup_run_context` ```python setup_run_context(self, client: Optional[SyncPrefectClient] = None) ``` #### `start` ```python start(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> Generator[None, None, None] ``` #### `transaction_context` ```python transaction_context(self) -> Generator[Transaction, None, None] ``` #### `wait_until_ready` ```python wait_until_ready(self) -> None ``` Waits until the scheduled time (if it's in the future), then enters Running. ### `AsyncTaskRunEngine` **Methods:** #### `asset_context` ```python asset_context(self) ``` #### `begin_run` ```python begin_run(self) -> None ``` #### `call_hooks` ```python call_hooks(self, state: Optional[State] = None) -> None ``` #### `call_task_fn` ```python call_task_fn(self, transaction: AsyncTransaction) -> Union[ResultRecord[Any], None, Coroutine[Any, Any, R], R] ``` Convenience method to call the task function. Returns a coroutine if the task is async. #### `can_retry` ```python can_retry(self, exc_or_state: Exception | State[R]) -> bool ``` #### `client` ```python client(self) -> PrefectClient ``` #### `handle_crash` ```python handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python handle_exception(self, exc: Exception) -> None ``` #### `handle_retry` ```python handle_retry(self, exc_or_state: Exception | State[R]) -> bool ``` Handle any task run retries. * If the task has retries left, and the retry condition is met, set the task to retrying and return True. * If the task has a retry delay, place in AwaitingRetry state with a delayed scheduled time. * If the task has no retries left, or the retry condition is not met, return False. #### `handle_success` ```python handle_success(self, result: R, transaction: AsyncTransaction) -> Union[ResultRecord[R], None, Coroutine[Any, Any, R], R] ``` #### `handle_timeout` ```python handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python initialize_run(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> AsyncGenerator[Self, Any] ``` Enters a client context and creates a task run if needed. #### `result` ```python result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python run_context(self) ``` #### `set_state` ```python set_state(self, state: State, force: bool = False) -> State ``` #### `setup_run_context` ```python setup_run_context(self, client: Optional[PrefectClient] = None) ``` #### `start` ```python start(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> AsyncGenerator[None, None] ``` #### `transaction_context` ```python transaction_context(self) -> AsyncGenerator[AsyncTransaction, None] ``` #### `wait_until_ready` ```python wait_until_ready(self) -> None ``` Waits until the scheduled time (if it's in the future), then enters Running. # task_runners Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-task_runners # `prefect.task_runners` ## Classes ### `TaskRunner` Abstract base class for task runners. A task runner is responsible for submitting tasks to the task run engine running in an execution environment.
Submitted tasks are non-blocking and return a future object that can be used to wait for the task to complete and retrieve the result. Task runners are context managers and should be used in a `with` block to ensure proper cleanup of resources. **Methods:** #### `duplicate` ```python duplicate(self) -> Self ``` Return a new instance of this task runner with the same configuration. #### `map` ```python map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any | unmapped[Any] | allow_failure[Any]], wait_for: Iterable[PrefectFuture[R]] | None = None) -> PrefectFutureList[F] ``` Submit multiple tasks to the task run engine. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. * `wait_for`: A list of futures that the task depends on. **Returns:** * An iterable of future objects that can be used to wait for the tasks to * complete and retrieve the results. #### `name` ```python name(self) -> str ``` The name of this task runner #### `submit` ```python submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> F ``` #### `submit` ```python submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> F ``` #### `submit` ```python submit(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> F ``` ### `ThreadPoolTaskRunner` A task runner that executes tasks in a separate thread pool. **Examples:** Use a thread pool task runner with a flow: ```python from prefect import flow, task from prefect.task_runners import ThreadPoolTaskRunner @task def some_io_bound_task(x: int) -> int: # making a query to a database, reading a file, etc. return x * 2 @flow(task_runner=ThreadPoolTaskRunner(max_workers=3)) # use at most 3 threads at a time def my_io_bound_flow(): futures = [] for i in range(10): future = some_io_bound_task.submit(i * 100) futures.append(future) return [future.result() for future in futures] ``` Use a thread pool task runner as a context manager: ```python from prefect import task from prefect.task_runners import ThreadPoolTaskRunner @task def some_io_bound_task(x: int) -> int: # making a query to a database, reading a file, etc. return x * 2 # Use the runner directly with ThreadPoolTaskRunner(max_workers=2) as runner: future1 = runner.submit(some_io_bound_task, {"x": 1}) future2 = runner.submit(some_io_bound_task, {"x": 2}) result1 = future1.result() # 2 result2 = future2.result() # 4 ``` Configure max workers via settings: ```python # Set via environment variable # export PREFECT_TASK_RUNNER_THREAD_POOL_MAX_WORKERS=8 from prefect import flow from prefect.task_runners import ThreadPoolTaskRunner @flow(task_runner=ThreadPoolTaskRunner()) # Uses 8 workers from setting def my_flow(): ...
``` **Methods:** #### `cancel_all` ```python cancel_all(self) -> None ``` #### `duplicate` ```python duplicate(self) -> 'ThreadPoolTaskRunner[R]' ``` #### `map` ```python map(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `submit` ```python submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python submit(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` Submit a task to the task run engine running in a separate thread. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. * `wait_for`: A list of futures that the task depends on. **Returns:** * A future object that can be used to wait for the task to complete and * retrieve the result. ### `ProcessPoolTaskRunner` A task runner that executes tasks in a separate process pool. This task runner uses `ProcessPoolExecutor` to run tasks in separate processes, providing true parallelism for CPU-bound tasks and process isolation. Tasks are executed with proper context propagation and error handling. **Examples:** Use a process pool task runner with a flow: ```python from prefect import flow, task from prefect.task_runners import ProcessPoolTaskRunner @task def compute_heavy_task(n: int) -> int: # CPU-intensive computation that benefits from process isolation return sum(i ** 2 for i in range(n)) @flow(task_runner=ProcessPoolTaskRunner(max_workers=4)) def my_flow(): futures = [] for i in range(10): future = compute_heavy_task.submit(i * 1000) futures.append(future) return [future.result() for future in futures] ``` Use a process pool task runner as a context manager: ```python from prefect import task from prefect.task_runners import ProcessPoolTaskRunner @task def my_task(x: int) -> int: return x * 2 # Use the runner directly with ProcessPoolTaskRunner(max_workers=2) as runner: future1 = runner.submit(my_task, {"x": 1}) future2 = runner.submit(my_task, {"x": 2}) result1 = future1.result() # 2 result2 = future2.result() # 4 ``` Configure max workers via settings: ```python # Set via environment variable # export PREFECT_TASKS_RUNNER_PROCESS_POOL_MAX_WORKERS=8 from prefect import flow from prefect.task_runners import ProcessPoolTaskRunner @flow(task_runner=ProcessPoolTaskRunner()) # Uses 8 workers from setting def my_flow(): ...
``` **Methods:** #### `cancel_all` ```python cancel_all(self) -> None ``` #### `duplicate` ```python duplicate(self) -> Self ``` #### `map` ```python map(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `submit` ```python submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python submit(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` Submit a task to the task run engine running in a separate process. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. * `wait_for`: A list of futures that the task depends on. * `dependencies`: A dictionary of dependencies for the task. **Returns:** * A future object that can be used to wait for the task to complete and * retrieve the result. ### `PrefectTaskRunner` **Methods:** #### `duplicate` ```python duplicate(self) -> 'PrefectTaskRunner[R]' ``` #### `map` ```python map(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDistributedFuture[R]] ``` #### `map` ```python map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDistributedFuture[R]] ``` #### `map` ```python map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDistributedFuture[R]] ``` #### `submit` ```python submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectDistributedFuture[R] ``` #### `submit` ```python submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectDistributedFuture[R] ``` #### `submit` ```python submit(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectDistributedFuture[R] ``` Submit a task to the task run engine running in a separate thread. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. 
* `wait_for`: A list of futures that the task depends on. **Returns:** * A future object that can be used to wait for the task to complete and * retrieve the result. # task_runs Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-task_runs # `prefect.task_runs` ## Classes ### `TaskRunWaiter` A service used for waiting for a task run to finish. This service listens for task run events and provides a way to wait for a specific task run to finish. This is useful for waiting for a task run to finish before continuing execution. The service is a singleton and must be started before use. The service will automatically start when the first instance is created. A single websocket connection is used to listen for task run events. The service can be used to wait for a task run to finish by calling `TaskRunWaiter.wait_for_task_run` with the task run ID to wait for. The method will return when the task run has finished or the timeout has elapsed. The service will automatically stop when the Python process exits or when the global loop thread is stopped. Example: ```python import asyncio from uuid import uuid4 from prefect import task from prefect.task_engine import run_task_async from prefect.task_runs import TaskRunWaiter @task async def test_task(): await asyncio.sleep(5) print("Done!") async def main(): task_run_id = uuid4() asyncio.create_task(run_task_async(task=test_task, task_run_id=task_run_id)) await TaskRunWaiter.wait_for_task_run(task_run_id) print("Task run finished") if __name__ == "__main__": asyncio.run(main()) ``` **Methods:** #### `add_done_callback` ```python add_done_callback(cls, task_run_id: uuid.UUID, callback: Callable[[], None]) -> None ``` Add a callback to be called when a task run finishes. **Args:** * `task_run_id`: The ID of the task run to wait for. * `callback`: The callback to call when the task run finishes. #### `instance` ```python instance(cls) -> Self ``` Get the singleton instance of TaskRunWaiter. #### `start` ```python start(self) -> None ``` Start the TaskRunWaiter service. #### `stop` ```python stop(self) -> None ``` Stop the TaskRunWaiter service. #### `wait_for_task_run` ```python wait_for_task_run(cls, task_run_id: uuid.UUID, timeout: Optional[float] = None) -> Optional[State[Any]] ``` Wait for a task run to finish and return its final state. Note this relies on a websocket connection to receive events from the server and will not work with an ephemeral server. **Args:** * `task_run_id`: The ID of the task run to wait for. * `timeout`: The maximum time to wait for the task run to finish. Defaults to None. **Returns:** * The final state of the task run if available, None otherwise. # task_worker Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-task_worker # `prefect.task_worker` ## Functions ### `should_try_to_read_parameters` ```python should_try_to_read_parameters(task: Task[P, R], task_run: TaskRun) -> bool ``` Determines whether a task run should read parameters from the result store. ### `create_status_server` ```python create_status_server(task_worker: TaskWorker) -> FastAPI ``` ### `serve` ```python serve(*tasks: Task[P, R]) ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. Tasks do not need to be within a flow run context to be submitted. You must `.submit` the same task object that you pass to `serve`. **Args:** * `- tasks`: A list of tasks to serve. When a scheduled task run is found for a given task, the task run will be submitted to the engine for execution. 
* `- limit`: The maximum number of tasks that can be run concurrently. Defaults to 10. Pass `None` to remove the limit. * `- status_server_port`: An optional port on which to start an HTTP server exposing status information about the task worker. If not provided, no status server will run. * `- timeout`: If provided, the task worker will exit after the given number of seconds. Defaults to None, meaning the task worker will run indefinitely. ### `store_parameters` ```python store_parameters(result_store: ResultStore, identifier: UUID, parameters: dict[str, Any]) -> None ``` Store parameters for a task run in the result store. **Args:** * `result_store`: The result store to store the parameters in. * `identifier`: The identifier of the task run. * `parameters`: The parameters to store. ### `read_parameters` ```python read_parameters(result_store: ResultStore, identifier: UUID) -> dict[str, Any] ``` Read parameters for a task run from the result store. **Args:** * `result_store`: The result store to read the parameters from. * `identifier`: The identifier of the task run. **Returns:** * The parameters for the task run. ## Classes ### `StopTaskWorker` Raised when the task worker is stopped. ### `TaskWorker` This class is responsible for serving tasks that may be executed in the background by a task runner via the traditional engine machinery. When `start()` is called, the task worker will open a websocket connection to a server-side queue of scheduled task runs. When a scheduled task run is found, the scheduled task run is submitted to the engine for execution with a minimal `EngineContext` so that the task run can be governed by orchestration rules. **Args:** * `- tasks`: A list of tasks to serve. These tasks will be submitted to the engine when a scheduled task run is found. * `- limit`: The maximum number of tasks that can be run concurrently. Defaults to 10. Pass `None` to remove the limit. **Methods:** #### `available_tasks` ```python available_tasks(self) -> Optional[int] ``` #### `client_id` ```python client_id(self) -> str ``` #### `current_tasks` ```python current_tasks(self) -> Optional[int] ``` #### `execute_task_run` ```python execute_task_run(self, task_run: TaskRun) -> None ``` Execute a task run in the task worker. #### `handle_sigterm` ```python handle_sigterm(self, signum: int, frame: object) -> None ``` Shuts down the task worker when a SIGTERM is received. #### `limit` ```python limit(self) -> Optional[int] ``` #### `start` ```python start(self, timeout: Optional[float] = None) -> None ``` Starts a task worker, which runs the tasks provided in the constructor. **Args:** * `timeout`: If provided, the task worker will exit after the given number of seconds. Defaults to None, meaning the task worker will run indefinitely. #### `started` ```python started(self) -> bool ``` #### `started_at` ```python started_at(self) -> Optional[DateTime] ``` #### `stop` ```python stop(self) ``` Stops the task worker's polling cycle. # tasks Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-tasks # `prefect.tasks` Module containing the base workflow task class and decorator - for most use cases, using the `@task` decorator is preferred. ## Functions ### `task_input_hash` ```python task_input_hash(context: 'TaskRunContext', arguments: dict[str, Any]) -> Optional[str] ``` A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. 
If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs. **Args:** * `context`: the active `TaskRunContext` * `arguments`: a dictionary of arguments to be passed to the underlying task **Returns:** * a string hash if hashing succeeded, else `None` ### `exponential_backoff` ```python exponential_backoff(backoff_factor: float) -> Callable[[int], list[float]] ``` A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation. **Args:** * `backoff_factor`: the base delay for the first retry, subsequent retries will increase the delay time by powers of 2. **Returns:** * a callable that can be passed to the task constructor ### `task` ```python task(__fn: Optional[Callable[P, R]] = None) ``` Decorator to designate a function as a task in a Prefect workflow. This decorator may be used for asynchronous or synchronous functions. **Args:** * `name`: An optional name for the task; if not provided, the name will be inferred from the given function. * `description`: An optional string description for the task. * `tags`: An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. * `version`: An optional string specifying the version of this task definition * `cache_key_fn`: An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. * `cache_expiration`: An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. * `task_run_name`: An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. * `retries`: An optional number of times to retry on task run failure * `retry_delay_seconds`: Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. * `retry_jitter_factor`: An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd". * `persist_result`: A toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, which indicates that the global default should be used (which is `True` by default). * `result_storage`: An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. * `result_storage_key`: An optional key to store the result in storage at when persisted. Defaults to a unique identifier. * `result_serializer`: An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. * `timeout_seconds`: An optional number of seconds indicating a maximum runtime for the task. 
If the task exceeds this runtime, it will be marked as failed. * `log_prints`: If set, `print` statements in the task will be redirected to the Prefect logger for the task run. Defaults to `None`, which indicates that the value from the flow should be used. * `refresh_cache`: If set, cached results for the cache key are not used. Defaults to `None`, which indicates that a cached result from a previous execution with matching cache key is used. * `on_failure`: An optional list of callables to run when the task enters a failed state. * `on_completion`: An optional list of callables to run when the task enters a completed state. * `retry_condition_fn`: An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. * `viz_return_value`: An optional value to return when the task dependency tree is visualized. * `asset_deps`: An optional list of upstream assets that this task depends on. **Returns:** * A callable `Task` object which, when called, will submit the task for execution. **Examples:** Define a simple task ```python @task def add(x, y): return x + y ``` Define an async task ```python @task async def add(x, y): return x + y ``` Define a task with tags and a description ```python @task(tags={"a", "b"}, description="This task is empty but its my first!") def my_task(): pass ``` Define a task with a custom name ```python @task(name="The Ultimate Task") def my_task(): pass ``` Define a task that retries 3 times with a 5 second delay between attempts ```python from random import randint @task(retries=3, retry_delay_seconds=5) def my_task(): x = randint(0, 5) if x >= 3: # Make a task that fails sometimes raise ValueError("Retry me please!") return x ``` Define a task that is cached for a day based on its inputs ```python from prefect.tasks import task_input_hash from datetime import timedelta @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1)) def my_task(): return "hello" ``` ## Classes ### `TaskRunNameCallbackWithParameters` **Methods:** #### `is_callback_with_parameters` ```python is_callback_with_parameters(cls, callable: Callable[..., str]) -> TypeIs[Self] ``` ### `TaskOptions` A TypedDict representing all available task configuration options. This can be used with `Unpack` to provide type hints for \*\*kwargs. ### `Task` A Prefect task definition. Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run. To preserve the input and output types, we use the generic type variables P and R for "Parameters" and "Returns" respectively. **Args:** * `fn`: The function defining the task. * `name`: An optional name for the task; if not provided, the name will be inferred from the given function. * `description`: An optional string description for the task. * `tags`: An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. 
* `version`: An optional string specifying the version of this task definition * `cache_policy`: A cache policy that determines the level of caching for this task * `cache_key_fn`: An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. * `cache_expiration`: An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. * `task_run_name`: An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. * `retries`: An optional number of times to retry on task run failure. * `retry_delay_seconds`: Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. * `retry_jitter_factor`: An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd". * `persist_result`: A toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, which indicates that the global default should be used (which is `True` by default). * `result_storage`: An optional block to use to persist the result of this task. Defaults to the value set in the flow the task is called in. * `result_storage_key`: An optional key to store the result in storage at when persisted. Defaults to a unique identifier. * `result_serializer`: An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. * `timeout_seconds`: An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. * `log_prints`: If set, `print` statements in the task will be redirected to the Prefect logger for the task run. Defaults to `None`, which indicates that the value from the flow should be used. * `refresh_cache`: If set, cached results for the cache key are not used. Defaults to `None`, which indicates that a cached result from a previous execution with matching cache key is used. * `on_failure`: An optional list of callables to run when the task enters a failed state. * `on_completion`: An optional list of callables to run when the task enters a completed state. * `on_commit`: An optional list of callables to run when the task's idempotency record is committed. * `on_rollback`: An optional list of callables to run when the task rolls back. * `retry_condition_fn`: An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. * `viz_return_value`: An optional value to return when the task dependency tree is visualized. 
* `asset_deps`: An optional list of upstream assets that this task depends on. **Methods:** #### `apply_async` ```python apply_async(self, args: Optional[tuple[Any, ...]] = None, kwargs: Optional[dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[R]]] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> PrefectDistributedFuture[R] ``` Create a pending task run for a task worker to execute. **Args:** * `args`: Arguments to run the task with * `kwargs`: Keyword arguments to run the task with **Returns:** * A PrefectDistributedFuture object representing the pending task run Examples: Define a task ```python from prefect import task @task def my_task(name: str = "world"): return f"hello {name}" ``` Create a pending task run for the task ```python from prefect import flow @flow def my_flow(): my_task.apply_async(("marvin",)) ``` Wait for a task to finish ```python @flow def my_flow(): my_task.apply_async(("marvin",)).wait() ``` ```python @flow def my_flow(): print(my_task.apply_async(("marvin",)).result()) my_flow() # hello marvin ``` TODO: Enforce ordering between tasks that do not exchange data ```python @task def task_1(): pass @task def task_2(): pass @flow def my_flow(): x = task_1.apply_async() # task 2 will wait for task_1 to complete y = task_2.apply_async(wait_for=[x]) ``` #### `create_local_run` ```python create_local_run(self, client: Optional['PrefectClient'] = None, id: Optional[UUID] = None, parameters: Optional[dict[str, Any]] = None, flow_run_context: Optional[FlowRunContext] = None, parent_task_run_context: Optional[TaskRunContext] = None, wait_for: Optional[OneOrManyFutureOrResult[Any]] = None, extra_task_inputs: Optional[dict[str, set[RunInput]]] = None, deferred: bool = False) -> TaskRun ``` #### `create_run` ```python create_run(self, client: Optional['PrefectClient'] = None, id: Optional[UUID] = None, parameters: Optional[dict[str, Any]] = None, flow_run_context: Optional[FlowRunContext] = None, parent_task_run_context: Optional[TaskRunContext] = None, wait_for: Optional[OneOrManyFutureOrResult[Any]] = None, extra_task_inputs: Optional[dict[str, set[RunInput]]] = None, deferred: bool = False) -> TaskRun ``` #### `delay` ```python delay(self, *args: P.args, **kwargs: P.kwargs) -> PrefectDistributedFuture[R] ``` An alias for `apply_async` with simpler calling semantics. Avoids having to use explicit "args" and "kwargs" arguments. Arguments will pass through as-is to the task. 
Examples: Define a task ```python from prefect import task @task def my_task(name: str = "world"): return f"hello {name}" ``` Create a pending task run for the task ```python from prefect import flow @flow def my_flow(): my_task.delay("marvin") ``` Wait for a task to finish ```python @flow def my_flow(): my_task.delay("marvin").wait() ``` Use the result from a task in a flow ```python @flow def my_flow(): print(my_task.delay("marvin").result()) my_flow() # hello marvin ``` #### `isclassmethod` ```python isclassmethod(self) -> bool ``` #### `ismethod` ```python ismethod(self) -> bool ``` #### `isstaticmethod` ```python isstaticmethod(self) -> bool ``` #### `map` ```python map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> list[State[R]] ``` #### `map` ```python map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> PrefectFutureList[R] ``` #### `map` ```python map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> list[State[R]] ``` #### `map` ```python map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> PrefectFutureList[R] ``` #### `map` ```python map(self: 'Task[P, Coroutine[Any, Any, R]]', *args: Any, **kwargs: Any) -> list[State[R]] ``` #### `map` ```python map(self: 'Task[P, Coroutine[Any, Any, R]]', *args: Any, **kwargs: Any) -> PrefectFutureList[R] ``` #### `map` ```python map(self, *args: Any, **kwargs: Any) -> Union[list[State[R]], PrefectFutureList[R]] ``` Submit a mapped run of the task to a worker. Must be called within a flow run context. Will return a list of futures that should be waited on before exiting the flow context to ensure all mapped tasks have completed. Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value. Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks if given a future as input while the future is resolved. It also blocks while the tasks are being submitted, once they are submitted, the flow function will continue executing. This method is always synchronous, even if the underlying user function is asynchronous. **Args:** * `*args`: Iterable and static arguments to run the tasks with * `return_state`: Return a list of Prefect States that wrap the results of each task run. 
* `wait_for`: Upstream task futures to wait for before starting the task
* `**kwargs`: Keyword iterable arguments to run the task with

**Returns:**

* A list of futures allowing asynchronous access to the state of the tasks

Examples:

Define a task

```python
from prefect import task

@task
def my_task(x):
    return x + 1
```

Create mapped tasks

```python
from prefect import flow

@flow
def my_flow():
    return my_task.map([1, 2, 3])
```

Wait for all mapped tasks to finish

```python
@flow
def my_flow():
    futures = my_task.map([1, 2, 3])
    futures.wait()
    # Now all of the mapped tasks have finished
    my_task(10)
```

Use the result from mapped tasks in a flow

```python
@flow
def my_flow():
    futures = my_task.map([1, 2, 3])
    for x in futures.result():
        print(x)

my_flow()
# 2
# 3
# 4
```

Enforce ordering between tasks that do not exchange data

```python
@task
def task_1(x):
    pass

@task
def task_2(y):
    pass

@flow
def my_flow():
    x = task_1.submit()

    # task 2 will wait for task_1 to complete
    y = task_2.map([1, 2, 3], wait_for=[x])
    return y
```

Use a non-iterable input as a constant across mapped tasks

```python
@task
def display(prefix, item):
    print(prefix, item)

@flow
def my_flow():
    return display.map("Check it out: ", [1, 2, 3])

my_flow()
# Check it out: 1
# Check it out: 2
# Check it out: 3
```

Use `unmapped` to treat an iterable argument as a constant

```python
from prefect import unmapped

@task
def add_n_to_items(items, n):
    return [item + n for item in items]

@flow
def my_flow():
    return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3])

my_flow()
# [[11, 21], [12, 22], [13, 23]]
```

#### `on_commit`

```python
on_commit(self, fn: Callable[['Transaction'], None]) -> Callable[['Transaction'], None]
```

#### `on_completion`

```python
on_completion(self, fn: StateHookCallable) -> StateHookCallable
```

#### `on_failure`

```python
on_failure(self, fn: StateHookCallable) -> StateHookCallable
```

#### `on_rollback`

```python
on_rollback(self, fn: Callable[['Transaction'], None]) -> Callable[['Transaction'], None]
```

#### `serve`

```python
serve(self) -> NoReturn
```

Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute.

**Args:**

* `task_runner`: The task runner to use for serving the task. If not provided, the default task runner will be used.

**Examples:**

Serve a task using the default task runner

```python
@task
def my_task():
    return 1

my_task.serve()
```

#### `submit`

```python
submit(self: 'Task[P, R]', *args: P.args, **kwargs: P.kwargs) -> PrefectFuture[R]
```

#### `submit`

```python
submit(self: 'Task[P, Coroutine[Any, Any, R]]', *args: P.args, **kwargs: P.kwargs) -> PrefectFuture[R]
```

#### `submit`

```python
submit(self: 'Task[P, R]', *args: P.args, **kwargs: P.kwargs) -> PrefectFuture[R]
```

#### `submit`

```python
submit(self: 'Task[P, Coroutine[Any, Any, R]]', *args: P.args, **kwargs: P.kwargs) -> State[R]
```

#### `submit`

```python
submit(self: 'Task[P, R]', *args: P.args, **kwargs: P.kwargs) -> State[R]
```

#### `submit`

```python
submit(self: 'Union[Task[P, R], Task[P, Coroutine[Any, Any, R]]]', *args: Any, **kwargs: Any)
```

Submit a run of the task to the engine.

Will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted, once it is submitted, the flow function will continue executing.

This method is always synchronous, even if the underlying user function is asynchronous.
**Args:** * `*args`: Arguments to run the task with * `return_state`: Return the result of the flow run wrapped in a Prefect State. * `wait_for`: Upstream task futures to wait for before starting the task * `**kwargs`: Keyword arguments to run the task with **Returns:** * If `return_state` is False a future allowing asynchronous access to the state of the task * If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to the state of the task Examples: Define a task ```python from prefect import task @task def my_task(): return "hello" ``` Run a task in a flow ```python from prefect import flow @flow def my_flow(): my_task.submit() ``` Wait for a task to finish ```python @flow def my_flow(): my_task.submit().wait() ``` Use the result from a task in a flow ```python @flow def my_flow(): print(my_task.submit().result()) my_flow() # hello ``` Run an async task in an async flow ```python @task async def my_async_task(): pass @flow async def my_flow(): my_async_task.submit() ``` Run a sync task in an async flow ```python @flow async def my_flow(): my_task.submit() ``` Enforce ordering between tasks that do not exchange data ```python @task def task_1(): pass @task def task_2(): pass @flow def my_flow(): x = task_1.submit() # task 2 will wait for task_1 to complete y = task_2.submit(wait_for=[x]) ``` #### `with_options` ```python with_options(self) -> 'Task[P, R]' ``` Create a new task from the current object, updating provided options. **Args:** * `name`: A new name for the task. * `description`: A new description for the task. * `tags`: A new set of tags for the task. If given, existing tags are ignored, not merged. * `cache_key_fn`: A new cache key function for the task. * `cache_expiration`: A new cache expiration time for the task. * `task_run_name`: An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. * `retries`: A new number of times to retry on task run failure. * `retry_delay_seconds`: Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. * `retry_jitter_factor`: An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd". * `persist_result`: A new option for enabling or disabling result persistence. * `result_storage`: A new storage type to use for results. * `result_serializer`: A new serializer to use for results. * `result_storage_key`: A new key for the persisted result to be stored at. * `timeout_seconds`: A new maximum time for the task to complete in seconds. * `log_prints`: A new option for enabling or disabling redirection of `print` statements. * `refresh_cache`: A new option for enabling or disabling cache refresh. * `on_completion`: A new list of callables to run when the task enters a completed state. * `on_failure`: A new list of callables to run when the task enters a failed state. * `retry_condition_fn`: An optional callable run when a task run returns a Failed state. 
Should return `True` if the task should continue to its retry policy, and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy.
* `viz_return_value`: An optional value to return when the task dependency tree is visualized.

**Returns:**

* A new `Task` instance.

Examples:

Create a new task from an existing task and update the name:

```python
@task(name="My task")
def my_task():
    return 1

new_task = my_task.with_options(name="My new task")
```

Create a new task from an existing task and update the retry settings:

```python
from random import randint

@task(retries=1, retry_delay_seconds=5)
def my_task():
    x = randint(0, 5)
    if x >= 3:  # Make a task that fails sometimes
        raise ValueError("Retry me please!")
    return x

new_task = my_task.with_options(retries=5, retry_delay_seconds=2)
```

Use a task with updated options within a flow:

```python
@task(name="My task")
def my_task():
    return 1

@flow
def my_flow():
    new_task = my_task.with_options(name="My new task")
    new_task()
```

### `MaterializingTask`

A task that materializes Assets.

**Args:**

* `assets`: List of Assets that this task materializes (can be str or Asset)
* `materialized_by`: An optional tool that materialized the asset e.g. "dbt" or "spark"
* `**task_kwargs`: All other Task arguments

**Methods:**

#### `with_options`

```python
with_options(self, assets: Optional[Sequence[Union[str, Asset]]] = None, **task_kwargs: Unpack[TaskOptions]) -> 'MaterializingTask[P, R]'
```

# __init__

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-telemetry-__init__

# `prefect.telemetry`

*This module is empty or contains only private/internal implementations.*

# run_telemetry

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-telemetry-run_telemetry

# `prefect.telemetry.run_telemetry`

## Classes

### `OTELSetter`

A setter for OpenTelemetry that supports Prefect's custom labels.

**Methods:**

#### `set`

```python
set(self, carrier: KeyValueLabels, key: str, value: str) -> None
```

### `RunTelemetry`

A class for managing the telemetry of runs.

**Methods:**

#### `async_start_span`

```python
async_start_span(self, run: FlowOrTaskRun, client: PrefectClient, parameters: dict[str, Any] | None = None) -> Span | None
```

#### `end_span_on_failure`

```python
end_span_on_failure(self, terminal_message: str | None = None) -> None
```

End a span for a run on failure.

#### `end_span_on_success`

```python
end_span_on_success(self) -> None
```

End a span for a run on success.

#### `record_exception`

```python
record_exception(self, exc: BaseException) -> None
```

Record an exception on a span.

#### `start_span`

```python
start_span(self, run: FlowOrTaskRun, client: SyncPrefectClient, parameters: dict[str, Any] | None = None) -> Span | None
```

#### `traceparent_from_span`

```python
traceparent_from_span(span: Span) -> str | None
```

#### `update_run_name`

```python
update_run_name(self, name: str) -> None
```

Update the name of the run.

#### `update_state`

```python
update_state(self, new_state: State) -> None
```

Update a span with the state of a run.
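`OTELSetter` follows OpenTelemetry's `Setter` protocol, so it can be handed to the standard propagation API. The following is a minimal sketch, not taken from the Prefect source, assuming the `opentelemetry-api` and `opentelemetry-sdk` packages are available; a plain `dict` stands in for Prefect's `KeyValueLabels` carrier and the tracer name is illustrative:

```python
from opentelemetry import trace
from opentelemetry.propagate import inject
from opentelemetry.sdk.trace import TracerProvider

from prefect.telemetry.run_telemetry import OTELSetter

# Configure a real tracer provider so the active span has a valid context.
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("example")

carrier: dict[str, str] = {}  # stands in for Prefect's KeyValueLabels

with tracer.start_as_current_span("example-span"):
    # The default W3C propagator writes a `traceparent` entry into the
    # carrier by calling OTELSetter.set(carrier, key, value).
    inject(carrier, setter=OTELSetter())

print(carrier)  # expected to contain a "traceparent" entry
```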
# __init__

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-testing-__init__

# `prefect.testing`

*This module is empty or contains only private/internal implementations.*

# cli

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-testing-cli

# `prefect.testing.cli`

## Functions

### `check_contains`

```python
check_contains(cli_result: Result, content: str, should_contain: bool) -> None
```

Utility function to see if content is or is not in a CLI result.

**Args:**

* `should_contain`: if True, checks that content is in cli\_result, if False, checks that content is not in cli\_result

### `invoke_and_assert`

```python
invoke_and_assert(command: str | list[str], user_input: str | None = None, prompts_and_responses: list[tuple[str, str] | tuple[str, str, str]] | None = None, expected_output: str | None = None, expected_output_contains: str | Iterable[str] | None = None, expected_output_does_not_contain: str | Iterable[str] | None = None, expected_line_count: int | None = None, expected_code: int | None = 0, echo: bool = True, temp_dir: str | None = None) -> Result
```

Test utility for the Prefect CLI application, asserts exact match with CLI output.

**Args:**

* `command`: Command passed to the Typer CliRunner
* `user_input`: User input passed to the Typer CliRunner when running interactive commands.
* `expected_output`: Used when you expect the CLI output to be an exact match with the provided text.
* `expected_output_contains`: Used when you expect the CLI output to contain the string or strings.
* `expected_output_does_not_contain`: Used when you expect the CLI output to not contain the string or strings.
* `expected_code`: 0 if we expect the app to exit cleanly, else 1 if we expect the app to exit with an error.
* `temp_dir`: if provided, the CLI command will be run with this as its present working directory.

### `temporary_console_width`

```python
temporary_console_width(console: Console, width: int)
```

# docker

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-testing-docker

# `prefect.testing.docker`

## Functions

### `capture_builders`

```python
capture_builders() -> Generator[list[ImageBuilder], None, None]
```

Captures any instances of ImageBuilder created while this context is active.

# fixtures

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-testing-fixtures

# `prefect.testing.fixtures`

## Functions

### `add_prefect_loggers_to_caplog`

```python
add_prefect_loggers_to_caplog(caplog: pytest.LogCaptureFixture) -> Generator[None, None, None]
```

### `is_port_in_use`

```python
is_port_in_use(port: int) -> bool
```

### `hosted_api_server`

```python
hosted_api_server(unused_tcp_port_factory: Callable[[], int]) -> AsyncGenerator[str, None]
```

Runs an instance of the Prefect API server in a subprocess instead of using the ephemeral application.

Uses the same database as the rest of the tests.

### `use_hosted_api_server`

```python
use_hosted_api_server(hosted_api_server: str) -> Generator[str, None, None]
```

Sets `PREFECT_API_URL` to the test session's hosted API endpoint.

### `disable_hosted_api_server`

```python
disable_hosted_api_server() -> Generator[None, None, None]
```

Disables the hosted API server by setting `PREFECT_API_URL` to `None`.

### `enable_ephemeral_server`

```python
enable_ephemeral_server(disable_hosted_api_server: None) -> Generator[None, None, None]
```

Enables the ephemeral server by setting `PREFECT_SERVER_ALLOW_EPHEMERAL_MODE` to `True`.
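The entries in this module are ordinary pytest fixtures rather than plain functions, so one way to use them in your own test suite is to re-export them from a local `conftest.py`. The sketch below is hypothetical and assumes `pytest` plus `pytest-asyncio` (which supplies the `unused_tcp_port_factory` fixture that `hosted_api_server` depends on); the test file and flow names are illustrative:

```python
# conftest.py (hypothetical) -- makes Prefect's fixtures discoverable by pytest
from prefect.testing.fixtures import *  # noqa: F401,F403


# test_hosted_api.py (hypothetical)
def test_flow_runs_against_hosted_server(use_hosted_api_server: str) -> None:
    # `use_hosted_api_server` points PREFECT_API_URL at the subprocess server
    # started by `hosted_api_server` for the duration of the test.
    from prefect import flow

    @flow
    def say_hello() -> str:
        return "hello"

    assert say_hello() == "hello"
```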
### `mock_anyio_sleep` ```python mock_anyio_sleep(monkeypatch: pytest.MonkeyPatch) -> Generator[Callable[[float], None], None, None] ``` Mock sleep used to not actually sleep but to set the current time to now + sleep delay seconds while still yielding to other tasks in the event loop. Provides "assert\_sleeps\_for" context manager which asserts a sleep time occurred within the context while using the actual runtime of the context as a tolerance. ### `recorder` ```python recorder() -> Recorder ``` ### `puppeteer` ```python puppeteer() -> Puppeteer ``` ### `events_server` ```python events_server(unused_tcp_port: int, recorder: Recorder, puppeteer: Puppeteer) -> AsyncGenerator[Server, None] ``` ### `events_api_url` ```python events_api_url(events_server: Server, unused_tcp_port: int) -> str ``` ### `events_cloud_api_url` ```python events_cloud_api_url(events_server: Server, unused_tcp_port: int) -> str ``` ### `mock_should_emit_events` ```python mock_should_emit_events(monkeypatch: pytest.MonkeyPatch) -> mock.Mock ``` ### `asserting_events_worker` ```python asserting_events_worker(monkeypatch: pytest.MonkeyPatch) -> Generator[EventsWorker, None, None] ``` ### `asserting_and_emitting_events_worker` ```python asserting_and_emitting_events_worker(monkeypatch: pytest.MonkeyPatch) -> Generator[EventsWorker, None, None] ``` ### `events_pipeline` ```python events_pipeline(asserting_events_worker: EventsWorker) -> AsyncGenerator[EventsPipeline, None] ``` ### `emitting_events_pipeline` ```python emitting_events_pipeline(asserting_and_emitting_events_worker: EventsWorker) -> AsyncGenerator[EventsPipeline, None] ``` ### `reset_worker_events` ```python reset_worker_events(asserting_events_worker: EventsWorker) -> Generator[None, None, None] ``` ### `enable_lineage_events` ```python enable_lineage_events() -> Generator[None, None, None] ``` A fixture that ensures lineage events are enabled. ## Classes ### `Recorder` ### `Puppeteer` # standard_test_suites Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-testing-standard_test_suites # `prefect.testing.standard_test_suites` *This module is empty or contains only private/internal implementations.* # utilities Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-testing-utilities # `prefect.testing.utilities` Internal utilities for tests. ## Functions ### `exceptions_equal` ```python exceptions_equal(a: Exception, b: Exception) -> bool ``` Exceptions cannot be compared by `==`. They can be compared using `is` but this will fail if the exception is serialized/deserialized so this utility does its best to assert equality using the type and args used to initialize the exception ### `kubernetes_environments_equal` ```python kubernetes_environments_equal(actual: list[dict[str, str]], expected: list[dict[str, str]] | dict[str, str]) -> bool ``` ### `assert_does_not_warn` ```python assert_does_not_warn(ignore_warnings: list[type[Warning]] | None = None) -> Generator[None, None, None] ``` Converts warnings to errors within this context to assert warnings are not raised, except for those specified in ignore\_warnings. Parameters: * ignore\_warnings: List of warning types to ignore. Example: \[DeprecationWarning, UserWarning] ### `prefect_test_harness` ```python prefect_test_harness(server_startup_timeout: int | None = 30) ``` Temporarily run flows against a local SQLite database for testing. **Args:** * `server_startup_timeout`: The maximum time to wait for the server to start. Defaults to 30 seconds. 
If set to `None`, the value of `PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS` will be used.

**Examples:**

```python
from prefect import flow
from prefect.testing.utilities import prefect_test_harness

@flow
def my_flow():
    return 'Done!'

with prefect_test_harness():
    assert my_flow() == 'Done!' # run against temporary db
```

### `get_most_recent_flow_run`

```python
get_most_recent_flow_run(client: 'PrefectClient | None' = None, flow_name: str | None = None) -> 'FlowRun'
```

### `assert_blocks_equal`

```python
assert_blocks_equal(found: Block, expected: Block, exclude_private: bool = True, **kwargs: Any) -> None
```

### `assert_uses_result_serializer`

```python
assert_uses_result_serializer(state: State, serializer: str | Serializer, client: 'PrefectClient') -> None
```

### `assert_uses_result_storage`

```python
assert_uses_result_storage(state: State, storage: 'str | ReadableFileSystem', client: 'PrefectClient') -> None
```

### `a_test_step`

```python
a_test_step(**kwargs: Any) -> dict[str, Any]
```

### `b_test_step`

```python
b_test_step(**kwargs: Any) -> dict[str, Any]
```

# transactions

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-transactions

# `prefect.transactions`

## Functions

### `get_transaction`

```python
get_transaction() -> BaseTransaction | None
```

### `transaction`

```python
transaction(key: str | None = None, store: ResultStore | None = None, commit_mode: CommitMode | None = None, isolation_level: IsolationLevel | None = None, overwrite: bool = False, write_on_commit: bool = True, logger: logging.Logger | LoggingAdapter | None = None) -> Generator[Transaction, None, None]
```

A context manager for opening and managing a transaction.

**Args:**

* `key`: An identifier to use for the transaction
* `store`: The store to use for persisting the transaction result. If not provided, a default store will be used based on the current run context.
* `commit_mode`: The commit mode controlling when the transaction and child transactions are committed
* `overwrite`: Whether to overwrite an existing transaction record in the store
* `write_on_commit`: Whether to write the result to the store on commit. If not provided, the default will be determined by the current run context. If no run context is available, the value of `PREFECT_RESULTS_PERSIST_BY_DEFAULT` will be used.

### `atransaction`

```python
atransaction(key: str | None = None, store: ResultStore | None = None, commit_mode: CommitMode | None = None, isolation_level: IsolationLevel | None = None, overwrite: bool = False, write_on_commit: bool = True, logger: logging.Logger | LoggingAdapter | None = None) -> AsyncGenerator[AsyncTransaction, None]
```

An asynchronous context manager for opening and managing an asynchronous transaction.

**Args:**

* `key`: An identifier to use for the transaction
* `store`: The store to use for persisting the transaction result. If not provided, a default store will be used based on the current run context.
* `commit_mode`: The commit mode controlling when the transaction and child transactions are committed
* `overwrite`: Whether to overwrite an existing transaction record in the store
* `write_on_commit`: Whether to write the result to the store on commit. If not provided, the default will be determined by the current run context. If no run context is available, the value of `PREFECT_RESULTS_PERSIST_BY_DEFAULT` will be used.
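Neither context manager above ships with an inline example, so here is a minimal sketch of `atransaction`; the key name and stored values are illustrative, and `write_on_commit=False` is used so nothing is written to a result store:

```python
import asyncio

from prefect.transactions import atransaction


async def main() -> None:
    # Open an asynchronous transaction and stage a value on it, mirroring the
    # `get`/`set` examples documented on the transaction classes below.
    async with atransaction(key="example-transaction", write_on_commit=False) as txn:
        txn.set("status", "staged")
        assert txn.get("status") == "staged"


asyncio.run(main())
```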
## Classes ### `IsolationLevel` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `CommitMode` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TransactionState` **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `BaseTransaction` A base model for transaction state. **Methods:** #### `add_child` ```python add_child(self, transaction: Self) -> None ``` #### `get` ```python get(self, name: str, default: Any = NotSet) -> Any ``` Get a stored value from the transaction. Child transactions will return values from their parents unless a value with the same name is set in the child transaction. Direct changes to returned values will not update the stored value. To update the stored value, use the `set` method. **Args:** * `name`: The name of the value to get * `default`: The default value to return if the value is not found **Returns:** * The value from the transaction **Examples:** Get a value from the transaction: ```python with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` Get a value from a parent transaction: ```python with transaction() as parent: parent.set("key", "parent_value") with transaction() as child: assert child.get("key") == "parent_value" ``` Update a stored value: ```python with transaction() as txn: txn.set("key", [1, 2, 3]) value = txn.get("key") value.append(4) # Stored value is not updated until `.set` is called assert value == [1, 2, 3, 4] assert txn.get("key") == [1, 2, 3] txn.set("key", value) assert txn.get("key") == [1, 2, 3, 4] ``` #### `get` ```python get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `get_active` ```python get_active(cls: Type[Self]) -> Optional[Self] ``` #### `get_parent` ```python get_parent(self) -> Self | None ``` #### `is_active` ```python is_active(self) -> bool ``` #### `is_committed` ```python is_committed(self) -> bool ``` #### `is_pending` ```python is_pending(self) -> bool ``` #### `is_rolled_back` ```python is_rolled_back(self) -> bool ``` #### `is_staged` ```python is_staged(self) -> bool ``` #### `model_copy` ```python model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Returns:** * A new model instance. #### `prepare_transaction` ```python prepare_transaction(self) -> None ``` Helper method to prepare transaction state and validate configuration. #### `serialize` ```python serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. #### `set` ```python set(self, name: str, value: Any) -> None ``` Set a stored value in the transaction. **Args:** * `name`: The name of the value to set * `value`: The value to set **Examples:** Set a value for use later in the transaction: ```python with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` #### `stage` ```python stage(self, value: Any, on_rollback_hooks: Optional[list[Callable[..., Any]]] = None, on_commit_hooks: Optional[list[Callable[..., Any]]] = None) -> None ``` Stage a value to be committed later. ### `Transaction` A model representing the state of a transaction. 
**Methods:** #### `add_child` ```python add_child(self, transaction: Self) -> None ``` #### `begin` ```python begin(self) -> None ``` #### `commit` ```python commit(self) -> bool ``` #### `get` ```python get(self, name: str, default: Any = NotSet) -> Any ``` Get a stored value from the transaction. Child transactions will return values from their parents unless a value with the same name is set in the child transaction. Direct changes to returned values will not update the stored value. To update the stored value, use the `set` method. **Args:** * `name`: The name of the value to get * `default`: The default value to return if the value is not found **Returns:** * The value from the transaction **Examples:** Get a value from the transaction: ```python with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` Get a value from a parent transaction: ```python with transaction() as parent: parent.set("key", "parent_value") with transaction() as child: assert child.get("key") == "parent_value" ``` Update a stored value: ```python with transaction() as txn: txn.set("key", [1, 2, 3]) value = txn.get("key") value.append(4) # Stored value is not updated until `.set` is called assert value == [1, 2, 3, 4] assert txn.get("key") == [1, 2, 3] txn.set("key", value) assert txn.get("key") == [1, 2, 3, 4] ``` #### `get_active` ```python get_active(cls: Type[Self]) -> Optional[Self] ``` #### `get_parent` ```python get_parent(self) -> Self | None ``` #### `is_active` ```python is_active(self) -> bool ``` #### `is_committed` ```python is_committed(self) -> bool ``` #### `is_pending` ```python is_pending(self) -> bool ``` #### `is_rolled_back` ```python is_rolled_back(self) -> bool ``` #### `is_staged` ```python is_staged(self) -> bool ``` #### `prepare_transaction` ```python prepare_transaction(self) -> None ``` Helper method to prepare transaction state and validate configuration. #### `read` ```python read(self) -> ResultRecord[Any] | None ``` #### `reset` ```python reset(self) -> None ``` #### `rollback` ```python rollback(self) -> bool ``` #### `run_hook` ```python run_hook(self, hook: Callable[..., Any], hook_type: str) -> None ``` #### `set` ```python set(self, name: str, value: Any) -> None ``` Set a stored value in the transaction. **Args:** * `name`: The name of the value to set * `value`: The value to set **Examples:** Set a value for use later in the transaction: ```python with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` #### `stage` ```python stage(self, value: Any, on_rollback_hooks: Optional[list[Callable[..., Any]]] = None, on_commit_hooks: Optional[list[Callable[..., Any]]] = None) -> None ``` Stage a value to be committed later. ### `AsyncTransaction` A model representing the state of an asynchronous transaction. **Methods:** #### `add_child` ```python add_child(self, transaction: Self) -> None ``` #### `begin` ```python begin(self) -> None ``` #### `commit` ```python commit(self) -> bool ``` #### `get` ```python get(self, name: str, default: Any = NotSet) -> Any ``` Get a stored value from the transaction. Child transactions will return values from their parents unless a value with the same name is set in the child transaction. Direct changes to returned values will not update the stored value. To update the stored value, use the `set` method. 
**Args:** * `name`: The name of the value to get * `default`: The default value to return if the value is not found **Returns:** * The value from the transaction **Examples:** Get a value from the transaction: ```python with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` Get a value from a parent transaction: ```python with transaction() as parent: parent.set("key", "parent_value") with transaction() as child: assert child.get("key") == "parent_value" ``` Update a stored value: ```python with transaction() as txn: txn.set("key", [1, 2, 3]) value = txn.get("key") value.append(4) # Stored value is not updated until `.set` is called assert value == [1, 2, 3, 4] assert txn.get("key") == [1, 2, 3] txn.set("key", value) assert txn.get("key") == [1, 2, 3, 4] ``` #### `get_active` ```python get_active(cls: Type[Self]) -> Optional[Self] ``` #### `get_parent` ```python get_parent(self) -> Self | None ``` #### `is_active` ```python is_active(self) -> bool ``` #### `is_committed` ```python is_committed(self) -> bool ``` #### `is_pending` ```python is_pending(self) -> bool ``` #### `is_rolled_back` ```python is_rolled_back(self) -> bool ``` #### `is_staged` ```python is_staged(self) -> bool ``` #### `prepare_transaction` ```python prepare_transaction(self) -> None ``` Helper method to prepare transaction state and validate configuration. #### `read` ```python read(self) -> ResultRecord[Any] | None ``` #### `reset` ```python reset(self) -> None ``` #### `rollback` ```python rollback(self) -> bool ``` #### `run_hook` ```python run_hook(self, hook: Callable[..., Any], hook_type: str) -> None ``` #### `set` ```python set(self, name: str, value: Any) -> None ``` Set a stored value in the transaction. **Args:** * `name`: The name of the value to set * `value`: The value to set **Examples:** Set a value for use later in the transaction: ```python with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` #### `stage` ```python stage(self, value: Any, on_rollback_hooks: Optional[list[Callable[..., Any]]] = None, on_commit_hooks: Optional[list[Callable[..., Any]]] = None) -> None ``` Stage a value to be committed later. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-types-__init__ # `prefect.types` ## Functions ### `check_variable_value` ```python check_variable_value(value: object) -> object ``` ### `cast_none_to_empty_dict` ```python cast_none_to_empty_dict(value: Any) -> dict[str, Any] ``` ### `validate_set_T_from_delim_string` ```python validate_set_T_from_delim_string(value: Union[str, T, set[T], None], type_: Any, delim: str | None = None) -> set[T] ``` "no-info" before validator useful in scooping env vars e.g. `PREFECT_CLIENT_RETRY_EXTRA_CODES=429,502,503` -> `{429, 502, 503}` e.g. `PREFECT_CLIENT_RETRY_EXTRA_CODES=429` -> `{429}` ### `parse_retry_delay_input` ```python parse_retry_delay_input(value: Any) -> Any ``` Parses various inputs (string, int, float, list) into a format suitable for TaskRetryDelaySeconds (int, float, list\[float], or None). Handles comma-separated strings for lists of delays. ### `convert_none_to_empty_dict` ```python convert_none_to_empty_dict(v: Optional[KeyValueLabels]) -> KeyValueLabels ``` ## Classes ### `SecretDict` # entrypoint Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-types-entrypoint # `prefect.types.entrypoint` ## Classes ### `EntrypointType` Enum representing a entrypoint type. File path entrypoints are in the format: `path/to/file.py:function_name`. 
Module path entrypoints are in the format: `path.to.module.function_name`. # names Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-types-names # `prefect.types.names` ## Functions ### `raise_on_name_alphanumeric_dashes_only` ```python raise_on_name_alphanumeric_dashes_only(value: str | None, field_name: str = 'value') -> str | None ``` ### `raise_on_name_alphanumeric_underscores_only` ```python raise_on_name_alphanumeric_underscores_only(value: str | None, field_name: str = 'value') -> str | None ``` ### `raise_on_name_alphanumeric_dashes_underscores_only` ```python raise_on_name_alphanumeric_dashes_underscores_only(value: str, field_name: str = 'value') -> str ``` ### `non_emptyish` ```python non_emptyish(value: str) -> str ``` ### `validate_uri` ```python validate_uri(value: str) -> str ``` Validate that a string is a valid URI with lowercase protocol. ### `validate_valid_asset_key` ```python validate_valid_asset_key(value: str) -> str ``` Validate asset key with character restrictions and length limit. # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-__init__ # `prefect.utilities` *This module is empty or contains only private/internal implementations.* # annotations Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-annotations # `prefect.utilities.annotations` ## Classes ### `BaseAnnotation` Base class for Prefect annotation types. Inherits from `tuple` for unpacking support in other tools. **Methods:** #### `rewrap` ```python rewrap(self, value: T) -> Self ``` #### `unwrap` ```python unwrap(self) -> T ``` ### `unmapped` Wrapper for iterables. Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split. ### `allow_failure` Wrapper for states or futures. Indicates that the upstream run for this input can be failed. Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. This annotation allows you to opt into receiving a failed input downstream. If the input is from a failed run, the attached exception will be passed to your function. ### `quote` Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state. quote will also instruct prefect to ignore introspection of the wrapped object when passed as flow or task parameter. Parameter introspection can be a significant performance hit when the object is a large collection, e.g. a large dictionary or DataFrame, and each element needs to be visited. This will disable task dependency tracking for the wrapped object, but likely will increase performance. ``` @task def my_task(df): ... @flow def my_flow(): my_task(quote(df)) ``` **Methods:** #### `unquote` ```python unquote(self) -> T ``` ### `Quote` ### `NotSet` Singleton to distinguish `None` from a value that is not provided by the user. ### `freeze` Wrapper for parameters in deployments. Indicates that this parameter should be frozen in the UI and not editable when creating flow runs from this deployment. Example: ```python @flow def my_flow(customer_id: str): # flow logic deployment = my_flow.deploy(parameters={"customer_id": freeze("customer123")}) ``` **Methods:** #### `unfreeze` ```python unfreeze(self) -> T ``` Return the unwrapped value. 
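Most of the annotations above are applied at task call sites inside a flow. `allow_failure` has no inline example above, so here is a minimal sketch; the task and flow names are illustrative, and `allow_failure` is also re-exported from the top-level `prefect` package:

```python
from prefect import allow_failure, flow, task


@task
def shaky(x: int) -> int:
    if x == 2:
        raise ValueError("boom")
    return x


@task
def summarize(value) -> str:
    # When the upstream run failed, `value` is the exception it raised.
    return "upstream failed" if isinstance(value, Exception) else f"got {value}"


@flow
def my_flow() -> str:
    upstream = shaky.submit(2)
    # Without allow_failure, summarize would never start because its input failed.
    return summarize.submit(allow_failure(upstream)).result()
```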
# asyncutils

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-asyncutils

# `prefect.utilities.asyncutils`

Utilities for interoperability with async functions and workers from various contexts.

## Functions

### `get_thread_limiter`

```python
get_thread_limiter() -> anyio.CapacityLimiter
```

### `is_async_fn`

```python
is_async_fn(func: _SyncOrAsyncCallable[P, R]) -> TypeGuard[Callable[P, Coroutine[Any, Any, Any]]]
```

Returns `True` if a function returns a coroutine.

See [https://github.com/microsoft/pyright/issues/2142](https://github.com/microsoft/pyright/issues/2142) for an example use

### `is_async_gen_fn`

```python
is_async_gen_fn(func: Callable[P, Any]) -> TypeGuard[Callable[P, AsyncGenerator[Any, Any]]]
```

Returns `True` if a function is an async generator.

### `create_task`

```python
create_task(coroutine: Coroutine[Any, Any, R]) -> asyncio.Task[R]
```

Replacement for asyncio.create\_task that will ensure that tasks aren't garbage collected before they complete. Allows for "fire and forget" behavior in which tasks can be created and the application can move on. Tasks can also be awaited normally.

See [https://docs.python.org/3/library/asyncio-task.html#asyncio.create\_task](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) for details (and essentially this implementation)

### `run_coro_as_sync`

```python
run_coro_as_sync(coroutine: Coroutine[Any, Any, R]) -> Optional[R]
```

Runs a coroutine from a synchronous context, as if it were a synchronous function.

The coroutine is scheduled to run in the "run sync" event loop, which is running in its own thread and is started the first time it is needed. This allows us to share objects like async httpx clients among all coroutines running in the loop.

If run\_sync is called from within the run\_sync loop, it will run the coroutine in a new thread, because otherwise a deadlock would occur. Note that this behavior should not appear anywhere in the Prefect codebase or in user code.

**Args:**

* `coroutine`: The coroutine to be run as a synchronous function.
* `force_new_thread`: If True, the coroutine will always be run in a new thread. Defaults to False.
* `wait_for_result`: If True, the function will wait for the coroutine to complete and return the result. If False, the function will submit the coroutine to the "run sync" event loop and return immediately, where it will eventually be run. Defaults to True.

**Returns:**

* The result of the coroutine if wait\_for\_result is True, otherwise None.

### `run_sync_in_worker_thread`

```python
run_sync_in_worker_thread(__fn: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R
```

Runs a sync function in a new worker thread so that the main thread's event loop is not blocked.

Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function.

Note that cancellation of threads will not interrupt the computation; the thread may continue running and the outcome will simply be ignored.
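As a quick illustration of the sync-to-async bridging described for `run_coro_as_sync` above, here is a minimal sketch; the coroutine and variable names are illustrative:

```python
from prefect.utilities.asyncutils import run_coro_as_sync


async def fetch_answer() -> int:
    # Any awaitable work could happen here.
    return 42


# Schedules the coroutine on Prefect's dedicated "run sync" loop and blocks
# until the result is available.
answer = run_coro_as_sync(fetch_answer())
print(answer)  # 42
```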
### `call_with_mark` ```python call_with_mark(call: Callable[..., R]) -> R ``` ### `run_async_from_worker_thread` ```python run_async_from_worker_thread(__fn: Callable[P, Awaitable[R]], *args: P.args, **kwargs: P.kwargs) -> R ``` Runs an async function in the main thread's event loop, blocking the worker thread until completion ### `run_async_in_new_loop` ```python run_async_in_new_loop(__fn: Callable[P, Awaitable[R]], *args: P.args, **kwargs: P.kwargs) -> R ``` ### `mark_as_worker_thread` ```python mark_as_worker_thread() -> None ``` ### `in_async_worker_thread` ```python in_async_worker_thread() -> bool ``` ### `in_async_main_thread` ```python in_async_main_thread() -> bool ``` ### `sync_compatible` ```python sync_compatible(async_fn: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Union[R, Coroutine[Any, Any, R]]] ``` Converts an async function into a dual async and sync function. When the returned function is called, we will attempt to determine the best way to enter the async function. * If in a thread with a running event loop, we will return the coroutine for the caller to await. This is normal async behavior. * If in a blocking worker thread with access to an event loop in another thread, we will submit the async method to the event loop. * If we cannot find an event loop, we will create a new one and run the async method then tear down the loop. Note: Type checkers will infer functions decorated with `@sync_compatible` are synchronous. If you want to use the decorated function in an async context, you will need to ignore the types and "cast" the return type to a coroutine. For example: ``` python result: Coroutine = sync_compatible(my_async_function)(arg1, arg2) # type: ignore ``` ### `asyncnullcontext` ```python asyncnullcontext(value: Optional[R] = None, *args: Any, **kwargs: Any) -> AsyncGenerator[Any, Optional[R]] ``` ### `sync` ```python sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T ``` Call an async function from a synchronous context. Block until completion. If in an asynchronous context, we will run the code in a separate loop instead of failing but a warning will be displayed since this is not recommended. ### `add_event_loop_shutdown_callback` ```python add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable[Any]]) -> None ``` Adds a callback to the given callable on event loop closure. The callable must be a coroutine function. It will be awaited when the current event loop is shutting down. Requires use of `asyncio.run()` which waits for async generator shutdown by default or explicit call of `asyncio.shutdown_asyncgens()`. If the application is entered with `asyncio.run_until_complete()` and the user calls `asyncio.close()` without the generator shutdown call, this will not trigger callbacks. asyncio does not provided *any* other way to clean up a resource when the event loop is about to close. ### `create_gather_task_group` ```python create_gather_task_group() -> GatherTaskGroup ``` Create a new task group that gathers results ### `gather` ```python gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> list[T] ``` Run calls concurrently and gather their results. Unlike `asyncio.gather` this expects to receive *callables* not *coroutines*. This matches `anyio` semantics. ## Classes ### `GatherIncomplete` Used to indicate retrieving gather results before completion ### `GatherTaskGroup` A task group that gathers results. AnyIO does not include `gather` support. 
This class extends the `TaskGroup` interface to allow simple gathering. See [https://github.com/agronholm/anyio/issues/100](https://github.com/agronholm/anyio/issues/100) This class should be instantiated with `create_gather_task_group`. **Methods:** #### `get_result` ```python get_result(self, key: UUID) -> Any ``` #### `start` ```python start(self, func: object, *args: object) -> NoReturn ``` Since `start` returns the result of `task_status.started()` but here we must return the key instead, we just won't support this method for now. #### `start_soon` ```python start_soon(self, func: Callable[[Unpack[PosArgsT]], Awaitable[Any]], *args: Unpack[PosArgsT]) -> UUID ``` ### `LazySemaphore` # callables Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-callables # `prefect.utilities.callables` Utilities for working with Python callables. ## Functions ### `get_call_parameters` ```python get_call_parameters(fn: Callable[..., Any], call_args: tuple[Any, ...], call_kwargs: dict[str, Any], apply_defaults: bool = True) -> dict[str, Any] ``` Bind a call to a function to get parameter/value mapping. Default values on the signature will be included if not overridden. If the function has a `__prefect_self__` attribute, it will be included as the first parameter. This attribute is set when Prefect decorates a bound method, so this approach allows Prefect to work with bound methods in a way that is consistent with how Python handles them (i.e. users don't have to pass the instance argument to the method) while still making the implicit self argument visible to all of Prefect's parameter machinery (such as cache key functions). Raises a ParameterBindError if the arguments/kwargs are not valid for the function ### `get_parameter_defaults` ```python get_parameter_defaults(fn: Callable[..., Any]) -> dict[str, Any] ``` Get default parameter values for a callable. ### `explode_variadic_parameter` ```python explode_variadic_parameter(fn: Callable[..., Any], parameters: dict[str, Any]) -> dict[str, Any] ``` Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. \*\*kwargs) into the top level. Example: ```python def foo(a, b, **kwargs): pass parameters = {"a": 1, "b": 2, "kwargs": {"c": 3, "d": 4}} explode_variadic_parameter(foo, parameters) # {"a": 1, "b": 2, "c": 3, "d": 4} ``` ### `collapse_variadic_parameters` ```python collapse_variadic_parameters(fn: Callable[..., Any], parameters: dict[str, Any]) -> dict[str, Any] ``` Given a parameter dictionary, move any parameters stored not present in the signature into the variadic keyword argument. Example: ```python def foo(a, b, **kwargs): pass parameters = {"a": 1, "b": 2, "c": 3, "d": 4} collapse_variadic_parameters(foo, parameters) # {"a": 1, "b": 2, "kwargs": {"c": 3, "d": 4}} ``` ### `parameters_to_args_kwargs` ```python parameters_to_args_kwargs(fn: Callable[..., Any], parameters: dict[str, Any]) -> tuple[tuple[Any, ...], dict[str, Any]] ``` Convert a `parameters` dictionary to positional and keyword arguments The function *must* have an identical signature to the original function or this will return an empty tuple and dict. ### `call_with_parameters` ```python call_with_parameters(fn: Callable[..., R], parameters: dict[str, Any]) -> R ``` Call a function with parameters extracted with `get_call_parameters` The function *must* have an identical signature to the original function or this will fail. 
If you need to send to a function with a different signature, extract the args/kwargs using `parameters_to_positional_and_keyword` directly ### `cloudpickle_wrapped_call` ```python cloudpickle_wrapped_call(__fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Callable[[], bytes] ``` Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require a wider range of pickling support. ### `parameter_docstrings` ```python parameter_docstrings(docstring: Optional[str]) -> dict[str, str] ``` Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to docstring. **Args:** * `docstring`: The function's docstring. **Returns:** * Mapping from parameter names to docstrings. ### `process_v1_params` ```python process_v1_params(param: inspect.Parameter) -> tuple[str, Any, Any] ``` ### `create_v1_schema` ```python create_v1_schema(name_: str, model_cfg: type[Any], model_fields: Optional[dict[str, Any]] = None) -> dict[str, Any] ``` ### `parameter_schema` ```python parameter_schema(fn: Callable[..., Any]) -> ParameterSchema ``` Given a function, generates an OpenAPI-compatible description of the function's arguments, including: * name * typing information * whether it is required * a default value * additional constraints (like possible enum values) **Args:** * `fn`: The function whose arguments will be serialized **Returns:** * the argument schema ### `parameter_schema_from_entrypoint` ```python parameter_schema_from_entrypoint(entrypoint: str) -> ParameterSchema ``` Generate a parameter schema from an entrypoint string. Will load the source code of the function and extract the signature and docstring to generate the schema. Useful for generating a schema for a function when instantiating the function may not be possible due to missing imports or other issues. **Args:** * `entrypoint`: A string representing the entrypoint to a function. The string should be in the format of `module.path.to.function\:do_stuff`. **Returns:** * The parameter schema for the function. ### `generate_parameter_schema` ```python generate_parameter_schema(signature: inspect.Signature, docstrings: dict[str, str]) -> ParameterSchema ``` Generate a parameter schema from a function signature and docstrings. To get a signature from a function, use `inspect.signature(fn)` or `_generate_signature_from_source(source_code, func_name)`. **Args:** * `signature`: The function signature. * `docstrings`: A dictionary mapping parameter names to docstrings. **Returns:** * The parameter schema. ### `raise_for_reserved_arguments` ```python raise_for_reserved_arguments(fn: Callable[..., Any], reserved_arguments: Iterable[str]) -> None ``` Raise a ReservedArgumentError if `fn` has any parameters that conflict with the names contained in `reserved_arguments`. ### `expand_mapping_parameters` ```python expand_mapping_parameters(func: Callable[..., Any], parameters: dict[str, Any]) -> list[dict[str, Any]] ``` Generates a list of call parameters to be used for individual calls in a mapping operation. 
**Args:** * `func`: The function to be called * `parameters`: A dictionary of parameters with iterables to be mapped over **Returns:** * A list of dictionaries to be used as parameters for each call in the mapping operation ## Classes ### `ParameterSchema` Simple data model corresponding to an OpenAPI `Schema`. **Methods:** #### `model_dump_for_openapi` ```python model_dump_for_openapi(self) -> dict[str, Any] ``` # collections Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-collections # `prefect.utilities.collections` Utilities for extensions of and operations on Python collections. ## Functions ### `dict_to_flatdict` ```python dict_to_flatdict(dct: NestedDict[KT, VT]) -> dict[tuple[KT, ...], VT] ``` Converts a (nested) dictionary to a flattened representation. Each key of the flat dict will be a CompoundKey tuple containing the "chain of keys" for the corresponding value. **Args:** * `dct`: The dictionary to flatten **Returns:** * A flattened dict of the same type as dct ### `flatdict_to_dict` ```python flatdict_to_dict(dct: dict[tuple[KT, ...], VT]) -> NestedDict[KT, VT] ``` Converts a flattened dictionary back to a nested dictionary. **Args:** * `dct`: The dictionary to be nested. Each key should be a tuple of keys as generated by `dict_to_flatdict` **Returns:** * A nested dict of the same type as dct ### `isiterable` ```python isiterable(obj: Any) -> bool ``` Return a boolean indicating if an object is iterable. Excludes types that are iterable but typically used as singletons: * str * bytes * IO objects ### `ensure_iterable` ```python ensure_iterable(obj: Union[T, Iterable[T]]) -> Collection[T] ``` ### `listrepr` ```python listrepr(objs: Iterable[Any], sep: str = ' ') -> str ``` ### `extract_instances` ```python extract_instances(objects: Iterable[Any], types: Union[type[T], tuple[type[T], ...]] = object) -> Union[list[T], dict[type[T], list[T]]] ``` Extract objects of the given types from an iterable of objects and return the matching instances **Args:** * `objects`: An iterable of objects * `types`: A type or tuple of types to extract, defaults to all objects **Returns:** * If a single type is given: a list of instances of that type * If a tuple of types is given: a mapping of type to a list of instances ### `batched_iterable` ```python batched_iterable(iterable: Iterable[T], size: int) -> Generator[tuple[T, ...], None, None] ``` Yield batches of a certain size from an iterable **Args:** * `iterable`: An iterable * `size`: The batch size to return ### `visit_collection` ```python visit_collection(expr: Any, visit_fn: Union[Callable[[Any, dict[str, VT]], Any], Callable[[Any], Any]]) -> Optional[Any] ``` Visits and potentially transforms every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, `visit_fn` will be called with the element. The return value of `visit_fn` can be used to alter the element if `return_data` is set to `True`. Note: * When `return_data` is `True`, a copy of each collection is created only if `visit_fn` modifies an element within that collection. This approach minimizes performance penalties by avoiding unnecessary copying. * When `return_data` is `False`, no copies are created, and only side effects from `visit_fn` are applied. This mode is faster and should be used when no transformation of the collection is required, because it never has to copy any data.
Supported types: * List (including iterators) * Tuple * Set * Dict (note: keys are also visited recursively) * Dataclass * Pydantic model * Prefect annotations Note that visit\_collection will not consume generators or async generators, as it would prevent the caller from iterating over them. **Args:** * `expr`: A Python object or expression. * `visit_fn`: A function that will be applied to every non-collection element of `expr`. The function can accept one or two arguments. If two arguments are accepted, the second argument will be the context dictionary. * `return_data`: If `True`, a copy of `expr` containing data modified by `visit_fn` will be returned. This is slower than `return_data=False` (the default). * `max_depth`: Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer `N`, visitation will only descend to `N` layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited. * `context`: An optional dictionary. If passed, the context will be sent to each call to the `visit_fn`. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available "up" a level from a given expression. The context will be automatically populated with an 'annotation' key when visiting collections within a `BaseAnnotation` type. This requires the caller to pass `context={}` and will not be activated by default. * `remove_annotations`: If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited. * `_seen`: A set of object ids that have already been visited. This prevents infinite recursion when visiting recursive data structures. **Returns:** * The modified collection if `return_data` is `True`, otherwise `None`. ### `remove_nested_keys` ```python remove_nested_keys(keys_to_remove: list[HashableT], obj: Union[NestedDict[HashableT, VT], Any]) -> Union[NestedDict[HashableT, VT], Any] ``` Recurses through a dictionary and returns a copy without any keys that match an entry in `keys_to_remove`. Returns `obj` unchanged if it is not a dictionary. **Args:** * `keys_to_remove`: A list of keys to remove from `obj` * `obj`: The object to remove keys from. **Returns:** * `obj` without keys matching an entry in `keys_to_remove` if `obj` is a dictionary. `obj` if `obj` is not a dictionary. ### `distinct` ```python distinct(iterable: Iterable[Union[T, HashableT]], key: Optional[Callable[[T], Hashable]] = None) -> Iterator[Union[T, HashableT]] ``` ### `get_from_dict` ```python get_from_dict(dct: NestedDict[str, VT], keys: Union[str, list[str]], default: Optional[R] = None) -> Union[VT, R, None] ``` Fetch a value from a nested dictionary or list using a sequence of keys. This function allows you to fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value. **Args:** * `dct`: The nested dictionary or list from which to fetch the value. * `keys`: The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets. * `default`: The default value to return if the requested key path does not exist. Defaults to None.
**Returns:** * The fetched value if the key exists, or the default value if it does not. Examples: ```python get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]') # 2 get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1]) # 2 get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default') # 'default' ``` ### `set_in_dict` ```python set_in_dict(dct: NestedDict[str, VT], keys: Union[str, list[str]], value: VT) -> None ``` Sets a value in a nested dictionary using a sequence of keys. This function allows to set a value in a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function will create it as a new dictionary. **Args:** * `dct`: The dictionary to set the value in. * `keys`: The sequence of keys to use for access. Can be a dot-separated string or a list of keys. * `value`: The value to set in the dictionary. **Returns:** * The modified dictionary with the value set at the specified key path. **Raises:** * `KeyError`: If the key path exists and is not a dictionary. ### `deep_merge` ```python deep_merge(dct: NestedDict[str, VT1], merge: NestedDict[str, VT2]) -> NestedDict[str, Union[VT1, VT2]] ``` Recursively merges `merge` into `dct`. **Args:** * `dct`: The dictionary to merge into. * `merge`: The dictionary to merge from. **Returns:** * A new dictionary with the merged contents. ### `deep_merge_dicts` ```python deep_merge_dicts(*dicts: NestedDict[str, Any]) -> NestedDict[str, Any] ``` Recursively merges multiple dictionaries. **Args:** * `dicts`: The dictionaries to merge. **Returns:** * A new dictionary with the merged contents. ## Classes ### `AutoEnum` An enum class that automatically generates value from variable names. This guards against common errors where variable names are updated but values are not. In addition, because AutoEnums inherit from `str`, they are automatically JSON-serializable. See [https://docs.python.org/3/library/enum.html#using-automatic-values](https://docs.python.org/3/library/enum.html#using-automatic-values) **Methods:** #### `auto` ```python auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StopVisiting` A special exception used to stop recursive visits in `visit_collection`. When raised, the expression is returned without modification and recursive visits in that path will end. # compat Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-compat # `prefect.utilities.compat` Utilities for Python version compatibility # context Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-context # `prefect.utilities.context` ## Functions ### `temporary_context` ```python temporary_context(context: Context) -> Generator[None, Any, None] ``` ### `get_task_run_id` ```python get_task_run_id() -> Optional[UUID] ``` ### `get_flow_run_id` ```python get_flow_run_id() -> Optional[UUID] ``` ### `get_task_and_flow_run_ids` ```python get_task_and_flow_run_ids() -> tuple[Optional[UUID], Optional[UUID]] ``` Get the task run and flow run ids from the context, if available. **Returns:** * tuple\[Optional\[UUID], Optional\[UUID]]: a tuple of the task run id and flow run id # dispatch Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-dispatch # `prefect.utilities.dispatch` Provides methods for performing dynamic dispatch for actions on base type to one of its subtypes. 
Example: ```python @register_base_type class Base: @classmethod def __dispatch_key__(cls): return cls.__name__.lower() class Foo(Base): ... key = get_dispatch_key(Foo) # 'foo' lookup_type(Base, key) # Foo ``` ## Functions ### `get_registry_for_type` ```python get_registry_for_type(cls: T) -> Optional[dict[str, T]] ``` Get the first matching registry for a class or any of its base classes. If not found, `None` is returned. ### `get_dispatch_key` ```python get_dispatch_key(cls_or_instance: Any, allow_missing: bool = False) -> Optional[str] ``` Retrieve the unique dispatch key for a class type or instance. This key is defined at the `__dispatch_key__` attribute. If it is a callable, it will be resolved. If `allow_missing` is `False`, an exception will be raised if the attribute is not defined or the key is null. If `True`, `None` will be returned in these cases. ### `register_base_type` ```python register_base_type(cls: T) -> T ``` Register a base type allowing child types to be registered for dispatch with `register_type`. The base class may or may not define a `__dispatch_key__` to allow lookups of the base type. ### `register_type` ```python register_type(cls: T) -> T ``` Register a type for lookup with dispatch. The type or one of its parents must define a unique `__dispatch_key__`. One of the classes base types must be registered using `register_base_type`. ### `lookup_type` ```python lookup_type(cls: T, dispatch_key: str) -> T ``` Look up a dispatch key in the type registry for the given class. # dockerutils Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-dockerutils # `prefect.utilities.dockerutils` ## Functions ### `python_version_minor` ```python python_version_minor() -> str ``` ### `python_version_micro` ```python python_version_micro() -> str ``` ### `get_prefect_image_name` ```python get_prefect_image_name(prefect_version: Optional[str] = None, python_version: Optional[str] = None, flavor: Optional[str] = None) -> str ``` Get the Prefect image name matching the current Prefect and Python versions. **Args:** * `prefect_version`: An optional override for the Prefect version. * `python_version`: An optional override for the Python version; must be at the minor level e.g. '3.9'. 
* `flavor`: An optional alternative image flavor to build, like 'conda' ### `silence_docker_warnings` ```python silence_docker_warnings() -> Generator[None, None, None] ``` ### `docker_client` ```python docker_client() -> Generator['DockerClient', None, None] ``` Get the environmentally-configured Docker client ### `build_image` ```python build_image(context: Path, dockerfile: str = 'Dockerfile', tag: Optional[str] = None, pull: bool = False, platform: Optional[str] = None, stream_progress_to: Optional[TextIO] = None, **kwargs: Any) -> str ``` Builds a Docker image, returning the image ID **Args:** * `context`: the root directory for the Docker build context * `dockerfile`: the path to the Dockerfile, relative to the context * `tag`: the tag to give this image * `pull`: True to pull the base image during the build * `stream_progress_to`: an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker **Returns:** * The image ID ### `push_image` ```python push_image(image_id: str, registry_url: str, name: str, tag: Optional[str] = None, stream_progress_to: Optional[TextIO] = None) -> str ``` Pushes a local image to a Docker registry, returning the registry-qualified tag for that image This assumes that the environment's Docker daemon is already authenticated to the given registry, and currently makes no attempt to authenticate. **Args:** * `image_id`: a Docker image ID * `registry_url`: the URL of a Docker registry * `name`: the name of this image * `tag`: the tag to give this image (defaults to a short representation of the image's ID) * `stream_progress_to`: an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker **Returns:** * A registry-qualified tag, like my-registry.example.com/my-image:abcdefg ### `to_run_command` ```python to_run_command(command: list[str]) -> str ``` Convert a process-style list of command arguments to a single Dockerfile RUN instruction. ### `parse_image_tag` ```python parse_image_tag(name: str) -> tuple[str, Optional[str]] ``` Parse Docker Image String * If a tag or digest exists, this function parses and returns the image registry and tag/digest, separately as a tuple. * Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest') * Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest') * Example 3: 'prefecthq/prefect\@sha256:abc123' -> ('prefecthq/prefect', 'sha256:abc123') * Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0 * Image building tools typically enforce this standard **Args:** * `name`: Name of Docker Image ### `split_repository_path` ```python split_repository_path(repository_path: str) -> tuple[Optional[str], str] ``` Splits a Docker repository path into its namespace and repository components. **Args:** * `repository_path`: The Docker repository path to split. **Returns:** * Tuple\[Optional\[str], str]: A tuple containing the namespace and repository components. * namespace (Optional\[str]): The Docker namespace, combining the registry and organization. None if not present. * repository (Optionals\[str]): The repository name. ### `format_outlier_version_name` ```python format_outlier_version_name(version: str) -> str ``` Formats outlier docker version names to pass `packaging.version.parse` validation * Current cases are simple, but creates stub for more complicated formatting if eventually needed. 
* Example outlier versions that throw a parsing exception: * "20.10.0-ce" (variant of community edition label) * "20.10.0-ee" (variant of enterprise edition label) **Args:** * `version`: raw docker version value **Returns:** * value that can pass `packaging.version.parse` validation ### `generate_default_dockerfile` ```python generate_default_dockerfile(context: Optional[Path] = None) ``` Generates a default Dockerfile used for deploying flows. The Dockerfile is written to a temporary file and yielded. The temporary file is removed after the context manager exits. **Args:** * `- context`: The context to use for the Dockerfile. Defaults to the current working directory. ## Classes ### `BuildError` Raised when a Docker build fails ### `ImageBuilder` An interface for preparing Docker build contexts and building images **Methods:** #### `add_line` ```python add_line(self, line: str) -> None ``` Add a line to this image's Dockerfile #### `add_lines` ```python add_lines(self, lines: Iterable[str]) -> None ``` Add lines to this image's Dockerfile #### `assert_has_file` ```python assert_has_file(self, source: Path, container_path: PurePosixPath) -> None ``` Asserts that the given file or directory will be copied into the container at the given path #### `assert_has_line` ```python assert_has_line(self, line: str) -> None ``` Asserts that the given line is in the Dockerfile #### `assert_line_absent` ```python assert_line_absent(self, line: str) -> None ``` Asserts that the given line is absent from the Dockerfile #### `assert_line_after` ```python assert_line_after(self, second: str, first: str) -> None ``` Asserts that the second line appears after the first line #### `assert_line_before` ```python assert_line_before(self, first: str, second: str) -> None ``` Asserts that the first line appears before the second line #### `build` ```python build(self, pull: bool = False, stream_progress_to: Optional[TextIO] = None) -> str ``` Build the Docker image from the current state of the ImageBuilder **Args:** * `pull`: True to pull the base image during the build * `stream_progress_to`: an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker **Returns:** * The image ID #### `copy` ```python copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]) -> None ``` Copy a file to this image #### `write_text` ```python write_text(self, text: str, destination: Union[str, PurePosixPath]) -> None ``` ### `PushError` Raised when a Docker image push fails # engine Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-engine # `prefect.utilities.engine` ## Functions ### `collect_task_run_inputs` ```python collect_task_run_inputs(expr: Any, max_depth: int = -1) -> set[Union[TaskRunResult, FlowRunResult]] ``` This function recurses through an expression to generate a set of any discernible task run inputs it finds in the data structure. It produces a set of all inputs found. Examples: ```python task_inputs = { k: await collect_task_run_inputs(v) for k, v in parameters.items() } ``` ### `collect_task_run_inputs_sync` ```python collect_task_run_inputs_sync(expr: Any, future_cls: Any = PrefectFuture, max_depth: int = -1) -> set[Union[TaskRunResult, FlowRunResult]] ``` This function recurses through an expression to generate a set of any discernible task run inputs it finds in the data structure. It produces a set of all inputs found. 
**Examples:** ```python task_inputs = { k: collect_task_run_inputs_sync(v) for k, v in parameters.items() } ``` ### `capture_sigterm` ```python capture_sigterm() -> Generator[None, Any, None] ``` ### `resolve_inputs` ```python resolve_inputs(parameters: dict[str, Any], return_data: bool = True, max_depth: int = -1) -> dict[str, Any] ``` Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into data. **Returns:** * A copy of the parameters with resolved data **Raises:** * `UpstreamTaskError`: If any of the upstream states are not `COMPLETED` ### `propose_state` ```python propose_state(client: 'PrefectClient', state: State[Any], flow_run_id: UUID, force: bool = False) -> State[Any] ``` Propose a new state for a flow run, invoking Prefect orchestration logic. If the proposed state is accepted, the provided `state` will be augmented with details and returned. If the proposed state is rejected, a new state returned by the Prefect API will be returned. If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again. If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised. **Args:** * `state`: a new state for a flow run * `flow_run_id`: an optional flow run id, used when proposing flow run states **Returns:** * a State model representation of the flow run state **Raises:** * `prefect.exceptions.Abort`: if an ABORT instruction is received from the Prefect API ### `propose_state_sync` ```python propose_state_sync(client: 'SyncPrefectClient', state: State[Any], flow_run_id: UUID, force: bool = False) -> State[Any] ``` Propose a new state for a flow run, invoking Prefect orchestration logic. If the proposed state is accepted, the provided `state` will be augmented with details and returned. If the proposed state is rejected, a new state returned by the Prefect API will be returned. If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again. If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised. **Args:** * `state`: a new state for the flow run * `flow_run_id`: an optional flow run id, used when proposing flow run states **Returns:** * a State model representation of the flow run state **Raises:** * `ValueError`: if flow\_run\_id is not provided * `prefect.exceptions.Abort`: if an ABORT instruction is received from the Prefect API ### `get_state_for_result` ```python get_state_for_result(obj: Any) -> Optional[tuple[State, RunType]] ``` Get the state related to a result object. `link_state_to_result` must have been called first. ### `link_state_to_flow_run_result` ```python link_state_to_flow_run_result(state: State, result: Any) -> None ``` Creates a link between a state and flow run result ### `link_state_to_task_run_result` ```python link_state_to_task_run_result(state: State, result: Any) -> None ``` Creates a link between a state and task run result ### `link_state_to_result` ```python link_state_to_result(state: State, result: Any, run_type: RunType) -> None ``` Caches a link between a state and a result and its components using the `id` of the components to map to the state. The cache is persisted to the current flow run context since task relationships are limited to within a flow run. This allows dependency tracking to occur when results are passed around. Note: Because `id` is used, we cannot cache links between singleton objects. 
We only cache the relationship between components 1-layer deep. Example: Given the result \[1, \["a","b"], ("c",)], the following elements will be mapped to the state: * \[1, \["a","b"], ("c",)] * \["a","b"] * ("c",) Note: the int `1` will not be mapped to the state because it is a singleton. Other Notes: We do not hash the result because: * If changes are made to the object in the flow between task calls, we can still track that they are related. * Hashing can be expensive. * Not all objects are hashable. We do not set an attribute, e.g. `__prefect_state__`, on the result because: * Mutating users' objects is dangerous. * Unrelated equality comparisons can break unexpectedly. * The field can be preserved on copy. * We cannot set this attribute on Python built-ins. ### `should_log_prints` ```python should_log_prints(flow_or_task: Union['Flow[..., Any]', 'Task[..., Any]']) -> bool ``` ### `check_api_reachable` ```python check_api_reachable(client: 'PrefectClient', fail_message: str) -> None ``` ### `emit_task_run_state_change_event` ```python emit_task_run_state_change_event(task_run: TaskRun, initial_state: Optional[State[Any]], validated_state: State[Any], follows: Optional[Event] = None) -> Optional[Event] ``` ### `resolve_to_final_result` ```python resolve_to_final_result(expr: Any, context: dict[str, Any]) -> Any ``` Resolve any `PrefectFuture` or `State` types nested in parameters into data. Designed to be used with `visit_collection`. ### `resolve_inputs_sync` ```python resolve_inputs_sync(parameters: dict[str, Any], return_data: bool = True, max_depth: int = -1) -> dict[str, Any] ``` Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into data. **Returns:** * A copy of the parameters with resolved data **Raises:** * `UpstreamTaskError`: If any of the upstream states are not `COMPLETED` # filesystem Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-filesystem # `prefect.utilities.filesystem` Utilities for working with file systems ## Functions ### `create_default_ignore_file` ```python create_default_ignore_file(path: str) -> bool ``` Creates a default ignore file in the provided path if one does not already exist; returns a boolean specifying whether a file was created. ### `filter_files` ```python filter_files(root: str = '.', ignore_patterns: Optional[Iterable[AnyStr]] = None, include_dirs: bool = True) -> set[str] ``` This function accepts a root directory path and a list of file patterns to ignore, and returns the set of files under that root, excluding those that should be ignored. The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore). ### `tmpchdir` ```python tmpchdir(path: str) ``` Changes the current working directory for the duration of the context, with special handling for UNC paths on Windows. ### `filename` ```python filename(path: str) -> str ``` Extract the file name from a path with remote file system support ### `is_local_path` ```python is_local_path(path: Union[str, pathlib.Path, Any]) -> bool ``` Check if the given path points to a local or remote file system ### `to_display_path` ```python to_display_path(path: Union[pathlib.Path, str], relative_to: Optional[Union[pathlib.Path, str]] = None) -> str ``` Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter.
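Below is a minimal sketch showing how a few of the filesystem helpers above can be combined; the project directory and ignore patterns are illustrative assumptions, not part of Prefect.

```python
from pathlib import Path

from prefect.utilities.filesystem import filter_files, tmpchdir, to_display_path

project_dir = Path.cwd()  # assume a project directory containing some files

with tmpchdir(str(project_dir)):
    # Collect files under the current directory, skipping .gitignore-style patterns
    files = filter_files(root=".", ignore_patterns=["*.pyc", "__pycache__/"], include_dirs=False)

for path in sorted(files):
    # Print whichever of the absolute or relative representation is shorter
    print(to_display_path(path, relative_to=project_dir))
```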
### `relative_path_to_current_platform` ```python relative_path_to_current_platform(path_str: str) -> Path ``` Converts a relative path generated on any platform to a relative path for the current platform. ### `get_open_file_limit` ```python get_open_file_limit() -> int ``` Get the maximum number of open files allowed for the current process # generics Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-generics # `prefect.utilities.generics` ## Functions ### `validate_list` ```python validate_list(model: type[T], input: Any) -> list[T] ``` # hashing Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-hashing # `prefect.utilities.hashing` ## Functions ### `stable_hash` ```python stable_hash(*args: Union[str, bytes]) -> str ``` Given some arguments, produces a stable 64-bit hash of their contents. Supports bytes and strings. Strings will be UTF-8 encoded. **Args:** * `*args`: Items to include in the hash. * `hash_algo`: Hash algorithm from hashlib to use. **Returns:** * A hex hash. ### `file_hash` ```python file_hash(path: str, hash_algo: Callable[..., Any] = _md5) -> str ``` Given a path to a file, produces a stable hash of the file contents. **Args:** * `path`: the path to a file * `hash_algo`: Hash algorithm from hashlib to use. **Returns:** * a hash of the file contents ### `hash_objects` ```python hash_objects(*args: Any, **kwargs: Any) -> Optional[str] ``` Attempt to hash objects by dumping to JSON or serializing with cloudpickle. **Args:** * `*args`: Positional arguments to hash * `hash_algo`: Hash algorithm to use * `raise_on_failure`: If True, raise exceptions instead of returning None * `**kwargs`: Keyword arguments to hash **Returns:** * A hash string or None if hashing failed **Raises:** * `HashError`: If objects cannot be hashed and raise\_on\_failure is True # importtools Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-importtools # `prefect.utilities.importtools` ## Functions ### `to_qualified_name` ```python to_qualified_name(obj: Any) -> str ``` Given an object, returns its fully-qualified name: a string that represents its Python import path. **Args:** * `obj`: an importable Python object **Returns:** * the qualified name ### `from_qualified_name` ```python from_qualified_name(name: str) -> Any ``` Import an object given a fully-qualified name. **Args:** * `name`: The fully-qualified name of the object to import. **Returns:** * the imported object **Examples:** ```python obj = from_qualified_name("random.randint") import random obj == random.randint # True ``` ### `load_script_as_module` ```python load_script_as_module(path: str) -> ModuleType ``` Execute a script at the given path. Sets the module name to a unique identifier to ensure thread safety. Uses a lock to safely modify sys.path for relative imports. If an exception occurs during execution of the script, a `prefect.exceptions.ScriptError` is created to wrap the exception and raised. ### `load_module` ```python load_module(module_name: str) -> ModuleType ``` Import a module with support for relative imports within the module. ### `import_object` ```python import_object(import_path: str) -> Any ``` Load an object from an import path. Import paths can be formatted as one of: * module.object * module:object * /path/to/script.py:object * module:object.method * /path/to/script.py:object.method This function is not thread safe as it modifies the 'sys' module during execution. 
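As a rough illustration of the import-path formats accepted by `import_object`, the sketch below loads standard-library objects; the commented-out script path is hypothetical.

```python
from prefect.utilities.importtools import from_qualified_name, import_object

# Dotted-path form: resolve "random.randint" to the callable it names
randint = from_qualified_name("random.randint")

# "module:object" form accepted by import_object
choice = import_object("random:choice")

print(randint(1, 6), choice(["a", "b", "c"]))

# Script form (hypothetical path): import_object("/path/to/my_flows.py:etl")
```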
### `lazy_import` ```python lazy_import(name: str, error_on_import: bool = False, help_message: Optional[str] = None) -> ModuleType ``` Create a lazily-imported module to use in place of the module of the given name. Use this to retain module-level imports for libraries that we don't want to actually import until they are needed. NOTE: Lazy-loading a subpackage can cause the subpackage to be imported twice if another non-lazy import also imports the subpackage. For example, using both `lazy_import("docker.errors")` and `import docker.errors` in the same codebase will import `docker.errors` twice and can lead to unexpected behavior, e.g. type check failures and import-time side effects running twice. Adapted from the [Python documentation][1] and [lazy\_loader][2] [1]: https://docs.python.org/3/library/importlib.html#implementing-lazy-imports [2]: https://github.com/scientific-python/lazy_loader ### `safe_load_namespace` ```python safe_load_namespace(source_code: str, filepath: Optional[str] = None) -> dict[str, Any] ``` Safely load a namespace from source code, optionally handling relative imports. If a `filepath` is provided, `sys.path` is modified to support relative imports. Changes to `sys.path` are reverted after completion, but this function is not thread safe and use of it in threaded contexts may result in undesirable behavior. **Args:** * `source_code`: The source code to load * `filepath`: Optional file path of the source code. If provided, enables relative imports. **Returns:** * The namespace loaded from the source code. ## Classes ### `DelayedImportErrorModule` A fake module returned by `lazy_import` when the module cannot be found. When any of the module's attributes are accessed, we will throw a `ModuleNotFoundError`. Adapted from [lazy\_loader][1] [1]: https://github.com/scientific-python/lazy_loader ### `AliasedModuleDefinition` A definition for the `AliasedModuleFinder`. **Args:** * `alias`: The import name to create * `real`: The import name of the module to reference for the alias * `callback`: A function to call when the alias module is loaded ### `AliasedModuleFinder` **Methods:** #### `find_spec` ```python find_spec(self, fullname: str, path: Optional[Sequence[str]] = None, target: Optional[ModuleType] = None) -> Optional[ModuleSpec] ``` The fullname is the imported path, e.g. "foo.bar". If there is an alias "phi" for "foo" then on import of "phi.bar" we will find the spec for "foo.bar" and create a new spec for "phi.bar" that points to "foo.bar". ### `AliasedModuleLoader` **Methods:** #### `exec_module` ```python exec_module(self, module: ModuleType) -> None ``` # math Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-math # `prefect.utilities.math` ## Functions ### `poisson_interval` ```python poisson_interval(average_interval: float, lower: float = 0, upper: float = 1) -> float ``` Generates an "inter-arrival time" for a Poisson process. Draws a random variable from an exponential distribution using the inverse-CDF method. Can optionally be passed a lower and upper bound between (0, 1] to clamp the potential output values. ### `exponential_cdf` ```python exponential_cdf(x: float, average_interval: float) -> float ``` ### `lower_clamp_multiple` ```python lower_clamp_multiple(k: float) -> float ``` Computes a lower clamp multiple that can be used to bound a random variate drawn from an exponential distribution. 
Given an upper clamp multiple `k` (and corresponding upper bound k \* average\_interval), this function computes a lower clamp multiple `c` (corresponding to a lower bound c \* average\_interval) where the probability mass between the lower bound and the median is equal to the probability mass between the median and the upper bound. ### `clamped_poisson_interval` ```python clamped_poisson_interval(average_interval: float, clamping_factor: float = 0.3) -> float ``` Bounds Poisson "inter-arrival times" to a range defined by the clamping factor. The upper bound for this random variate is: average\_interval \* (1 + clamping\_factor). A lower bound is picked so that the average interval remains approximately fixed. ### `bounded_poisson_interval` ```python bounded_poisson_interval(lower_bound: float, upper_bound: float) -> float ``` Bounds Poisson "inter-arrival times" to a range. Unlike `clamped_poisson_interval` this does not take a target average interval. Instead, the interval is predetermined and the average is calculated as their midpoint. This allows Poisson intervals to be used in cases where a lower bound must be enforced. # names Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-names # `prefect.utilities.names` ## Functions ### `generate_slug` ```python generate_slug(n_words: int) -> str ``` Generates a random slug. **Args:** * `- n_words`: the number of words in the slug ### `obfuscate` ```python obfuscate(s: Any, show_tail: bool = False) -> str ``` Obfuscates any data type's string representation. See `obfuscate_string`. ### `obfuscate_string` ```python obfuscate_string(s: str, show_tail: bool = False) -> str ``` Obfuscates a string by returning a new string of 8 characters. If the input string is longer than 10 characters and show\_tail is True, then up to 4 of its final characters will become final characters of the obfuscated string; all other characters are "\*". "abc" -> "********" "abcdefgh" -> "********" "abcdefghijk" -> "\*\*\*\*\*\*\*k" "abcdefghijklmnopqrs" -> "\*\*\*\*pqrs" # processutils Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-processutils # `prefect.utilities.processutils` ## Functions ### `open_process` ```python open_process(command: list[str], **kwargs: Any) -> AsyncGenerator[anyio.abc.Process, Any] ``` Like `anyio.open_process` but with: * Support for Windows command joining * Termination of the process on exception during yield * Forced cleanup of process resources during cancellation ### `run_process` ```python run_process(command: list[str], **kwargs: Any) -> anyio.abc.Process ``` Like `anyio.run_process` but with: * Use of our `open_process` utility to ensure resources are cleaned up * Simple `stream_output` support to connect the subprocess to the parent stdout/err * Support for submission with `TaskGroup.start` marking as 'started' after the process has been created. When used, the PID is returned to the task status. ### `consume_process_output` ```python consume_process_output(process: anyio.abc.Process, stdout_sink: Optional[TextSink[str]] = None, stderr_sink: Optional[TextSink[str]] = None) -> None ``` ### `stream_text` ```python stream_text(source: TextReceiveStream, *sinks: Optional[TextSink[str]]) -> None ``` ### `forward_signal_handler` ```python forward_signal_handler(pid: int, signum: int, *signums: int) -> None ``` Forward subsequent signum events (e.g. interrupts) to respective signums. 
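A minimal sketch of `run_process`; the command is illustrative, and `stream_output` is the option described above for forwarding subprocess output to the parent's stdout/stderr.

```python
import anyio

from prefect.utilities.processutils import run_process


async def main() -> None:
    # Run a short-lived subprocess and forward its output to this process
    process = await run_process(["echo", "hello from a subprocess"], stream_output=True)
    print("exit code:", process.returncode)


anyio.run(main)
```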
### `setup_signal_handlers_server` ```python setup_signal_handlers_server(pid: int, process_name: str, print_fn: PrintFn) -> None ``` Handle interrupts of the server gracefully. ### `setup_signal_handlers_agent` ```python setup_signal_handlers_agent(pid: int, process_name: str, print_fn: PrintFn) -> None ``` Handle interrupts of the agent gracefully. ### `setup_signal_handlers_worker` ```python setup_signal_handlers_worker(pid: int, process_name: str, print_fn: PrintFn) -> None ``` Handle interrupts of workers gracefully. ### `get_sys_executable` ```python get_sys_executable() -> str ``` # pydantic Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-pydantic # `prefect.utilities.pydantic` ## Functions ### `add_cloudpickle_reduction` ```python add_cloudpickle_reduction(__model_cls: Optional[type[M]] = None, **kwargs: Any) -> Union[type[M], Callable[[type[M]], type[M]]] ``` Adds a `__reducer__` to the given class that ensures it is cloudpickle compatible. This is a workaround for issues with cloudpickle when using cythonized Pydantic, which throws exceptions when attempting to pickle a class that has "compiled" validator methods dynamically attached to it. We cannot define this utility in the model class itself because the class is the type that contains unserializable methods. Any model using some features of Pydantic (e.g. `Path` validation) with a Cython compiled Pydantic installation may encounter pickling issues. See related issue at [https://github.com/cloudpipe/cloudpickle/issues/408](https://github.com/cloudpipe/cloudpickle/issues/408) ### `get_class_fields_only` ```python get_class_fields_only(model: type[BaseModel]) -> set[str] ``` Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included. ### `add_type_dispatch` ```python add_type_dispatch(model_cls: type[M]) -> type[M] ``` Extend a Pydantic model to add a 'type' field that is used as a discriminator to dynamically determine the subtype when deserializing models. This allows automatic resolution to subtypes of the decorated model. If a type field already exists, it should be a string literal field that has a constant value for each subclass. The default value of this field will be used as the dispatch key. If a type field does not exist, one will be added. In this case, the value of the field will be set to the value of the `__dispatch_key__`. The base class should define a `__dispatch_key__` class method that is used to determine the unique key for each subclass. Alternatively, each subclass can define the `__dispatch_key__` as a string literal. The base class must not define a 'type' field. If it is not desirable to add a field to the model and the dispatch key can be tracked separately, the lower level utilities in `prefect.utilities.dispatch` should be used directly. ### `custom_pydantic_encoder` ```python custom_pydantic_encoder(type_encoders: dict[Any, Callable[[type[Any]], Any]], obj: Any) -> Any ``` ### `parse_obj_as` ```python parse_obj_as(type_: type[T], data: Any, mode: Literal['python', 'json', 'strings'] = 'python') -> T ``` Parse a given data structure as a Pydantic model via `TypeAdapter`. Read more about `TypeAdapter` [here](https://docs.pydantic.dev/latest/concepts/type_adapter/). **Args:** * `type_`: The type to parse the data as. * `data`: The data to be parsed. * `mode`: The mode to use for parsing, either `python`, `json`, or `strings`.
Defaults to `python`, where `data` should be a Python object (e.g. `dict`). **Returns:** * The parsed `data` as the given `type_`. ### `handle_secret_render` ```python handle_secret_render(value: object, context: dict[str, Any]) -> object ``` ## Classes ### `PartialModel` A utility for creating a Pydantic model in several steps. Fields may be set at initialization, via attribute assignment, or at finalization when the concrete model is returned. Pydantic validation does not occur until finalization. Each field can only be set once and a `ValueError` will be raised on assignment if a field already has a value. **Methods:** #### `finalize` ```python finalize(self, **kwargs: Any) -> M ``` #### `raise_if_already_set` ```python raise_if_already_set(self, name: str) -> None ``` #### `raise_if_not_in_model` ```python raise_if_not_in_model(self, name: str) -> None ``` # render_swagger Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-render_swagger # `prefect.utilities.render_swagger` ## Functions ### `swagger_lib` ```python swagger_lib(config: MkDocsConfig) -> dict[str, Any] ``` Provides the actual swagger library used ## Classes ### `SwaggerPlugin` **Methods:** #### `on_page_markdown` ```python on_page_markdown() -> Optional[str] ``` # __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-schema_tools-__init__ # `prefect.utilities.schema_tools` *This module is empty or contains only private/internal implementations.* # hydration Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-schema_tools-hydration # `prefect.utilities.schema_tools.hydration` ## Functions ### `handler` ```python handler(kind: PrefectKind) -> Callable[[Handler], Handler] ``` ### `call_handler` ```python call_handler(kind: PrefectKind, obj: dict[str, Any], ctx: HydrationContext) -> Any ``` ### `null_handler` ```python null_handler(obj: dict[str, Any], ctx: HydrationContext) ``` ### `json_handler` ```python json_handler(obj: dict[str, Any], ctx: HydrationContext) ``` ### `jinja_handler` ```python jinja_handler(obj: dict[str, Any], ctx: HydrationContext) -> Any ``` ### `workspace_variable_handler` ```python workspace_variable_handler(obj: dict[str, Any], ctx: HydrationContext) -> Any ``` ### `hydrate` ```python hydrate(obj: dict[str, Any], ctx: Optional[HydrationContext] = None) -> dict[str, Any] ``` ## Classes ### `HydrationContext` **Methods:** #### `build` ```python build(cls, session: AsyncSession, raise_on_error: bool = False, render_jinja: bool = False, render_workspace_variables: bool = False) -> Self ``` ### `Placeholder` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` ### `RemoveValue` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` ### `HydrationError` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` #### `is_error` ```python is_error(self) -> bool ``` #### `message` ```python message(self) -> str ``` ### `KeyNotFound` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` #### `key` ```python key(self) -> str ``` #### `message` ```python message(self) -> str ``` #### `message` ```python message(self) -> str ``` ### `ValueNotFound` **Methods:** #### `key` ```python key(self) -> str ``` #### `key` ```python key(self) -> str ``` #### `message` ```python message(self) -> str ``` ### `TemplateNotFound` **Methods:** #### `key` ```python key(self) -> str ``` #### `key` ```python key(self) -> str ``` #### `message` ```python message(self) -> str ``` ### `VariableNameNotFound` **Methods:** #### `key` 
```python key(self) -> str ``` #### `key` ```python key(self) -> str ``` #### `message` ```python message(self) -> str ``` ### `InvalidJSON` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` #### `message` ```python message(self) -> str ``` #### `message` ```python message(self) -> str ``` ### `InvalidJinja` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` #### `message` ```python message(self) -> str ``` #### `message` ```python message(self) -> str ``` ### `WorkspaceVariableNotFound` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` #### `message` ```python message(self) -> str ``` #### `message` ```python message(self) -> str ``` #### `variable_name` ```python variable_name(self) -> str ``` ### `WorkspaceVariable` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` ### `ValidJinja` **Methods:** #### `is_error` ```python is_error(self) -> bool ``` # validation Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-schema_tools-validation # `prefect.utilities.schema_tools.validation` ## Functions ### `is_valid_schema` ```python is_valid_schema(schema: ObjectSchema, preprocess: bool = True) -> None ``` ### `validate` ```python validate(obj: dict[str, Any], schema: ObjectSchema, raise_on_error: bool = False, preprocess: bool = True, ignore_required: bool = False, allow_none_with_default: bool = False) -> list[JSONSchemaValidationError] ``` ### `is_valid` ```python is_valid(obj: dict[str, Any], schema: ObjectSchema) -> bool ``` ### `prioritize_placeholder_errors` ```python prioritize_placeholder_errors(errors: list[JSONSchemaValidationError]) -> list[JSONSchemaValidationError] ``` ### `build_error_obj` ```python build_error_obj(errors: list[JSONSchemaValidationError]) -> dict[str, Any] ``` ### `process_properties` ```python process_properties(properties: dict[str, dict[str, Any]], required_fields: list[str], allow_none_with_default: bool = False) -> None ``` ### `preprocess_schema` ```python preprocess_schema(schema: ObjectSchema, allow_none_with_default: bool = False) -> ObjectSchema ``` ## Classes ### `CircularSchemaRefError` ### `ValidationError` # services Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-services # `prefect.utilities.services` ## Functions ### `critical_service_loop` ```python critical_service_loop(workload: Callable[..., Coroutine[Any, Any, Any]], interval: float, memory: int = 10, consecutive: int = 3, backoff: int = 1, printer: Callable[..., None] = print, run_once: bool = False, jitter_range: Optional[float] = None) -> None ``` Runs the given `workload` function on the specified `interval`, while being forgiving of intermittent issues like temporary HTTP errors. If more than a certain number of `consecutive` errors occur, print a summary of up to `memory` recent exceptions to `printer`, then begin backoff. The loop will exit after reaching the consecutive error limit `backoff` times. On each backoff, the interval will be doubled. On a successful loop, the backoff will be reset. 
**Args:** * `workload`: the function to call * `interval`: how frequently to call it * `memory`: how many recent errors to remember * `consecutive`: how many consecutive errors must we see before we begin backoff * `backoff`: how many times we should allow consecutive errors before exiting * `printer`: a `print`-like function where errors will be reported * `run_once`: if set, the loop will only run once then return * `jitter_range`: if set, the interval will be a random variable (rv) drawn from a clamped Poisson distribution where lambda = interval and the rv is bound between `interval * (1 - range) < rv < interval * (1 + range)` ### `start_client_metrics_server` ```python start_client_metrics_server() -> None ``` Start the process-wide Prometheus metrics server for client metrics (if enabled with `PREFECT_CLIENT_METRICS_ENABLED`) on the port `PREFECT_CLIENT_METRICS_PORT`. ### `stop_client_metrics_server` ```python stop_client_metrics_server() -> None ``` Stop the process-wide Prometheus metrics server for client metrics, if it has previously been started # slugify Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-slugify # `prefect.utilities.slugify` *This module is empty or contains only private/internal implementations.* # templating Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-templating # `prefect.utilities.templating` ## Functions ### `determine_placeholder_type` ```python determine_placeholder_type(name: str) -> PlaceholderType ``` Determines the type of a placeholder based on its name. **Args:** * `name`: The name of the placeholder **Returns:** * The type of the placeholder ### `find_placeholders` ```python find_placeholders(template: T) -> set[Placeholder] ``` Finds all placeholders in a template. **Args:** * `template`: template to discover placeholders in **Returns:** * A set of all placeholders in the template ### `apply_values` ```python apply_values(template: T, values: dict[str, Any], remove_notset: bool = True, warn_on_notset: bool = False) -> Union[T, type[NotSet]] ``` Replaces placeholders in a template with values from a supplied dictionary. Will recursively replace placeholders in dictionaries and lists. If a value has no placeholders, it will be returned unchanged. If a template contains only a single placeholder, the placeholder will be fully replaced with the value. If a template contains text before or after a placeholder or there are multiple placeholders, the placeholders will be replaced with the corresponding variable values. If a template contains a placeholder that is not in `values`, NotSet will be returned to signify that no placeholder replacement occurred. If `template` is a dictionary that contains a key with a value of NotSet, the key will be removed in the return value unless `remove_notset` is set to False. **Args:** * `template`: template to discover and replace values in * `values`: The values to apply to placeholders in the template * `remove_notset`: If True, remove keys with an unset value * `warn_on_notset`: If True, warn when a placeholder is not found in `values` **Returns:** * The template with the values applied ### `resolve_block_document_references` ```python resolve_block_document_references(template: T, client: Optional['PrefectClient'] = None, value_transformer: Optional[Callable[[str, Any], Any]] = None) -> Union[T, dict[str, Any]] ``` Resolve block document references in a template by replacing each reference with its value or the return value of the transformer function if provided. 
Recursively searches for block document references in dictionaries and lists. Identifies block document references as dictionaries with the following structure: ``` { "$ref": { "block_document_id": <block_document_id> } } ``` where `<block_document_id>` is the ID of the block document to resolve. Once the block document is retrieved from the API, the data of the block document is used to replace the reference. ## Accessing Values: To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name. For a block document with the structure: ```json { "value": { "key": { "nested-key": "nested-value" }, "list": [ {"list-key": "list-value"}, 1, 2 ] } } ``` examples of value resolution are as follows: 1. Accessing a nested dictionary: Format: `prefect.blocks.<block-type-slug>.<block-document-name>.value.key` Example: Returns `{"nested-key": "nested-value"}` 2. Accessing a specific nested value: Format: `prefect.blocks.<block-type-slug>.<block-document-name>.value.key.nested-key` Example: Returns `"nested-value"` 3. Accessing a list element's key-value: Format: `prefect.blocks.<block-type-slug>.<block-document-name>.value.list[0].list-key` Example: Returns `"list-value"` ## Default Resolution for System Blocks: For system blocks, which only contain a `value` attribute, this attribute is resolved by default. **Args:** * `template`: The template to resolve block documents in * `value_transformer`: A function that takes the block placeholder and the block value and returns replacement text for the template **Returns:** * The template with block documents resolved ### `resolve_variables` ```python resolve_variables(template: T, client: Optional['PrefectClient'] = None) -> T ``` Resolve variables in a template by replacing each variable placeholder with the value of the variable. Recursively searches for variable placeholders in dictionaries and lists. Strips variable placeholders if the variable is not found. **Args:** * `template`: The template to resolve variables in **Returns:** * The template with variables resolved ## Classes ### `PlaceholderType` ### `Placeholder` # text Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-text # `prefect.utilities.text` ## Functions ### `truncated_to` ```python truncated_to(length: int, value: Optional[str]) -> str ``` ### `fuzzy_match_string` ```python fuzzy_match_string(word: str, possibilities: Iterable[str]) -> Optional[str] ``` # timeout Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-timeout # `prefect.utilities.timeout` ## Functions ### `fail_if_not_timeout_error` ```python fail_if_not_timeout_error(timeout_exc_type: type[Exception]) -> None ``` ### `timeout_async` ```python timeout_async(seconds: Optional[float] = None, timeout_exc_type: type[TimeoutError] = TimeoutError) ``` ### `timeout` ```python timeout(seconds: Optional[float] = None, timeout_exc_type: type[TimeoutError] = TimeoutError) ``` # urls Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-urls # `prefect.utilities.urls` ## Functions ### `validate_restricted_url` ```python validate_restricted_url(url: str) -> None ``` Validate that the provided URL is safe for outbound requests. This prevents attacks like SSRF (Server Side Request Forgery), where an attacker can make requests to internal services (like the GCP metadata service, localhost addresses, or in-cluster Kubernetes services). **Args:** * `url`: The URL to validate. **Raises:** * `ValueError`: If the URL is a restricted URL.
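A small sketch of this guard, assuming a localhost address is treated as restricted:

```python
from prefect.utilities.urls import validate_restricted_url

validate_restricted_url("https://example.com/webhook")  # an allowed URL passes silently

try:
    # localhost addresses are among the targets this check is meant to reject
    validate_restricted_url("http://localhost:8080/admin")
except ValueError as exc:
    print(f"Rejected: {exc}")
```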
### `convert_class_to_name` ```python convert_class_to_name(obj: Any) -> str ``` Convert CamelCase class name to dash-separated lowercase name ### `url_for` ```python url_for(obj: Union['PrefectFuture[Any]', 'Block', 'Variable', 'Automation', 'Resource', 'ReceivedEvent', BaseModel, str], obj_id: Optional[Union[str, UUID]] = None, url_type: URLType = 'ui', default_base_url: Optional[str] = None, **additional_format_kwargs: Any) -> Optional[str] ``` Returns the URL for a Prefect object. Pass in a supported object directly or provide an object name and ID. **Args:** * `obj`: A Prefect object to get the URL for, or its URL name and ID. * `obj_id`: The UUID of the object. * `url_type`: Whether to return the URL for the UI (default) or API. * `default_base_url`: The default base URL to use if no URL is configured. * `additional_format_kwargs`: Additional keyword arguments to pass to the URL format. **Returns:** * Optional\[str]: The URL for the given object or None if the object is not supported. **Examples:** url\_for(my\_flow\_run) url\_for(obj=my\_flow\_run) url\_for("flow-run", obj\_id="123e4567-e89b-12d3-a456-426614174000") # visualization Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-utilities-visualization # `prefect.utilities.visualization` Utilities for working with Flow\.visualize() ## Functions ### `get_task_viz_tracker` ```python get_task_viz_tracker() -> Optional['TaskVizTracker'] ``` ### `track_viz_task` ```python track_viz_task(is_async: bool, task_name: str, parameters: dict[str, Any], viz_return_value: Optional[Any] = None) -> Union[Coroutine[Any, Any, Any], Any] ``` Return a result if sync otherwise return a coroutine that returns the result ### `build_task_dependencies` ```python build_task_dependencies(task_run_tracker: TaskVizTracker) -> graphviz.Digraph ``` Constructs a Graphviz directed graph object that represents the dependencies between tasks in the given TaskVizTracker. * task\_run\_tracker (TaskVizTracker): An object containing tasks and their dependencies. * graphviz.Digraph: A directed graph object depicting the relationships and dependencies between tasks. Raises: * GraphvizImportError: If there's an ImportError related to graphviz. * FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a `viz_return_value`. ### `visualize_task_dependencies` ```python visualize_task_dependencies(graph: graphviz.Digraph, flow_run_name: str) -> None ``` Renders and displays a Graphviz directed graph representing task dependencies. The graph is rendered in PNG format and saved with the name specified by flow\_run\_name. After rendering, the visualization is opened and displayed. Parameters: * graph (graphviz.Digraph): The directed graph object to visualize. * flow\_run\_name (str): The name to use when saving the rendered graph image. Raises: * GraphvizExecutableNotFoundError: If Graphviz isn't found on the system. * FlowVisualizationError: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a `viz_return_value`. 
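These utilities back `Flow.visualize()`; a minimal illustrative flow is sketched below (it assumes the `graphviz` package and system executable are installed).

```python
from prefect import flow, task


@task
def extract() -> list[int]:
    return [1, 2, 3]


@task
def transform(data: list[int]) -> list[int]:
    return [x * 2 for x in data]


@flow
def etl() -> None:
    transform(extract())


if __name__ == "__main__":
    # Renders a PNG of the task dependency graph instead of executing the tasks
    etl.visualize()
```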
## Classes ### `FlowVisualizationError` ### `VisualizationUnsupportedError` ### `TaskVizTrackerState` ### `GraphvizImportError` ### `GraphvizExecutableNotFoundError` ### `VizTask` ### `TaskVizTracker` **Methods:** #### `add_task` ```python add_task(self, task: VizTask) -> None ``` #### `link_viz_return_value_to_viz_task` ```python link_viz_return_value_to_viz_task(self, viz_return_value: Any, viz_task: VizTask) -> None ``` We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256 because they are singletons. # variables Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-variables # `prefect.variables` ## Classes ### `Variable` Variables are named, mutable JSON values that can be shared across tasks and flows. **Args:** * `name`: A string identifying the variable. * `value`: A string that is the value of the variable. * `tags`: An optional list of strings to associate with the variable. **Methods:** #### `aget` ```python aget(cls, name: str, default: StrictVariableValue = None) -> StrictVariableValue ``` Asynchronously get a variable's value by name. If the variable does not exist, return the default value. **Args:** * `- name`: The name of the variable value to get. * `- default`: The default value to return if the variable does not exist. #### `aset` ```python aset(cls, name: str, value: StrictVariableValue, tags: Optional[list[str]] = None, overwrite: bool = False) -> 'Variable' ``` Asynchronously sets a new variable. If one exists with the same name, must pass `overwrite=True` Returns the newly set variable object. **Args:** * `- name`: The name of the variable to set. * `- value`: The value of the variable to set. * `- tags`: An optional list of strings to associate with the variable. * `- overwrite`: Whether to overwrite the variable if it already exists. #### `aunset` ```python aunset(cls, name: str) -> bool ``` Asynchronously unset a variable by name. **Args:** * `- name`: The name of the variable to unset. Returns `True` if the variable was deleted, `False` if the variable did not exist. #### `get` ```python get(cls, name: str, default: StrictVariableValue = None) -> StrictVariableValue ``` Get a variable's value by name. If the variable does not exist, return the default value. **Args:** * `- name`: The name of the variable value to get. * `- default`: The default value to return if the variable does not exist. #### `set` ```python set(cls, name: str, value: StrictVariableValue, tags: Optional[list[str]] = None, overwrite: bool = False) -> 'Variable' ``` Sets a new variable. If one exists with the same name, must pass `overwrite=True` Returns the newly set variable object. **Args:** * `- name`: The name of the variable to set. * `- value`: The value of the variable to set. * `- tags`: An optional list of strings to associate with the variable. * `- overwrite`: Whether to overwrite the variable if it already exists. #### `unset` ```python unset(cls, name: str) -> bool ``` Unset a variable by name. **Args:** * `- name`: The name of the variable to unset. Returns `True` if the variable was deleted, `False` if the variable did not exist. 
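A minimal usage sketch of the methods above, assuming a reachable Prefect API; the variable name, value, and tag are illustrative.

```python
from prefect.variables import Variable

# Create or update a variable; overwrite=True is required if it already exists.
Variable.set("answer", 42, tags=["example"], overwrite=True)

# Read it back, supplying a default in case the variable does not exist.
value = Variable.get("answer", default=None)
print(value)  # 42

# Remove it; returns True if it was deleted, False if it did not exist.
Variable.unset("answer")
```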
# __init__ Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-__init__ # `prefect.workers` *This module is empty or contains only private/internal implementations.* # base Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-base # `prefect.workers.base` ## Classes ### `BaseJobConfiguration` **Methods:** #### `from_template_and_values` ```python from_template_and_values(cls, base_job_template: dict[str, Any], values: dict[str, Any], client: 'PrefectClient | None' = None) ``` Creates a valid worker configuration object from the provided base configuration and overrides. Important: this method expects that the base\_job\_template was already validated server-side. #### `is_using_a_runner` ```python is_using_a_runner(self) -> bool ``` #### `json_template` ```python json_template(cls) -> dict[str, Any] ``` Returns a dict with job configuration as keys and the corresponding templates as values Defaults to using the job configuration parameter name as the template variable name. e.g. ```python { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # `template2` specifically provide as template } ``` #### `prepare_for_flow_run` ```python prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None) -> None ``` Prepare the job configuration for a flow run. This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run. **Args:** * `flow_run`: The flow run to be executed. * `deployment`: The deployment that the flow run is associated with. * `flow`: The flow that the flow run is associated with. * `work_pool`: The work pool that the flow run is running in. * `worker_name`: The name of the worker that is submitting the flow run. ### `BaseVariables` **Methods:** #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: Type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? ### `BaseWorkerResult` ### `BaseWorker` **Methods:** #### `client` ```python client(self) -> PrefectClient ``` #### `get_all_available_worker_types` ```python get_all_available_worker_types() -> list[str] ``` Returns all worker types available in the local registry. #### `get_and_submit_flow_runs` ```python get_and_submit_flow_runs(self) -> list['FlowRun'] ``` #### `get_default_base_job_template` ```python get_default_base_job_template(cls) -> dict[str, Any] ``` #### `get_description` ```python get_description(cls) -> str ``` #### `get_documentation_url` ```python get_documentation_url(cls) -> str ``` #### `get_flow_run_logger` ```python get_flow_run_logger(self, flow_run: 'FlowRun') -> PrefectLogAdapter ``` #### `get_logo_url` ```python get_logo_url(cls) -> str ``` #### `get_name_slug` ```python get_name_slug(self) -> str ``` #### `get_status` ```python get_status(self) -> dict[str, Any] ``` Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings. #### `get_worker_class_from_type` ```python get_worker_class_from_type(type: str) -> Optional[Type['BaseWorker[Any, Any, Any]']] ``` Returns the worker class for a given worker type. 
If the worker type is not recognized, returns None.

#### `is_worker_still_polling`

```python
is_worker_still_polling(self, query_interval_seconds: float) -> bool
```

This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time.

The `query_interval_seconds` is the same value that is used by the loop services - we will evaluate if the `_last_polled_time` was within that interval x 30 (so 10s -> 5m).

The instance property `self._last_polled_time` is currently set/updated in `get_and_submit_flow_runs()`.

#### `limiter`

```python
limiter(self) -> anyio.CapacityLimiter
```

#### `run`

```python
run(self, flow_run: 'FlowRun', configuration: C, task_status: Optional[anyio.abc.TaskStatus[int]] = None) -> R
```

Runs a given flow run on the current worker.

#### `setup`

```python
setup(self) -> None
```

Prepares the worker to run.

#### `start`

```python
start(self, run_once: bool = False, with_healthcheck: bool = False, printer: Callable[..., None] = print) -> None
```

Starts the worker and runs the main worker loops.

By default, the worker will run loops to poll for scheduled/cancelled flow runs and sync with the Prefect API server.

If `run_once` is set, the worker will only run each loop once and then return.

If `with_healthcheck` is set, the worker will start a healthcheck server which can be used to determine if the worker is still polling for flow runs and restart the worker if necessary.

**Args:**

* `run_once`: If set, the worker will only run each loop once then return.
* `with_healthcheck`: If set, the worker will start a healthcheck server.
* `printer`: A `print`-like function where logs will be reported.

#### `submit`

```python
submit(self, flow: 'Flow[..., FR]', parameters: dict[str, Any] | None = None, job_variables: dict[str, Any] | None = None) -> 'PrefectFlowRunFuture[FR]'
```

EXPERIMENTAL: The interface for this method is subject to change.

Submits a flow to run via the worker.

**Args:**

* `flow`: The flow to submit
* `parameters`: The parameters to pass to the flow

**Returns:**

* A flow run object

#### `sync_with_backend`

```python
sync_with_backend(self) -> None
```

Updates the worker's local information about its current work pool and queues. Sends a worker heartbeat to the API.

#### `teardown`

```python
teardown(self, *exc_info: Any) -> None
```

Cleans up resources after the worker is stopped.

#### `work_pool`

```python
work_pool(self) -> WorkPool
```

# block

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-block

# `prefect.workers.block`

2024-06-27: This surfaces an actionable error message for moved or removed objects in Prefect 3.0 upgrade.

# cloud

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-cloud

# `prefect.workers.cloud`

2024-06-27: This surfaces an actionable error message for moved or removed objects in Prefect 3.0 upgrade.

# process

Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-process

# `prefect.workers.process`

Module containing the Process worker used for executing flow runs as subprocesses.

To start a Process worker, run the following command:

```bash
prefect worker start --pool 'my-work-pool' --type process
```

Replace `my-work-pool` with the name of the work pool you want the worker to poll for flow runs.

For more information about work pools and workers, check out the [Prefect docs](https://docs.prefect.io/v3/concepts/work-pools/).
## Classes ### `ProcessJobConfiguration` **Methods:** #### `from_template_and_values` ```python from_template_and_values(cls, base_job_template: dict[str, Any], values: dict[str, Any], client: 'PrefectClient | None' = None) ``` Creates a valid worker configuration object from the provided base configuration and overrides. Important: this method expects that the base\_job\_template was already validated server-side. #### `is_using_a_runner` ```python is_using_a_runner(self) -> bool ``` #### `json_template` ```python json_template(cls) -> dict[str, Any] ``` Returns a dict with job configuration as keys and the corresponding templates as values Defaults to using the job configuration parameter name as the template variable name. e.g. ```python { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # `template2` specifically provide as template } ``` #### `prepare_for_flow_run` ```python prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None) -> None ``` #### `prepare_for_flow_run` ```python prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None) -> None ``` Prepare the job configuration for a flow run. This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run. **Args:** * `flow_run`: The flow run to be executed. * `deployment`: The deployment that the flow run is associated with. * `flow`: The flow that the flow run is associated with. * `work_pool`: The work pool that the flow run is running in. * `worker_name`: The name of the worker that is submitting the flow run. #### `validate_working_dir` ```python validate_working_dir(cls, v: Path | str | None) -> Path | None ``` ### `ProcessVariables` **Methods:** #### `model_json_schema` ```python model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: Type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? ### `ProcessWorkerResult` Contains information about the final state of a completed process ### `ProcessWorker` **Methods:** #### `run` ```python run(self, flow_run: 'FlowRun', configuration: ProcessJobConfiguration, task_status: Optional[anyio.abc.TaskStatus[int]] = None) -> ProcessWorkerResult ``` #### `start` ```python start(self, run_once: bool = False, with_healthcheck: bool = False, printer: Callable[..., None] = print) -> None ``` Starts the worker and runs the main worker loops. By default, the worker will run loops to poll for scheduled/cancelled flow runs and sync with the Prefect API server. If `run_once` is set, the worker will only run each loop once and then return. If `with_healthcheck` is set, the worker will start a healthcheck server which can be used to determine if the worker is still polling for flow runs and restart the worker if necessary. **Args:** * `run_once`: If set, the worker will only run each loop once then return. * `with_healthcheck`: If set, the worker will start a healthcheck server. * `printer`: A `print`-like function where logs will be reported. 
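In addition to the `prefect worker start` CLI command shown in the module description, a Process worker can be started programmatically. The following is a minimal sketch, assuming a work pool named `my-work-pool` already exists and the Prefect API is reachable.

```python
import asyncio

from prefect.workers.process import ProcessWorker


async def main() -> None:
    worker = ProcessWorker(work_pool_name="my-work-pool")
    # run_once=True polls each worker loop a single time and returns,
    # which is useful for testing; drop it to poll continuously.
    await worker.start(run_once=True)


if __name__ == "__main__":
    asyncio.run(main())
```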
# server Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-server # `prefect.workers.server` ## Functions ### `build_healthcheck_server` ```python build_healthcheck_server(worker: BaseWorker[Any, Any, Any], query_interval_seconds: float, log_level: str = 'error') -> uvicorn.Server ``` Build a healthcheck FastAPI server for a worker. **Args:** * `worker`: the worker whose health we will check * `log_level`: the log ### `start_healthcheck_server` ```python start_healthcheck_server(worker: BaseWorker[Any, Any, Any], query_interval_seconds: float, log_level: str = 'error') -> None ``` Run a healthcheck FastAPI server for a worker. **Args:** * `worker`: the worker whose health we will check * `log_level`: the log level to use for the server # utilities Source: https://docs-3.prefect.io/v3/api-ref/python/prefect-workers-utilities # `prefect.workers.utilities` ## Functions ### `get_available_work_pool_types` ```python get_available_work_pool_types() -> List[str] ``` ### `get_default_base_job_template_for_infrastructure_type` ```python get_default_base_job_template_for_infrastructure_type(infra_type: str) -> Optional[Dict[str, Any]] ``` # Cloud API Overview Source: https://docs-3.prefect.io/v3/api-ref/rest-api/cloud/index The Prefect Cloud API enables you to interact programmatically with Prefect Cloud. The Prefect Cloud API is organized around REST. Explore the interactive [Prefect Cloud REST API reference](https://app.prefect.cloud/api/docs). # REST API overview Source: https://docs-3.prefect.io/v3/api-ref/rest-api/index Prefect REST API for interacting with Prefect Cloud & self-hosted Prefect server. The Prefect API is organized around REST. It is used for communicating data from clients to a self-hosted Prefect server instance so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard. Prefect Cloud and self-hosted Prefect server each provide a REST API. * Prefect Cloud: * [Interactive Prefect Cloud REST API documentation](https://app.prefect.cloud/api/docs) * [Finding your Prefect Cloud details](#finding-your-prefect-cloud-details) * Self-hosted Prefect server: * Interactive REST API documentation for self-hosted Prefect server is available under **Server API** on the sidebar navigation or at `http://localhost:4200/docs` or the `/docs` endpoint of the [PREFECT\_API\_URL](/v3/develop/settings-and-profiles/) you have configured to access the server. You must have the server running with `prefect server start` to access the interactive documentation. ## Interact with the REST API You can interact with the Prefect REST API in several ways: * Create an instance of [`PrefectClient`](https://reference.prefect.io/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient), which is part of the [Prefect Python SDK](/v3/api-ref/python/). 
* Use your favorite Python HTTP library such as [Requests](https://requests.readthedocs.io/en/latest/) or [HTTPX](https://www.python-httpx.org/)
* Use an HTTP library in your language of choice
* Use [curl](https://curl.se/) from the command line

### PrefectClient with self-hosted Prefect server

This example uses `PrefectClient` with self-hosted Prefect server:

```python
import asyncio
from prefect.client.orchestration import get_client


async def get_flows():
    client = get_client()
    r = await client.read_flows(limit=5)
    return r


if __name__ == "__main__":
    r = asyncio.run(get_flows())

    for flow in r:
        print(flow.name, flow.id)
```

Output:

```bash
cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022
dog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766
sloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d
capybara-facts fbadaf8b-584f-48b9-b092-07d351edd424
lemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c
```

### Requests with Prefect

This example uses the Requests library with Prefect Cloud to return the five newest artifacts.

```python
import requests

PREFECT_API_URL="https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here"
PREFECT_API_KEY="123abc_my_api_key_goes_here"

data = {
    "sort": "CREATED_DESC",
    "limit": 5,
    "artifacts": {
        "key": {
            "exists_": True
        }
    }
}

headers = {"Authorization": f"Bearer {PREFECT_API_KEY}"}
endpoint = f"{PREFECT_API_URL}/artifacts/filter"

response = requests.post(endpoint, headers=headers, json=data)
assert response.status_code == 200
for artifact in response.json():
    print(artifact)
```

### curl with Prefect Cloud

This example uses curl with Prefect Cloud to create a flow run:

```bash
ACCOUNT_ID="abc-my-cloud-account-id-goes-here"
WORKSPACE_ID="123-my-workspace-id-goes-here"
PREFECT_API_URL="https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID"
PREFECT_API_KEY="123abc_my_api_key_goes_here"
DEPLOYMENT_ID="my_deployment_id"

curl --location --request POST "$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $PREFECT_API_KEY" \
  --header "X-PREFECT-API-VERSION: 0.8.4" \
  --data-raw "{}"
```

Note that in this example `--data-raw "{}"` is required and is where you can specify other aspects of the flow run such as the state. Windows users substitute `^` for `\` for multi-line commands.

## Finding your Prefect Cloud details

When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the [workspace](/v3/manage/cloud/workspaces/) you want to interact with. You can find both IDs for a [Prefect profile](/v3/develop/settings-and-profiles/) in the CLI with `prefect profile inspect my_profile`. This command will also display your [Prefect API key](/v3/how-to-guides/cloud/manage-users/api-keys), as shown below:

```bash
PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'
PREFECT_API_KEY='123abc_my_api_key_is_here'
```

Alternatively, view your Account ID and Workspace ID in your browser URL. For example: `https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here`.

## REST guidelines

The REST APIs adhere to the following guidelines:

* Collection names are pluralized (for example, `/flows` or `/runs`).
* We indicate variable placeholders with colons: `GET /flows/:id`.
* We use snake case for route names: `GET /task_runs`.
* We avoid nested resources unless there is no possibility of accessing the child resource outside the parent context. For example, we query `/task_runs` with a flow run filter instead of accessing `/flow_runs/:id/task_runs`.
* The API is hosted with an `/api/:version` prefix that (optionally) allows versioning in the future. By convention, we treat that as part of the base URL and do not include that in API examples.
* Filtering, sorting, and pagination parameters are provided in the request body of `POST` requests where applicable.
* Pagination parameters are `limit` and `offset`.
* Sorting is specified with a single `sort` parameter.
* See more information on [filtering](#filtering) below.

### HTTP verbs

* `GET`, `PUT` and `DELETE` requests are always idempotent. `POST` and `PATCH` are not guaranteed to be idempotent.
* `GET` requests cannot receive information from the request body.
* `POST` requests can receive information from the request body.
* `POST /collection` creates a new member of the collection.
* `GET /collection` lists all members of the collection.
* `GET /collection/:id` gets a specific member of the collection by ID.
* `DELETE /collection/:id` deletes a specific member of the collection.
* `PUT /collection/:id` creates or replaces a specific member of the collection.
* `PATCH /collection/:id` partially updates a specific member of the collection.
* `POST /collection/action` is how we implement non-CRUD actions. For example, to set a flow run's state, we use `POST /flow_runs/:id/set_state`.
* `POST /collection/action` may also be used for read-only queries. This is to allow us to send complex arguments as body arguments (which often cannot be done via `GET`). Examples include `POST /flow_runs/filter`, `POST /flow_runs/count`, and `POST /flow_runs/history`.

## Filter results

Objects can be filtered by providing filter criteria in the body of a `POST` request. When multiple criteria are specified, logical AND will be applied to the criteria.

Filter criteria are structured as follows:

```json
{
    "objects": {
        "object_field": {
            "field_operator_": <field_value>
        }
    }
}
```

In this example, `objects` is the name of the collection to filter over (for example, `flows`). The collection can be either the object being queried for (`flows` for `POST /flows/filter`) or a related object (`flow_runs` for `POST /flows/filter`).

`object_field` is the name of the field over which to filter (`name` for `flows`). Note that some objects may have nested object fields, such as `{flow_run: {state: {type: {any_: []}}}}`.

`field_operator_` is the operator to apply to a field when filtering. Common examples include:

* `any_`: return objects where this field matches any of the following values.
* `is_null_`: return objects where this field is or is not null.
* `eq_`: return objects where this field is equal to the following value.
* `all_`: return objects where this field matches all of the following values.
* `before_`: return objects where this datetime field is less than or equal to the following value.
* `after_`: return objects where this datetime field is greater than or equal to the following value.

For example, to query for flows with the tag `"database"` and failed flow runs, `POST /flows/filter` with the following request body:

```json
{
    "flows": {
        "tags": {
            "all_": ["database"]
        }
    },
    "flow_runs": {
        "state": {
            "type": {
                "any_": ["FAILED"]
            }
        }
    }
}
```

## OpenAPI

The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document.
[OpenAPI](https://swagger.io/docs/specification/about/) is a standard specification for describing REST APIs. To generate self-hosted Prefect server's complete OpenAPI document, run the following commands in an interactive Python session: ```python from prefect.server.api.server import create_app app = create_app() openapi_doc = app.openapi() ``` This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance. # Clear Database Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/admin/clear-database post /api/admin/database/clear Clear all database tables without dropping them. # Create Database Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/admin/create-database post /api/admin/database/create Create all database objects. # Drop Database Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/admin/drop-database post /api/admin/database/drop Drop all database objects. # Read Settings Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/admin/read-settings get /api/admin/settings Get the current Prefect REST API settings. Secret setting values will be obfuscated. # Read Version Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/admin/read-version get /api/admin/version Returns the Prefect version number # Count Artifacts Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/count-artifacts post /api/artifacts/count Count artifacts from the database. # Count Latest Artifacts Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/count-latest-artifacts post /api/artifacts/latest/count Count artifacts from the database. # Create Artifact Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/create-artifact post /api/artifacts/ Create an artifact. For more information, see https://docs.prefect.io/v3/develop/artifacts. # Delete Artifact Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/delete-artifact delete /api/artifacts/{id} Delete an artifact from the database. # Read Artifact Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/read-artifact get /api/artifacts/{id} Retrieve an artifact from the database. # Read Artifacts Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/read-artifacts post /api/artifacts/filter Retrieve artifacts from the database. # Read Latest Artifact Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/read-latest-artifact get /api/artifacts/{key}/latest Retrieve the latest artifact from the artifact table. # Read Latest Artifacts Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/read-latest-artifacts post /api/artifacts/latest/filter Retrieve artifacts from the database. # Update Artifact Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/artifacts/update-artifact patch /api/artifacts/{id} Update an artifact in the database. # Count Automations Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/count-automations post /api/automations/count # Create Automation Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/create-automation post /api/automations/ Create an automation. For more information, see https://docs.prefect.io/v3/automate. 
# Delete Automation Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/delete-automation delete /api/automations/{id} # Delete Automations Owned By Resource Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/delete-automations-owned-by-resource delete /api/automations/owned-by/{resource_id} # Patch Automation Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/patch-automation patch /api/automations/{id} # Read Automation Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/read-automation get /api/automations/{id} # Read Automations Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/read-automations post /api/automations/filter # Read Automations Related To Resource Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/read-automations-related-to-resource get /api/automations/related-to/{resource_id} # Update Automation Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/update-automation put /api/automations/{id} # Validate Template Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/validate-template post /api/templates/validate # Validate Template Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/automations/validate-template-1 post /api/automations/templates/validate # Read Available Block Capabilities Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-capabilities/read-available-block-capabilities get /api/block_capabilities/ Get available block capabilities. For more information, see https://docs.prefect.io/v3/develop/blocks. # Count Block Documents Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-documents/count-block-documents post /api/block_documents/count Count block documents. # Create Block Document Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-documents/create-block-document post /api/block_documents/ Create a new block document. For more information, see https://docs.prefect.io/v3/develop/blocks. # Delete Block Document Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-documents/delete-block-document delete /api/block_documents/{id} # Read Block Document By Id Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-documents/read-block-document-by-id get /api/block_documents/{id} # Read Block Documents Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-documents/read-block-documents post /api/block_documents/filter Query for block documents. # Update Block Document Data Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-documents/update-block-document-data patch /api/block_documents/{id} # Create Block Schema Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-schemas/create-block-schema post /api/block_schemas/ Create a block schema. For more information, see https://docs.prefect.io/v3/develop/blocks. # Delete Block Schema Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-schemas/delete-block-schema delete /api/block_schemas/{id} Delete a block schema by id. # Read Block Schema By Checksum Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-schemas/read-block-schema-by-checksum get /api/block_schemas/checksum/{checksum} # Read Block Schema By Id Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-schemas/read-block-schema-by-id get /api/block_schemas/{id} Get a block schema by id. 
# Read Block Schemas Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-schemas/read-block-schemas post /api/block_schemas/filter Read all block schemas, optionally filtered by type # Create Block Type Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/create-block-type post /api/block_types/ Create a new block type. For more information, see https://docs.prefect.io/v3/develop/blocks. # Delete Block Type Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/delete-block-type delete /api/block_types/{id} # Install System Block Types Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/install-system-block-types post /api/block_types/install_system_block_types # Read Block Document By Name For Block Type Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-document-by-name-for-block-type get /api/block_types/slug/{slug}/block_documents/name/{block_document_name} # Read Block Documents For Block Type Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-documents-for-block-type get /api/block_types/slug/{slug}/block_documents # Read Block Type By Id Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-type-by-id get /api/block_types/{id} Get a block type by ID. # Read Block Type By Slug Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-type-by-slug get /api/block_types/slug/{slug} Get a block type by name. # Read Block Types Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-types post /api/block_types/filter Gets all block types. Optionally limit return with limit and offset. # Update Block Type Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/block-types/update-block-type patch /api/block_types/{id} Update a block type. # Read View Content Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/collections/read-view-content get /api/collections/views/{view} Reads the content of a view from the prefect-collection-registry. # Bulk Decrement Active Slots Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-decrement-active-slots post /api/v2/concurrency_limits/decrement # Bulk Decrement Active Slots With Lease Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-decrement-active-slots-with-lease post /api/v2/concurrency_limits/decrement-with-lease # Bulk Increment Active Slots Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-increment-active-slots post /api/v2/concurrency_limits/increment # Bulk Increment Active Slots With Lease Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-increment-active-slots-with-lease post /api/v2/concurrency_limits/increment-with-lease # Create Concurrency Limit V2 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/create-concurrency-limit-v2 post /api/v2/concurrency_limits/ Create a task run concurrency limit. For more information, see https://docs.prefect.io/v3/develop/global-concurrency-limits. 
# Delete Concurrency Limit V2 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/delete-concurrency-limit-v2 delete /api/v2/concurrency_limits/{id_or_name} # Read All Concurrency Limits V2 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/read-all-concurrency-limits-v2 post /api/v2/concurrency_limits/filter # Read Concurrency Limit V2 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/read-concurrency-limit-v2 get /api/v2/concurrency_limits/{id_or_name} # Renew Concurrency Lease Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/renew-concurrency-lease post /api/v2/concurrency_limits/leases/{lease_id}/renew # Update Concurrency Limit V2 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/update-concurrency-limit-v2 patch /api/v2/concurrency_limits/{id_or_name} # Create Concurrency Limit Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/create-concurrency-limit post /api/concurrency_limits/ Create a task run concurrency limit. For more information, see https://docs.prefect.io/v3/develop/task-run-limits. # Decrement Concurrency Limits V1 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/decrement-concurrency-limits-v1 post /api/concurrency_limits/decrement Decrement concurrency limits for the given tags. Finds and revokes the lease for V2 limits or decrements V1 active slots. Returns the list of limits that were decremented. # Delete Concurrency Limit Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/delete-concurrency-limit delete /api/concurrency_limits/{id} # Delete Concurrency Limit By Tag Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/delete-concurrency-limit-by-tag delete /api/concurrency_limits/tag/{tag} # Increment Concurrency Limits V1 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/increment-concurrency-limits-v1 post /api/concurrency_limits/increment Increment concurrency limits for the given tags. During migration, this handles both V1 and V2 limits to support mixed states. Post-migration, it only uses V2 with lease-based concurrency. # Read Concurrency Limit Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/read-concurrency-limit get /api/concurrency_limits/{id} Get a concurrency limit by id. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. # Read Concurrency Limit By Tag Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/read-concurrency-limit-by-tag get /api/concurrency_limits/tag/{tag} Get a concurrency limit by tag. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. # Read Concurrency Limits Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/read-concurrency-limits post /api/concurrency_limits/filter Query for concurrency limits. For each concurrency limit the `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. 
# Reset Concurrency Limit By Tag Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/reset-concurrency-limit-by-tag post /api/concurrency_limits/tag/{tag}/reset # Create Csrf Token Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/create-csrf-token get /api/csrf-token Create or update a CSRF token for a client # Count Deployments Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/count-deployments post /api/deployments/count Count deployments. # Create Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/create-deployment post /api/deployments/ Gracefully creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated. If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted. For more information, see https://docs.prefect.io/v3/deploy. # Create Deployment Schedules Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/create-deployment-schedules post /api/deployments/{id}/schedules # Create Flow Run From Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/create-flow-run-from-deployment post /api/deployments/{id}/create_flow_run Create a flow run from a deployment. Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used. If no state is provided, the flow run will be created in a SCHEDULED state. # Delete Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/delete-deployment delete /api/deployments/{id} Delete a deployment by id. # Delete Deployment Schedule Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/delete-deployment-schedule delete /api/deployments/{id}/schedules/{schedule_id} # Get Scheduled Flow Runs For Deployments Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/get-scheduled-flow-runs-for-deployments post /api/deployments/get_scheduled_flow_runs Get scheduled runs for a set of deployments. Used by a runner to poll for work. # Paginate Deployments Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/paginate-deployments post /api/deployments/paginate Pagination query for flow runs. # Pause Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/pause-deployment post /api/deployments/{id}/pause_deployment Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted. # Read Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployment get /api/deployments/{id} Get a deployment by id. # Read Deployment By Name Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployment-by-name get /api/deployments/name/{flow_name}/{deployment_name} Get a deployment using the name of the flow and the deployment. # Read Deployment Schedules Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployment-schedules get /api/deployments/{id}/schedules # Read Deployments Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployments post /api/deployments/filter Query for deployments. 
# Resume Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/resume-deployment post /api/deployments/{id}/resume_deployment Set a deployment schedule to active. Runs will be scheduled immediately. # Schedule Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/schedule-deployment post /api/deployments/{id}/schedule Schedule runs for a deployment. For backfills, provide start/end times in the past. This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected. - Runs will be generated starting on or after the `start_time` - No more than `max_runs` runs will be generated - No runs will be generated after `end_time` is reached - At least `min_runs` runs will be generated - Runs will be generated until at least `start_time + min_time` is reached # Update Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/update-deployment patch /api/deployments/{id} # Update Deployment Schedule Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/update-deployment-schedule patch /api/deployments/{id}/schedules/{schedule_id} # Work Queue Check For Deployment Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/deployments/work-queue-check-for-deployment get /api/deployments/{id}/work_queue_check Get list of work-queues that are able to pick up the specified deployment. This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments. # Count Account Events Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/events/count-account-events post /api/events/count-by/{countable} Returns distinct objects and the count of events associated with them. Objects that can be counted include the day the event occurred, the type of event, or the IDs of the resources associated with the event. # Create Events Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/events/create-events post /api/events Record a batch of Events. For more information, see https://docs.prefect.io/v3/concepts/events. # Read Account Events Page Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/events/read-account-events-page get /api/events/filter/next Returns the next page of Events for a previous query against the given Account, and the URL to request the next page (if there are more results). # Read Events Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/events/read-events post /api/events/filter Queries for Events matching the given filter criteria in the given Account. Returns the first page of results, and the URL to request the next page (if there are more results). # Read Flow Run State Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-run-states/read-flow-run-state get /api/flow_run_states/{id} Get a flow run state by id. For more information, see https://docs.prefect.io/v3/develop/write-flows#final-state-determination. # Read Flow Run States Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-run-states/read-flow-run-states get /api/flow_run_states/ Get states associated with a flow run. 
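The two flow run state endpoints above are also exposed through the Python client. A small sketch, assuming a configured client and a known flow run ID (the UUID below is a placeholder):

```python
import asyncio
from uuid import UUID

from prefect.client.orchestration import get_client


async def show_state_history(flow_run_id: UUID) -> None:
    async with get_client() as client:
        # Returns the states recorded for the given flow run.
        states = await client.read_flow_run_states(flow_run_id)
        for state in states:
            print(state.timestamp, state.type.value, state.name)


if __name__ == "__main__":
    # Replace with a real flow run ID from your server or workspace.
    asyncio.run(show_state_history(UUID("00000000-0000-0000-0000-000000000000")))
```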
# Average Flow Run Lateness Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/average-flow-run-lateness post /api/flow_runs/lateness Query for average flow-run lateness in seconds. # Count Flow Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/count-flow-runs post /api/flow_runs/count Query for flow runs. # Create Flow Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/create-flow-run post /api/flow_runs/ Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned. If no state is provided, the flow run will be created in a PENDING state. For more information, see https://docs.prefect.io/v3/develop/write-flows. # Create Flow Run Input Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/create-flow-run-input post /api/flow_runs/{id}/input Create a key/value input for a flow run. # Delete Flow Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/delete-flow-run delete /api/flow_runs/{id} Delete a flow run by id. # Delete Flow Run Input Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/delete-flow-run-input delete /api/flow_runs/{id}/input/{key} Delete a flow run input # Download Logs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/download-logs get /api/flow_runs/{id}/logs/download Download all flow run logs as a CSV file, collecting all logs until there are no more logs to retrieve. # Filter Flow Run Input Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/filter-flow-run-input post /api/flow_runs/{id}/input/filter Filter flow run inputs by key prefix # Flow Run History Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/flow-run-history post /api/flow_runs/history Query for flow run history data across a given range and interval. # Read Flow Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/paginate-flow-runs post /api/flow_runs/filter Query for flow runs. # Read Flow Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run get /api/flow_runs/{id} Get a flow run by id. # Read Flow Run Graph V1 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run-graph-v1 get /api/flow_runs/{id}/graph Get a task run dependency map for a given flow run. # Read Flow Run Graph V2 Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run-graph-v2 get /api/flow_runs/{id}/graph-v2 Get a graph of the tasks and subflow runs for the given flow run # Read Flow Run Input Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run-input get /api/flow_runs/{id}/input/{key} Create a value from a flow run input # Read Flow Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-runs post /api/flow_runs/filter Query for flow runs. # Resume Flow Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/resume-flow-run post /api/flow_runs/{id}/resume Resume a paused flow run. # Set Flow Run State Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/set-flow-run-state post /api/flow_runs/{id}/set_state Set a flow run state, invoking any orchestration rules. # Update Flow Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/update-flow-run patch /api/flow_runs/{id} Updates a flow run. 
# Update Flow Run Labels Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flow-runs/update-flow-run-labels patch /api/flow_runs/{id}/labels Update the labels of a flow run. # Count Flows Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/count-flows post /api/flows/count Count flows. # Create Flow Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/create-flow post /api/flows/ Gracefully creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned. For more information, see https://docs.prefect.io/v3/develop/write-flows. # Delete Flow Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/delete-flow delete /api/flows/{id} Delete a flow by id. # Paginate Flows Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/paginate-flows post /api/flows/paginate Pagination query for flows. # Read Flow Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/read-flow get /api/flows/{id} Get a flow by id. # Read Flow By Name Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/read-flow-by-name get /api/flows/name/{name} Get a flow by name. # Read Flows Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/read-flows post /api/flows/filter Query for flows. # Update Flow Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/flows/update-flow patch /api/flows/{id} Updates a flow. # Server API Overview Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/index The Prefect server API enables you to interact programmatically with self-hosted Prefect server. The self-hosted Prefect server API is organized around REST. Select links in the left navigation menu to explore. Learn about [self-hosting Prefect server](/v3/how-to-guides/self-hosted/server-cli). # Create Logs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/logs/create-logs post /api/logs/ Create new logs from the provided schema. For more information, see https://docs.prefect.io/v3/develop/logging. # Read Logs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/logs/read-logs post /api/logs/filter Query for logs. # Health Check Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/root/health-check get /api/health # Hello Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/root/hello get /api/hello Say hello! # Perform Readiness Check Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/root/perform-readiness-check get /api/ready # Server Version Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/root/server-version get /api/version # Create Saved Search Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/savedsearches/create-saved-search put /api/saved_searches/ Gracefully creates a new saved search from the provided schema. If a saved search with the same name already exists, the saved search's fields are replaced. # Delete Saved Search Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/savedsearches/delete-saved-search delete /api/saved_searches/{id} Delete a saved search by id. # Read Saved Search Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/savedsearches/read-saved-search get /api/saved_searches/{id} Get a saved search by id. # Read Saved Searches Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/savedsearches/read-saved-searches post /api/saved_searches/filter Query for saved searches. 
# Read Task Run State Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-run-states/read-task-run-state get /api/task_run_states/{id} Get a task run state by id. For more information, see https://docs.prefect.io/v3/develop/write-tasks. # Read Task Run States Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-run-states/read-task-run-states get /api/task_run_states/ Get states associated with a task run. # Count Task Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/count-task-runs post /api/task_runs/count Count task runs. # Create Task Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/create-task-run post /api/task_runs/ Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If no state is provided, the task run will be created in a PENDING state. For more information, see https://docs.prefect.io/v3/develop/write-tasks. # Delete Task Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/delete-task-run delete /api/task_runs/{id} Delete a task run by id. # Paginate Task Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/paginate-task-runs post /api/task_runs/paginate Pagination query for task runs. # Read Task Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/read-task-run get /api/task_runs/{id} Get a task run by id. # Read Task Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/read-task-runs post /api/task_runs/filter Query for task runs. # Set Task Run State Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/set-task-run-state post /api/task_runs/{id}/set_state Set a task run state, invoking any orchestration rules. # Task Run History Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/task-run-history post /api/task_runs/history Query for task run history data across a given range and interval. # Update Task Run Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-runs/update-task-run patch /api/task_runs/{id} Updates a task run. # Read Task Workers Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/task-workers/read-task-workers post /api/task_workers/filter Read active task workers. Optionally filter by task keys. For more information, see https://docs.prefect.io/v3/concepts/flows-and-tasks#background-tasks. # Count Variables Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/count-variables post /api/variables/count # Create Variable Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/create-variable post /api/variables/ Create a variable. For more information, see https://docs.prefect.io/v3/develop/variables. 
# Delete Variable Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/delete-variable delete /api/variables/{id} # Delete Variable By Name Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/delete-variable-by-name delete /api/variables/name/{name} # Read Variable Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/read-variable get /api/variables/{id} # Read Variable By Name Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/read-variable-by-name get /api/variables/name/{name} # Read Variables Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/read-variables post /api/variables/filter # Update Variable Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/update-variable patch /api/variables/{id} # Update Variable By Name Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/variables/update-variable-by-name patch /api/variables/name/{name} # Count Work Pools Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/count-work-pools post /api/work_pools/count Count work pools # Create Work Pool Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/create-work-pool post /api/work_pools/ Creates a new work pool. If a work pool with the same name already exists, an error will be raised. For more information, see https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools. # Create Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/create-work-queue post /api/work_pools/{work_pool_name}/queues Creates a new work pool queue. If a work pool queue with the same name already exists, an error will be raised. For more information, see https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#work-queues. 
# Delete Work Pool Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/delete-work-pool delete /api/work_pools/{name} Delete a work pool # Delete Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/delete-work-queue delete /api/work_pools/{work_pool_name}/queues/{name} Delete a work pool queue # Delete Worker Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/delete-worker delete /api/work_pools/{work_pool_name}/workers/{name} Delete a work pool's worker # Get Scheduled Flow Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/get-scheduled-flow-runs post /api/work_pools/{name}/get_scheduled_flow_runs Load scheduled runs for a worker # Read Work Pool Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-pool get /api/work_pools/{name} Read a work pool by name # Read Work Pools Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-pools post /api/work_pools/filter Read multiple work pools # Read Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-queue get /api/work_pools/{work_pool_name}/queues/{name} Read a work pool queue # Read Work Queues Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-queues post /api/work_pools/{work_pool_name}/queues/filter Read all work pool queues # Read Workers Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/read-workers post /api/work_pools/{work_pool_name}/workers/filter Read all worker processes # Update Work Pool Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/update-work-pool patch /api/work_pools/{name} Update a work pool # Update Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/update-work-queue patch /api/work_pools/{work_pool_name}/queues/{name} Update a work pool queue # Worker Heartbeat Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-pools/worker-heartbeat post /api/work_pools/{work_pool_name}/workers/heartbeat # Create Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/create-work-queue post /api/work_queues/ Creates a new work queue. If a work queue with the same name already exists, an error will be raised. For more information, see https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#work-queues. # Delete Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/delete-work-queue delete /api/work_queues/{id} Delete a work queue by id. # Read Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue get /api/work_queues/{id} Get a work queue by id. # Read Work Queue By Name Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-by-name get /api/work_queues/name/{name} Get a work queue by id. # Read Work Queue Runs Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-runs post /api/work_queues/{id}/get_runs Get flow runs from the work queue. # Read Work Queue Status Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-status get /api/work_queues/{id}/status Get the status of a work queue. # Read Work Queues Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queues post /api/work_queues/filter Query for work queues. 
# Update Work Queue Source: https://docs-3.prefect.io/v3/api-ref/rest-api/server/work-queues/update-work-queue patch /api/work_queues/{id} Updates an existing work queue. # Settings reference Source: https://docs-3.prefect.io/v3/api-ref/settings-ref Reference for all available settings for Prefect. {/* This page is generated by `scripts/generate_settings_ref.py`. Update the generation script to update this page. */} To use `prefect.toml` or `pyproject.toml` for configuration, `prefect>=3.1` must be installed. ## Root Settings ### `home` The path to the Prefect home directory. Defaults to \~/.prefect **Type**: `string` **Default**: `~/.prefect` **TOML dotted key path**: `home` **Supported environment variables**: `PREFECT_HOME` ### `profiles_path` The path to a profiles configuration file. Supports \$PREFECT\_HOME templating. Defaults to \$PREFECT\_HOME/profiles.toml. **Type**: `string` **TOML dotted key path**: `profiles_path` **Supported environment variables**: `PREFECT_PROFILES_PATH` ### `debug_mode` If `True`, enables debug mode which may provide additional logging and debugging features. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `debug_mode` **Supported environment variables**: `PREFECT_DEBUG_MODE` ### `api` **Type**: [APISettings](#apisettings) **TOML dotted key path**: `api` ### `cli` **Type**: [CLISettings](#clisettings) **TOML dotted key path**: `cli` ### `client` **Type**: [ClientSettings](#clientsettings) **TOML dotted key path**: `client` ### `cloud` **Type**: [CloudSettings](#cloudsettings) **TOML dotted key path**: `cloud` ### `deployments` **Type**: [DeploymentsSettings](#deploymentssettings) **TOML dotted key path**: `deployments` ### `experiments` Settings for controlling experimental features **Type**: [ExperimentsSettings](#experimentssettings) **TOML dotted key path**: `experiments` ### `flows` **Type**: [FlowsSettings](#flowssettings) **TOML dotted key path**: `flows` ### `internal` Settings for internal Prefect machinery **Type**: [InternalSettings](#internalsettings) **TOML dotted key path**: `internal` ### `logging` **Type**: [LoggingSettings](#loggingsettings) **TOML dotted key path**: `logging` ### `results` **Type**: [ResultsSettings](#resultssettings) **TOML dotted key path**: `results` ### `runner` **Type**: [RunnerSettings](#runnersettings) **TOML dotted key path**: `runner` ### `server` **Type**: [ServerSettings](#serversettings) **TOML dotted key path**: `server` ### `tasks` Settings for controlling task behavior **Type**: [TasksSettings](#taskssettings) **TOML dotted key path**: `tasks` ### `testing` Settings used during testing **Type**: [TestingSettings](#testingsettings) **TOML dotted key path**: `testing` ### `worker` Settings for controlling worker behavior **Type**: [WorkerSettings](#workersettings) **TOML dotted key path**: `worker` ### `ui_url` The URL of the Prefect UI. If not set, the client will attempt to infer it. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `ui_url` **Supported environment variables**: `PREFECT_UI_URL` ### `silence_api_url_misconfiguration` If `True`, silences the warning shown when `PREFECT_API_URL` appears to be misconfigured. Set this to `True` when `PREFECT_API_URL` is intentionally set to a custom URL, for example one served through a reverse proxy, and the warning should be suppressed. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `silence_api_url_misconfiguration` **Supported environment variables**: `PREFECT_SILENCE_API_URL_MISCONFIGURATION`
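Each setting below lists the TOML dotted key path used by `prefect.toml` or `pyproject.toml` as well as the environment variables that map to it. As a quick sketch of how the same setting can be supplied in different ways (the URL is a placeholder):

```bash
# Illustrative only: three equivalent ways to point Prefect at a self-hosted API.

# 1. Environment variable for the current shell (takes precedence over stored configuration):
export PREFECT_API_URL="http://127.0.0.1:4200/api"

# 2. Persist the value to the active profile:
prefect config set PREFECT_API_URL="http://127.0.0.1:4200/api"

# 3. Or set the TOML dotted key `api.url` in prefect.toml, or under the
#    [tool.prefect] table in pyproject.toml (requires prefect>=3.1).
```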
*** ## APISettings Settings for interacting with the Prefect API ### `url` The URL of the Prefect API. If not set, the client will attempt to infer it. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.url` **Supported environment variables**: `PREFECT_API_URL` ### `auth_string` The auth string used for basic authentication with a self-hosted Prefect API. Should be kept secret. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.auth_string` **Supported environment variables**: `PREFECT_API_AUTH_STRING` ### `key` The API key used for authentication with the Prefect API. Should be kept secret. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.key` **Supported environment variables**: `PREFECT_API_KEY` ### `tls_insecure_skip_verify` If `True`, disables SSL checking to allow insecure requests. Setting this to `True` is recommended only during development, for example when using self-signed certificates. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `api.tls_insecure_skip_verify` **Supported environment variables**: `PREFECT_API_TLS_INSECURE_SKIP_VERIFY` ### `ssl_cert_file` The path to an SSL certificate file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.ssl_cert_file` **Supported environment variables**: `PREFECT_API_SSL_CERT_FILE` ### `enable_http2` If `True`, enables support for HTTP/2 when communicating with the API. If the API does not support HTTP/2, this will have no effect and connections will be made via HTTP/1.1. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `api.enable_http2` **Supported environment variables**: `PREFECT_API_ENABLE_HTTP2` ### `request_timeout` The default timeout for requests to the API. **Type**: `number` **Default**: `60.0` **TOML dotted key path**: `api.request_timeout` **Supported environment variables**: `PREFECT_API_REQUEST_TIMEOUT` *** ## CLISettings Settings for controlling CLI behavior ### `colors` If `True`, use colors in CLI output. If `False`, output will not include color codes. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `cli.colors` **Supported environment variables**: `PREFECT_CLI_COLORS` ### `prompt` If `True`, use interactive prompts in CLI commands. If `False`, no interactive prompts will be used. If `None`, the value will be dynamically determined based on the presence of an interactive-enabled terminal. **Type**: `boolean | None` **Default**: `None` **TOML dotted key path**: `cli.prompt` **Supported environment variables**: `PREFECT_CLI_PROMPT` ### `wrap_lines` If `True`, wrap text by inserting new lines in long lines in CLI output. If `False`, output will not be wrapped. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `cli.wrap_lines` **Supported environment variables**: `PREFECT_CLI_WRAP_LINES` *** ## ClientMetricsSettings Settings for controlling metrics reporting from the client ### `enabled` Whether or not to enable Prometheus metrics in the client. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `client.metrics.enabled` **Supported environment variables**: `PREFECT_CLIENT_METRICS_ENABLED`, `PREFECT_CLIENT_ENABLE_METRICS` ### `port` The port to expose the client Prometheus metrics on.
**Type**: `integer` **Default**: `4201` **TOML dotted key path**: `client.metrics.port` **Supported environment variables**: `PREFECT_CLIENT_METRICS_PORT` *** ## ClientSettings Settings for controlling API client behavior ### `max_retries` The maximum number of retries to perform on failed HTTP requests. Defaults to 5. Set to 0 to disable retries. See `PREFECT_CLIENT_RETRY_EXTRA_CODES` for details on which HTTP status codes are retried. **Type**: `integer` **Default**: `5` **Constraints**: * Minimum: 0 **TOML dotted key path**: `client.max_retries` **Supported environment variables**: `PREFECT_CLIENT_MAX_RETRIES` ### `retry_jitter_factor` A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter. Set to 0 to disable jitter. See `clamped_poisson_interval` for details on the how jitter can affect retry lengths. **Type**: `number` **Default**: `0.2` **Constraints**: * Minimum: 0.0 **TOML dotted key path**: `client.retry_jitter_factor` **Supported environment variables**: `PREFECT_CLIENT_RETRY_JITTER_FACTOR` ### `retry_extra_codes` A list of extra HTTP status codes to retry on. Defaults to an empty list. 429, 502 and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior. **Type**: `string | integer | array | None` **Examples**: * `"404,429,503"` * `"429"` * `[404, 429, 503]` **TOML dotted key path**: `client.retry_extra_codes` **Supported environment variables**: `PREFECT_CLIENT_RETRY_EXTRA_CODES` ### `csrf_support_enabled` Determines if CSRF token handling is active in the Prefect client for API requests. When enabled (`True`), the client automatically manages CSRF tokens by retrieving, storing, and including them in applicable state-changing requests **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `client.csrf_support_enabled` **Supported environment variables**: `PREFECT_CLIENT_CSRF_SUPPORT_ENABLED` ### `custom_headers` Custom HTTP headers to include with every API request to the Prefect server. Headers are specified as key-value pairs. Note that headers like 'User-Agent' and CSRF-related headers are managed by Prefect and cannot be overridden. **Type**: `object` **Examples**: * `{'X-Custom-Header': 'value'}` * `{'Authorization': 'Bearer token'}` **TOML dotted key path**: `client.custom_headers` **Supported environment variables**: `PREFECT_CLIENT_CUSTOM_HEADERS` ### `metrics` **Type**: [ClientMetricsSettings](#clientmetricssettings) **TOML dotted key path**: `client.metrics` *** ## CloudSettings Settings for interacting with Prefect Cloud ### `api_url` API URL for Prefect Cloud. Used for authentication with Prefect Cloud. **Type**: `string` **Default**: `https://api.prefect.cloud/api` **TOML dotted key path**: `cloud.api_url` **Supported environment variables**: `PREFECT_CLOUD_API_URL` ### `enable_orchestration_telemetry` Whether or not to enable orchestration telemetry. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `cloud.enable_orchestration_telemetry` **Supported environment variables**: `PREFECT_CLOUD_ENABLE_ORCHESTRATION_TELEMETRY` ### `ui_url` The URL of the Prefect Cloud UI. If not set, the client will attempt to infer it. 
**Type**: `string | None` **Default**: `None` **TOML dotted key path**: `cloud.ui_url` **Supported environment variables**: `PREFECT_CLOUD_UI_URL` *** ## DeploymentsSettings Settings for configuring deployments defaults ### `default_work_pool_name` The default work pool to use when creating deployments. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `deployments.default_work_pool_name` **Supported environment variables**: `PREFECT_DEPLOYMENTS_DEFAULT_WORK_POOL_NAME`, `PREFECT_DEFAULT_WORK_POOL_NAME` ### `default_docker_build_namespace` The default Docker namespace to use when building images. **Type**: `string | None` **Default**: `None` **Examples**: * `"my-dockerhub-registry"` * `"4999999999999.dkr.ecr.us-east-2.amazonaws.com/my-ecr-repo"` **TOML dotted key path**: `deployments.default_docker_build_namespace` **Supported environment variables**: `PREFECT_DEPLOYMENTS_DEFAULT_DOCKER_BUILD_NAMESPACE`, `PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE` *** ## ExperimentsSettings Settings for configuring experimental features ### `warn` If `True`, warn on usage of experimental features. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `experiments.warn` **Supported environment variables**: `PREFECT_EXPERIMENTS_WARN`, `PREFECT_EXPERIMENTAL_WARN` ### `lineage_events_enabled` If `True`, enables emitting lineage events. Set to `False` to disable lineage event emission. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `experiments.lineage_events_enabled` **Supported environment variables**: `PREFECT_EXPERIMENTS_LINEAGE_EVENTS_ENABLED` *** ## FlowsSettings Settings for controlling flow behavior ### `default_retries` This value sets the default number of retries for all flows. **Type**: `integer` **Default**: `0` **Constraints**: * Minimum: 0 **TOML dotted key path**: `flows.default_retries` **Supported environment variables**: `PREFECT_FLOWS_DEFAULT_RETRIES`, `PREFECT_FLOW_DEFAULT_RETRIES` ### `default_retry_delay_seconds` This value sets the default retry delay seconds for all flows. **Type**: `integer | number | array` **Default**: `0` **TOML dotted key path**: `flows.default_retry_delay_seconds` **Supported environment variables**: `PREFECT_FLOWS_DEFAULT_RETRY_DELAY_SECONDS`, `PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS` *** ## InternalSettings ### `logging_level` The default logging level for Prefect's internal machinery loggers. **Type**: `string` **Default**: `ERROR` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `internal.logging_level` **Supported environment variables**: `PREFECT_INTERNAL_LOGGING_LEVEL`, `PREFECT_LOGGING_INTERNAL_LEVEL` *** ## LoggingSettings Settings for controlling logging behavior ### `level` The default logging level for Prefect loggers. **Type**: `string` **Default**: `INFO` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `logging.level` **Supported environment variables**: `PREFECT_LOGGING_LEVEL` ### `config_path` A path to a logging configuration file. Defaults to \$PREFECT\_HOME/logging.yml **Type**: `string` **TOML dotted key path**: `logging.config_path` **Supported environment variables**: `PREFECT_LOGGING_CONFIG_PATH`, `PREFECT_LOGGING_SETTINGS_PATH` ### `extra_loggers` Additional loggers to attach to Prefect logging at runtime. 
**Type**: `string | array | None` **Default**: `None` **TOML dotted key path**: `logging.extra_loggers` **Supported environment variables**: `PREFECT_LOGGING_EXTRA_LOGGERS` ### `log_prints` If `True`, `print` statements in flows and tasks will be redirected to the Prefect logger for the given run. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `logging.log_prints` **Supported environment variables**: `PREFECT_LOGGING_LOG_PRINTS` ### `colors` If `True`, use colors in CLI output. If `False`, output will not include color codes. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `logging.colors` **Supported environment variables**: `PREFECT_LOGGING_COLORS` ### `markup` Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. `[red]This is a red message.[/red]`. The downside is that, if enabled, strings containing literal square brackets may be misinterpreted as style tags and lead to incomplete output; for example, a message containing `[some-tag]` may have that bracketed text treated as a style and dropped from the rendered log line. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `logging.markup` **Supported environment variables**: `PREFECT_LOGGING_MARKUP` ### `to_api` **Type**: [LoggingToAPISettings](#loggingtoapisettings) **TOML dotted key path**: `logging.to_api` *** ## LoggingToAPISettings Settings for controlling logging to the API ### `enabled` If `True`, logs will be sent to the API. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `logging.to_api.enabled` **Supported environment variables**: `PREFECT_LOGGING_TO_API_ENABLED` ### `batch_interval` The number of seconds between batched writes of logs to the API. **Type**: `number` **Default**: `2.0` **TOML dotted key path**: `logging.to_api.batch_interval` **Supported environment variables**: `PREFECT_LOGGING_TO_API_BATCH_INTERVAL` ### `batch_size` The number of logs to batch before sending to the API. **Type**: `integer` **Default**: `4000000` **TOML dotted key path**: `logging.to_api.batch_size` **Supported environment variables**: `PREFECT_LOGGING_TO_API_BATCH_SIZE` ### `max_log_size` The maximum size in bytes for a single log. **Type**: `integer` **Default**: `1000000` **TOML dotted key path**: `logging.to_api.max_log_size` **Supported environment variables**: `PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` ### `when_missing_flow` Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow. All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing. The following options are available: * "warn": Log a warning message. * "error": Raise an error. * "ignore": Do not log a warning message or raise an error. **Type**: `string` **Default**: `warn` **Constraints**: * Allowed values: 'warn', 'error', 'ignore' **TOML dotted key path**: `logging.to_api.when_missing_flow` **Supported environment variables**: `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW` *** ## ResultsSettings Settings for controlling result storage behavior ### `default_serializer` The default serializer to use when not otherwise specified.
**Type**: `string` **Default**: `pickle` **TOML dotted key path**: `results.default_serializer` **Supported environment variables**: `PREFECT_RESULTS_DEFAULT_SERIALIZER` ### `persist_by_default` The default setting for persisting results when not otherwise specified. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `results.persist_by_default` **Supported environment variables**: `PREFECT_RESULTS_PERSIST_BY_DEFAULT` ### `default_storage_block` The `block-type/block-document` slug of a block to use as the default result storage. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `results.default_storage_block` **Supported environment variables**: `PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK`, `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK` ### `local_storage_path` The default location for locally persisted results. Defaults to \$PREFECT\_HOME/storage. **Type**: `string` **TOML dotted key path**: `results.local_storage_path` **Supported environment variables**: `PREFECT_RESULTS_LOCAL_STORAGE_PATH`, `PREFECT_LOCAL_STORAGE_PATH` *** ## RunnerServerSettings Settings for controlling runner server behavior ### `enable` Whether or not to enable the runner's webserver. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `runner.server.enable` **Supported environment variables**: `PREFECT_RUNNER_SERVER_ENABLE` ### `host` The host address the runner's webserver should bind to. **Type**: `string` **Default**: `localhost` **TOML dotted key path**: `runner.server.host` **Supported environment variables**: `PREFECT_RUNNER_SERVER_HOST` ### `port` The port the runner's webserver should bind to. **Type**: `integer` **Default**: `8080` **TOML dotted key path**: `runner.server.port` **Supported environment variables**: `PREFECT_RUNNER_SERVER_PORT` ### `log_level` The log level of the runner's webserver. **Type**: `string` **Default**: `ERROR` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `runner.server.log_level` **Supported environment variables**: `PREFECT_RUNNER_SERVER_LOG_LEVEL` ### `missed_polls_tolerance` Number of missed polls before a runner is considered unhealthy by its webserver. **Type**: `integer` **Default**: `2` **TOML dotted key path**: `runner.server.missed_polls_tolerance` **Supported environment variables**: `PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE` *** ## RunnerSettings Settings for controlling runner behavior ### `process_limit` Maximum number of processes a runner will execute in parallel. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `runner.process_limit` **Supported environment variables**: `PREFECT_RUNNER_PROCESS_LIMIT` ### `poll_frequency` Number of seconds a runner should wait between queries for scheduled work. **Type**: `integer` **Default**: `10` **TOML dotted key path**: `runner.poll_frequency` **Supported environment variables**: `PREFECT_RUNNER_POLL_FREQUENCY` ### `heartbeat_frequency` Number of seconds a runner should wait between heartbeats for flow runs. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `runner.heartbeat_frequency` **Supported environment variables**: `PREFECT_RUNNER_HEARTBEAT_FREQUENCY` ### `server` **Type**: [RunnerServerSettings](#runnerserversettings) **TOML dotted key path**: `runner.server` *** ## SQLAlchemyConnectArgsSettings Settings for controlling SQLAlchemy connection behavior; note that these settings only take effect when using a PostgreSQL database. 
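For a self-hosted server backed by PostgreSQL, these connection arguments are typically combined with the pool and connection settings documented further below. A rough sketch via environment variables (the hostname, credentials, and numbers are placeholders, not recommendations):

```bash
# Illustrative only: point a self-hosted Prefect server at PostgreSQL and tune its pool.
export PREFECT_SERVER_DATABASE_CONNECTION_URL="postgresql+asyncpg://prefect:${DB_PASSWORD}@db.example.com:5432/prefect"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_APPLICATION_NAME="prefect-server"
export PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE=10
export PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW=20

# Required when connecting through PgBouncer in transaction mode:
export PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_STATEMENT_CACHE_SIZE=0
```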
### `application_name` Controls the application\_name field for connections opened from the connection pool when using a PostgreSQL database with the Prefect backend. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.application_name` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_APPLICATION_NAME` ### `statement_cache_size` Controls statement cache size for PostgreSQL connections. Setting this to 0 is required when using PgBouncer in transaction mode. Defaults to None. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.statement_cache_size` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_STATEMENT_CACHE_SIZE` ### `prepared_statement_cache_size` Controls the size of the statement cache for PostgreSQL connections. When set to 0, statement caching is disabled. Defaults to None to use SQLAlchemy's default behavior. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.prepared_statement_cache_size` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_PREPARED_STATEMENT_CACHE_SIZE` ### `tls` Settings for controlling SQLAlchemy mTLS behavior **Type**: [SQLAlchemyTLSSettings](#sqlalchemytlssettings) **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls` *** ## SQLAlchemySettings Settings for controlling SQLAlchemy behavior; note that these settings only take effect when using a PostgreSQL database. ### `connect_args` Settings for controlling SQLAlchemy connection behavior **Type**: [SQLAlchemyConnectArgsSettings](#sqlalchemyconnectargssettings) **TOML dotted key path**: `server.database.sqlalchemy.connect_args` ### `pool_size` Controls connection pool size of database connection pools from the Prefect backend. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.database.sqlalchemy.pool_size` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE`, `PREFECT_SQLALCHEMY_POOL_SIZE` ### `pool_recycle` This setting causes the pool to recycle connections after the given number of seconds has passed; set it to -1 to avoid recycling entirely. **Type**: `integer` **Default**: `3600` **TOML dotted key path**: `server.database.sqlalchemy.pool_recycle` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE` ### `pool_timeout` Number of seconds to wait before giving up on getting a connection from the pool. Defaults to 30 seconds. **Type**: `number | None` **Default**: `30.0` **TOML dotted key path**: `server.database.sqlalchemy.pool_timeout` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT` ### `max_overflow` Controls maximum overflow of the connection pool. To prevent overflow, set to -1. **Type**: `integer` **Default**: `10` **TOML dotted key path**: `server.database.sqlalchemy.max_overflow` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW`, `PREFECT_SQLALCHEMY_MAX_OVERFLOW` *** ## SQLAlchemyTLSSettings Settings for controlling SQLAlchemy mTLS context when using a PostgreSQL database. ### `enabled` Controls whether connected to mTLS enabled PostgreSQL when using a PostgreSQL database with the Prefect backend. 
**Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.enabled` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_ENABLED` ### `ca_file` This configuration settings option specifies the path to PostgreSQL client certificate authority file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.ca_file` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_CA_FILE` ### `cert_file` This configuration settings option specifies the path to PostgreSQL client certificate file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.cert_file` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_CERT_FILE` ### `key_file` This configuration settings option specifies the path to PostgreSQL client key file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.key_file` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_KEY_FILE` ### `check_hostname` This configuration settings option specifies whether to verify PostgreSQL server hostname. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.check_hostname` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_CHECK_HOSTNAME` *** ## ServerAPISettings Settings for controlling API server behavior ### `auth_string` A string to use for basic authentication with the API in the form 'user:password'. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.api.auth_string` **Supported environment variables**: `PREFECT_SERVER_API_AUTH_STRING` ### `host` The API's host address (defaults to `127.0.0.1`). **Type**: `string` **Default**: `127.0.0.1` **TOML dotted key path**: `server.api.host` **Supported environment variables**: `PREFECT_SERVER_API_HOST` ### `port` The API's port address (defaults to `4200`). **Type**: `integer` **Default**: `4200` **TOML dotted key path**: `server.api.port` **Supported environment variables**: `PREFECT_SERVER_API_PORT` ### `base_path` The base URL path to serve the API under. **Type**: `string | None` **Default**: `None` **Examples**: * `"/v2/api"` **TOML dotted key path**: `server.api.base_path` **Supported environment variables**: `PREFECT_SERVER_API_BASE_PATH` ### `default_limit` The default limit applied to queries that can return multiple objects, such as `POST /flow_runs/filter`. **Type**: `integer` **Default**: `200` **TOML dotted key path**: `server.api.default_limit` **Supported environment variables**: `PREFECT_SERVER_API_DEFAULT_LIMIT`, `PREFECT_API_DEFAULT_LIMIT` ### `keepalive_timeout` The API's keep alive timeout (defaults to `5`). Refer to [https://www.uvicorn.org/settings/#timeouts](https://www.uvicorn.org/settings/#timeouts) for details. When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout. Note this setting only applies when calling `prefect server start`; if hosting the API with another tool you will need to configure this there instead. 
**Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.api.keepalive_timeout` **Supported environment variables**: `PREFECT_SERVER_API_KEEPALIVE_TIMEOUT` ### `csrf_protection_enabled` Controls the activation of CSRF protection for the Prefect server API. When enabled (`True`), the server enforces CSRF validation checks on incoming state-changing requests (POST, PUT, PATCH, DELETE), requiring a valid CSRF token to be included in the request headers or body. This adds a layer of security by preventing unauthorized or malicious sites from making requests on behalf of authenticated users. It is recommended to enable this setting in production environments where the API is exposed to web clients to safeguard against CSRF attacks. Note: Enabling this setting requires corresponding support in the client for CSRF token management. See PREFECT\_CLIENT\_CSRF\_SUPPORT\_ENABLED for more. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.api.csrf_protection_enabled` **Supported environment variables**: `PREFECT_SERVER_API_CSRF_PROTECTION_ENABLED`, `PREFECT_SERVER_CSRF_PROTECTION_ENABLED` ### `csrf_token_expiration` Specifies the duration for which a CSRF token remains valid after being issued by the server. The default expiration time is set to 1 hour, which offers a reasonable compromise. Adjust this setting based on your specific security requirements and usage patterns. **Type**: `string` **Default**: `PT1H` **TOML dotted key path**: `server.api.csrf_token_expiration` **Supported environment variables**: `PREFECT_SERVER_API_CSRF_TOKEN_EXPIRATION`, `PREFECT_SERVER_CSRF_TOKEN_EXPIRATION` ### `cors_allowed_origins` A comma-separated list of origins that are authorized to make cross-origin requests to the API. By default, this is set to `*`, which allows requests from all origins. **Type**: `string` **Default**: `*` **TOML dotted key path**: `server.api.cors_allowed_origins` **Supported environment variables**: `PREFECT_SERVER_API_CORS_ALLOWED_ORIGINS`, `PREFECT_SERVER_CORS_ALLOWED_ORIGINS` ### `cors_allowed_methods` A comma-separated list of methods that are authorized to make cross-origin requests to the API. By default, this is set to `*`, which allows requests from all methods. **Type**: `string` **Default**: `*` **TOML dotted key path**: `server.api.cors_allowed_methods` **Supported environment variables**: `PREFECT_SERVER_API_CORS_ALLOWED_METHODS`, `PREFECT_SERVER_CORS_ALLOWED_METHODS` ### `cors_allowed_headers` A comma-separated list of headers that are authorized to make cross-origin requests to the API. By default, this is set to `*`, which allows requests from all headers. **Type**: `string` **Default**: `*` **TOML dotted key path**: `server.api.cors_allowed_headers` **Supported environment variables**: `PREFECT_SERVER_API_CORS_ALLOWED_HEADERS`, `PREFECT_SERVER_CORS_ALLOWED_HEADERS` *** ## ServerConcurrencySettings ### `lease_storage` The module to use for storing concurrency limit leases. **Type**: `string` **Default**: `prefect.server.concurrency.lease_storage.memory` **TOML dotted key path**: `server.concurrency.lease_storage` **Supported environment variables**: `PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE` *** ## ServerDatabaseSettings Settings for controlling server database behavior ### `sqlalchemy` Settings for controlling SQLAlchemy behavior **Type**: [SQLAlchemySettings](#sqlalchemysettings) **TOML dotted key path**: `server.database.sqlalchemy` ### `connection_url` A database connection URL in a SQLAlchemy-compatible format. 
Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use `sqlite+aiosqlite` and for Postgres use `postgresql+asyncpg`. SQLite in-memory databases can be used by providing the url `sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false`, which will allow the database to be accessed by multiple threads. Note that in-memory databases can not be accessed from multiple processes and should only be used for simple tests. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.connection_url` **Supported environment variables**: `PREFECT_SERVER_DATABASE_CONNECTION_URL`, `PREFECT_API_DATABASE_CONNECTION_URL` ### `driver` The database driver to use when connecting to the database. If not set, the driver will be inferred from the connection URL. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.driver` **Supported environment variables**: `PREFECT_SERVER_DATABASE_DRIVER`, `PREFECT_API_DATABASE_DRIVER` ### `host` The database server host. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.host` **Supported environment variables**: `PREFECT_SERVER_DATABASE_HOST`, `PREFECT_API_DATABASE_HOST` ### `port` The database server port. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `server.database.port` **Supported environment variables**: `PREFECT_SERVER_DATABASE_PORT`, `PREFECT_API_DATABASE_PORT` ### `user` The user to use when connecting to the database. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.user` **Supported environment variables**: `PREFECT_SERVER_DATABASE_USER`, `PREFECT_API_DATABASE_USER` ### `name` The name of the Prefect database on the remote server, or the path to the database file for SQLite. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.name` **Supported environment variables**: `PREFECT_SERVER_DATABASE_NAME`, `PREFECT_API_DATABASE_NAME` ### `password` The password to use when connecting to the database. Should be kept secret. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.password` **Supported environment variables**: `PREFECT_SERVER_DATABASE_PASSWORD`, `PREFECT_API_DATABASE_PASSWORD` ### `echo` If `True`, SQLAlchemy will log all SQL issued to the database. Defaults to `False`. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.database.echo` **Supported environment variables**: `PREFECT_SERVER_DATABASE_ECHO`, `PREFECT_API_DATABASE_ECHO` ### `migrate_on_start` If `True`, the database will be migrated on application startup. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.database.migrate_on_start` **Supported environment variables**: `PREFECT_SERVER_DATABASE_MIGRATE_ON_START`, `PREFECT_API_DATABASE_MIGRATE_ON_START` ### `timeout` A statement timeout, in seconds, applied to all database interactions made by the Prefect backend. Defaults to 10 seconds. **Type**: `number | None` **Default**: `10.0` **TOML dotted key path**: `server.database.timeout` **Supported environment variables**: `PREFECT_SERVER_DATABASE_TIMEOUT`, `PREFECT_API_DATABASE_TIMEOUT` ### `connection_timeout` A connection timeout, in seconds, applied to database connections. Defaults to `5`. 
**Type**: `number | None` **Default**: `5.0` **TOML dotted key path**: `server.database.connection_timeout` **Supported environment variables**: `PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT`, `PREFECT_API_DATABASE_CONNECTION_TIMEOUT` *** ## ServerDeploymentsSettings ### `concurrency_slot_wait_seconds` The number of seconds to wait before retrying when a deployment flow run cannot secure a concurrency slot from the server. **Type**: `number` **Default**: `30.0` **Constraints**: * Minimum: 0.0 **TOML dotted key path**: `server.deployments.concurrency_slot_wait_seconds` **Supported environment variables**: `PREFECT_SERVER_DEPLOYMENTS_CONCURRENCY_SLOT_WAIT_SECONDS`, `PREFECT_DEPLOYMENT_CONCURRENCY_SLOT_WAIT_SECONDS` *** ## ServerEphemeralSettings Settings for controlling ephemeral server behavior ### `enabled` Controls whether or not a subprocess server can be started when no API URL is provided. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.ephemeral.enabled` **Supported environment variables**: `PREFECT_SERVER_EPHEMERAL_ENABLED`, `PREFECT_SERVER_ALLOW_EPHEMERAL_MODE` ### `startup_timeout_seconds` The number of seconds to wait for the server to start when ephemeral mode is enabled. Defaults to `20`. **Type**: `integer` **Default**: `20` **TOML dotted key path**: `server.ephemeral.startup_timeout_seconds` **Supported environment variables**: `PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS` *** ## ServerEventsSettings Settings for controlling behavior of the events subsystem ### `stream_out_enabled` Whether or not to stream events out to the API via websockets. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.events.stream_out_enabled` **Supported environment variables**: `PREFECT_SERVER_EVENTS_STREAM_OUT_ENABLED`, `PREFECT_API_EVENTS_STREAM_OUT_ENABLED` ### `related_resource_cache_ttl` The number of seconds to cache related resources for in the API. **Type**: `string` **Default**: `PT5M` **TOML dotted key path**: `server.events.related_resource_cache_ttl` **Supported environment variables**: `PREFECT_SERVER_EVENTS_RELATED_RESOURCE_CACHE_TTL`, `PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL` ### `maximum_labels_per_resource` The maximum number of labels a resource may have. **Type**: `integer` **Default**: `500` **TOML dotted key path**: `server.events.maximum_labels_per_resource` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_LABELS_PER_RESOURCE`, `PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE` ### `maximum_related_resources` The maximum number of related resources an Event may have. 
**Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.events.maximum_related_resources` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_RELATED_RESOURCES`, `PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES` ### `maximum_size_bytes` The maximum size of an Event when serialized to JSON **Type**: `integer` **Default**: `1500000` **TOML dotted key path**: `server.events.maximum_size_bytes` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_SIZE_BYTES`, `PREFECT_EVENTS_MAXIMUM_SIZE_BYTES` ### `expired_bucket_buffer` The amount of time to retain expired automation buckets **Type**: `string` **Default**: `PT1M` **TOML dotted key path**: `server.events.expired_bucket_buffer` **Supported environment variables**: `PREFECT_SERVER_EVENTS_EXPIRED_BUCKET_BUFFER`, `PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER` ### `proactive_granularity` How frequently proactive automations are evaluated **Type**: `string` **Default**: `PT5S` **TOML dotted key path**: `server.events.proactive_granularity` **Supported environment variables**: `PREFECT_SERVER_EVENTS_PROACTIVE_GRANULARITY`, `PREFECT_EVENTS_PROACTIVE_GRANULARITY` ### `retention_period` The amount of time to retain events in the database. **Type**: `string` **Default**: `P7D` **TOML dotted key path**: `server.events.retention_period` **Supported environment variables**: `PREFECT_SERVER_EVENTS_RETENTION_PERIOD`, `PREFECT_EVENTS_RETENTION_PERIOD` ### `maximum_websocket_backfill` The maximum range to look back for backfilling events for a websocket subscriber. **Type**: `string` **Default**: `PT15M` **TOML dotted key path**: `server.events.maximum_websocket_backfill` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL`, `PREFECT_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL` ### `websocket_backfill_page_size` The page size for the queries to backfill events for websocket subscribers. **Type**: `integer` **Default**: `250` **TOML dotted key path**: `server.events.websocket_backfill_page_size` **Supported environment variables**: `PREFECT_SERVER_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE`, `PREFECT_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE` ### `messaging_broker` Which message broker implementation to use for the messaging system, should point to a module that exports a Publisher and Consumer class. **Type**: `string` **Default**: `prefect.server.utilities.messaging.memory` **TOML dotted key path**: `server.events.messaging_broker` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MESSAGING_BROKER`, `PREFECT_MESSAGING_BROKER` ### `messaging_cache` Which cache implementation to use for the events system. Should point to a module that exports a Cache class. **Type**: `string` **Default**: `prefect.server.utilities.messaging.memory` **TOML dotted key path**: `server.events.messaging_cache` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MESSAGING_CACHE`, `PREFECT_MESSAGING_CACHE` ### `causal_ordering` Which causal ordering implementation to use for the events system. Should point to a module that exports a CausalOrdering class. **Type**: `string` **Default**: `prefect.server.events.ordering.memory` **TOML dotted key path**: `server.events.causal_ordering` **Supported environment variables**: `PREFECT_SERVER_EVENTS_CAUSAL_ORDERING` ### `maximum_event_name_length` The maximum length of an event name. 
**Type**: `integer` **Default**: `1024` **TOML dotted key path**: `server.events.maximum_event_name_length` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_EVENT_NAME_LENGTH` *** ## ServerFlowRunGraphSettings Settings for controlling behavior of the flow run graph ### `max_nodes` The maximum size of a flow run graph on the v2 API **Type**: `integer` **Default**: `10000` **TOML dotted key path**: `server.flow_run_graph.max_nodes` **Supported environment variables**: `PREFECT_SERVER_FLOW_RUN_GRAPH_MAX_NODES`, `PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES` ### `max_artifacts` The maximum number of artifacts to show on a flow run graph on the v2 API **Type**: `integer` **Default**: `10000` **TOML dotted key path**: `server.flow_run_graph.max_artifacts` **Supported environment variables**: `PREFECT_SERVER_FLOW_RUN_GRAPH_MAX_ARTIFACTS`, `PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS` *** ## ServerLogsSettings Settings for controlling behavior of the logs subsystem ### `stream_out_enabled` Whether or not to stream logs out to the API via websockets. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.logs.stream_out_enabled` **Supported environment variables**: `PREFECT_SERVER_LOGS_STREAM_OUT_ENABLED` ### `stream_publishing_enabled` Whether or not to publish logs to the streaming system. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.logs.stream_publishing_enabled` **Supported environment variables**: `PREFECT_SERVER_LOGS_STREAM_PUBLISHING_ENABLED` *** ## ServerServicesCancellationCleanupSettings Settings for controlling the cancellation cleanup service ### `enabled` Whether or not to start the cancellation cleanup service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.cancellation_cleanup.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_CANCELLATION_CLEANUP_ENABLED`, `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED` ### `loop_seconds` The cancellation cleanup service will look for non-terminal tasks and subflows this often. Defaults to `20`. **Type**: `number` **Default**: `20` **TOML dotted key path**: `server.services.cancellation_cleanup.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS`, `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS` *** ## ServerServicesEventLoggerSettings Settings for controlling the event logger service ### `enabled` Whether or not to start the event logger service in the server application. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.services.event_logger.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_LOGGER_ENABLED`, `PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED` *** ## ServerServicesEventPersisterSettings Settings for controlling the event persister service ### `enabled` Whether or not to start the event persister service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.event_persister.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_ENABLED`, `PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED` ### `batch_size` The number of events the event persister will attempt to insert in one batch. 
**Type**: `integer` **Default**: `20` **TOML dotted key path**: `server.services.event_persister.batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_BATCH_SIZE`, `PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE` ### `flush_interval` The maximum number of seconds between flushes of the event persister. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.event_persister.flush_interval` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL`, `PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL` ### `batch_size_delete` The number of expired events and event resources the event persister will attempt to delete in one batch. **Type**: `integer` **Default**: `10000` **TOML dotted key path**: `server.services.event_persister.batch_size_delete` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_BATCH_SIZE_DELETE` *** ## ServerServicesForemanSettings Settings for controlling the foreman service ### `enabled` Whether or not to start the foreman service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.foreman.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_ENABLED`, `PREFECT_API_SERVICES_FOREMAN_ENABLED` ### `loop_seconds` The foreman service will check for offline workers this often. Defaults to `15`. **Type**: `number` **Default**: `15` **TOML dotted key path**: `server.services.foreman.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_LOOP_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS` ### `inactivity_heartbeat_multiple` The number of heartbeats that must be missed before a worker is marked as offline. Defaults to `3`. **Type**: `integer` **Default**: `3` **TOML dotted key path**: `server.services.foreman.inactivity_heartbeat_multiple` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE`, `PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE` ### `fallback_heartbeat_interval_seconds` The number of seconds to use for online/offline evaluation if a worker's heartbeat interval is not set. Defaults to `30`. **Type**: `integer` **Default**: `30` **TOML dotted key path**: `server.services.foreman.fallback_heartbeat_interval_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS` ### `deployment_last_polled_timeout_seconds` The number of seconds before a deployment is marked as not ready if it has not been polled. Defaults to `60`. **Type**: `integer` **Default**: `60` **TOML dotted key path**: `server.services.foreman.deployment_last_polled_timeout_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS` ### `work_queue_last_polled_timeout_seconds` The number of seconds before a work queue is marked as not ready if it has not been polled. Defaults to `60`. 
**Type**: `integer` **Default**: `60` **TOML dotted key path**: `server.services.foreman.work_queue_last_polled_timeout_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS` *** ## ServerServicesLateRunsSettings Settings for controlling the late runs service ### `enabled` Whether or not to start the late runs service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.late_runs.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_LATE_RUNS_ENABLED`, `PREFECT_API_SERVICES_LATE_RUNS_ENABLED` ### `loop_seconds` The late runs service will look for runs to mark as late this often. Defaults to `5`. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.late_runs.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_LATE_RUNS_LOOP_SECONDS`, `PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS` ### `after_seconds` The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to `15` seconds. **Type**: `string` **Default**: `PT15S` **TOML dotted key path**: `server.services.late_runs.after_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_LATE_RUNS_AFTER_SECONDS`, `PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS` *** ## ServerServicesPauseExpirationsSettings Settings for controlling the pause expiration service ### `enabled` Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.pause_expirations.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_PAUSE_EXPIRATIONS_ENABLED`, `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED` ### `loop_seconds` The pause expiration service will look for runs to mark as failed this often. Defaults to `5`. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.pause_expirations.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS`, `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS` *** ## ServerServicesRepossessorSettings Settings for controlling the repossessor service ### `enabled` Whether or not to start the repossessor service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.repossessor.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_REPOSSESSOR_ENABLED` ### `loop_seconds` The repossessor service will look for expired leases this often. Defaults to `15`. **Type**: `number` **Default**: `15` **TOML dotted key path**: `server.services.repossessor.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_REPOSSESSOR_LOOP_SECONDS` *** ## ServerServicesSchedulerSettings Settings for controlling the scheduler service ### `enabled` Whether or not to start the scheduler service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.scheduler.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_ENABLED`, `PREFECT_API_SERVICES_SCHEDULER_ENABLED`
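The remaining scheduler settings below interact with one another (loop cadence, per-deployment run counts, and how far ahead runs are created). As a rough sketch of tuning them together with environment variables (the values are illustrative, not recommendations):

```bash
# Illustrative only: a less aggressive scheduling loop with a shorter lookahead window.
export PREFECT_SERVER_SERVICES_SCHEDULER_LOOP_SECONDS=120
export PREFECT_SERVER_SERVICES_SCHEDULER_MAX_RUNS=50
export PREFECT_SERVER_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME=P30D   # ISO 8601 duration
```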
### `loop_seconds` The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to `60`. **Type**: `number` **Default**: `60` **TOML dotted key path**: `server.services.scheduler.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_LOOP_SECONDS`, `PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS` ### `deployment_batch_size` The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for `scheduler_loop_seconds` until it has visited every deployment once. Defaults to `100`. **Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.services.scheduler.deployment_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE`, `PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE` ### `max_runs` The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that deployments may have fewer than this many scheduled runs, depending on the value of `scheduler_max_scheduled_time`. Defaults to `100`. **Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.services.scheduler.max_runs` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MAX_RUNS`, `PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS` ### `min_runs` The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that deployments may have more than this many scheduled runs, depending on the value of `scheduler_min_scheduled_time`. Defaults to `3`. **Type**: `integer` **Default**: `3` **TOML dotted key path**: `server.services.scheduler.min_runs` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MIN_RUNS`, `PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS` ### `max_scheduled_time` The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over `scheduler_max_runs`: if a flow runs once a month and `scheduler_max_scheduled_time` is three months, then only three runs will be scheduled. Defaults to 100 days (`8640000` seconds). **Type**: `string` **Default**: `P100D` **TOML dotted key path**: `server.services.scheduler.max_scheduled_time` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME`, `PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME` ### `min_scheduled_time` The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over `scheduler_min_runs`: if a flow runs every hour and `scheduler_min_scheduled_time` is three hours, then three runs will be scheduled even if `scheduler_min_runs` is 1. Defaults to 1 hour (`PT1H`). **Type**: `string` **Default**: `PT1H` **TOML dotted key path**: `server.services.scheduler.min_scheduled_time` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME`, `PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME` ### `insert_batch_size` The number of runs the scheduler will attempt to insert in a single batch. Defaults to `500`.
**Type**: `integer` **Default**: `500` **TOML dotted key path**: `server.services.scheduler.insert_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_INSERT_BATCH_SIZE`, `PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE` ### `recent_deployments_loop_seconds` The number of seconds the recent deployments scheduler will wait between checking for recently updated deployments. Defaults to `5`. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.scheduler.recent_deployments_loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_RECENT_DEPLOYMENTS_LOOP_SECONDS` *** ## ServerServicesSettings Settings for controlling server services ### `cancellation_cleanup` **Type**: [ServerServicesCancellationCleanupSettings](#serverservicescancellationcleanupsettings) **TOML dotted key path**: `server.services.cancellation_cleanup` ### `event_persister` **Type**: [ServerServicesEventPersisterSettings](#serverserviceseventpersistersettings) **TOML dotted key path**: `server.services.event_persister` ### `event_logger` **Type**: [ServerServicesEventLoggerSettings](#serverserviceseventloggersettings) **TOML dotted key path**: `server.services.event_logger` ### `foreman` **Type**: [ServerServicesForemanSettings](#serverservicesforemansettings) **TOML dotted key path**: `server.services.foreman` ### `late_runs` **Type**: [ServerServicesLateRunsSettings](#serverserviceslaterunssettings) **TOML dotted key path**: `server.services.late_runs` ### `scheduler` **Type**: [ServerServicesSchedulerSettings](#serverservicesschedulersettings) **TOML dotted key path**: `server.services.scheduler` ### `pause_expirations` **Type**: [ServerServicesPauseExpirationsSettings](#serverservicespauseexpirationssettings) **TOML dotted key path**: `server.services.pause_expirations` ### `repossessor` **Type**: [ServerServicesRepossessorSettings](#serverservicesrepossessorsettings) **TOML dotted key path**: `server.services.repossessor` ### `task_run_recorder` **Type**: [ServerServicesTaskRunRecorderSettings](#serverservicestaskrunrecordersettings) **TOML dotted key path**: `server.services.task_run_recorder` ### `triggers` **Type**: [ServerServicesTriggersSettings](#serverservicestriggerssettings) **TOML dotted key path**: `server.services.triggers` *** ## ServerServicesTaskRunRecorderSettings Settings for controlling the task run recorder service ### `enabled` Whether or not to start the task run recorder service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.task_run_recorder.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TASK_RUN_RECORDER_ENABLED`, `PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED` *** ## ServerServicesTriggersSettings Settings for controlling the triggers service ### `enabled` Whether or not to start the triggers service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.triggers.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_ENABLED`, `PREFECT_API_SERVICES_TRIGGERS_ENABLED` ### `pg_notify_reconnect_interval_seconds` The number of seconds to wait before reconnecting to the PostgreSQL NOTIFY/LISTEN connection after an error. Only used when using PostgreSQL as the database. Defaults to `10`. 
**Type**: `integer` **Default**: `10` **TOML dotted key path**: `server.services.triggers.pg_notify_reconnect_interval_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_PG_NOTIFY_RECONNECT_INTERVAL_SECONDS` ### `pg_notify_heartbeat_interval_seconds` The number of seconds between heartbeat checks for the PostgreSQL NOTIFY/LISTEN connection to ensure it's still alive. Only used when using PostgreSQL as the database. Defaults to `5`. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.services.triggers.pg_notify_heartbeat_interval_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_PG_NOTIFY_HEARTBEAT_INTERVAL_SECONDS` *** ## ServerSettings Settings for controlling server behavior ### `logging_level` The default logging level for the Prefect API server. **Type**: `string` **Default**: `WARNING` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `server.logging_level` **Supported environment variables**: `PREFECT_SERVER_LOGGING_LEVEL`, `PREFECT_LOGGING_SERVER_LEVEL` ### `analytics_enabled` When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.analytics_enabled` **Supported environment variables**: `PREFECT_SERVER_ANALYTICS_ENABLED` ### `metrics_enabled` Whether or not to enable Prometheus metrics in the API. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.metrics_enabled` **Supported environment variables**: `PREFECT_SERVER_METRICS_ENABLED`, `PREFECT_API_ENABLE_METRICS` ### `log_retryable_errors` If `True`, log retryable errors in the API and its services. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.log_retryable_errors` **Supported environment variables**: `PREFECT_SERVER_LOG_RETRYABLE_ERRORS`, `PREFECT_API_LOG_RETRYABLE_ERRORS` ### `register_blocks_on_start` If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.register_blocks_on_start` **Supported environment variables**: `PREFECT_SERVER_REGISTER_BLOCKS_ON_START`, `PREFECT_API_BLOCKS_REGISTER_ON_START` ### `memoize_block_auto_registration` Controls whether or not block auto-registration on start is memoized. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.memoize_block_auto_registration` **Supported environment variables**: `PREFECT_SERVER_MEMOIZE_BLOCK_AUTO_REGISTRATION`, `PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION` ### `memo_store_path` Path to the memo store file. Defaults to \$PREFECT\_HOME/memo\_store.toml **Type**: `string` **TOML dotted key path**: `server.memo_store_path` **Supported environment variables**: `PREFECT_SERVER_MEMO_STORE_PATH`, `PREFECT_MEMO_STORE_PATH` ### `deployment_schedule_max_scheduled_runs` The maximum number of scheduled runs to create for a deployment.
**Type**: `integer` **Default**: `50` **TOML dotted key path**: `server.deployment_schedule_max_scheduled_runs` **Supported environment variables**: `PREFECT_SERVER_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS`, `PREFECT_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS` ### `api` **Type**: [ServerAPISettings](#serverapisettings) **TOML dotted key path**: `server.api` ### `concurrency` Settings for controlling server-side concurrency limit handling **Type**: [ServerConcurrencySettings](#serverconcurrencysettings) **TOML dotted key path**: `server.concurrency` ### `database` **Type**: [ServerDatabaseSettings](#serverdatabasesettings) **TOML dotted key path**: `server.database` ### `deployments` Settings for controlling server deployments behavior **Type**: [ServerDeploymentsSettings](#serverdeploymentssettings) **TOML dotted key path**: `server.deployments` ### `ephemeral` **Type**: [ServerEphemeralSettings](#serverephemeralsettings) **TOML dotted key path**: `server.ephemeral` ### `events` Settings for controlling server events behavior **Type**: [ServerEventsSettings](#servereventssettings) **TOML dotted key path**: `server.events` ### `flow_run_graph` Settings for controlling flow run graph behavior **Type**: [ServerFlowRunGraphSettings](#serverflowrungraphsettings) **TOML dotted key path**: `server.flow_run_graph` ### `logs` Settings for controlling server logs behavior **Type**: [ServerLogsSettings](#serverlogssettings) **TOML dotted key path**: `server.logs` ### `services` Settings for controlling server services behavior **Type**: [ServerServicesSettings](#serverservicessettings) **TOML dotted key path**: `server.services` ### `tasks` Settings for controlling server tasks behavior **Type**: [ServerTasksSettings](#servertaskssettings) **TOML dotted key path**: `server.tasks` ### `ui` Settings for controlling server UI behavior **Type**: [ServerUISettings](#serveruisettings) **TOML dotted key path**: `server.ui` *** ## ServerTasksSchedulingSettings Settings for controlling server-side behavior related to task scheduling ### `max_scheduled_queue_size` The maximum number of scheduled tasks to queue for submission. **Type**: `integer` **Default**: `1000` **TOML dotted key path**: `server.tasks.scheduling.max_scheduled_queue_size` **Supported environment variables**: `PREFECT_SERVER_TASKS_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE`, `PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE` ### `max_retry_queue_size` The maximum number of retries to queue for submission. **Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.tasks.scheduling.max_retry_queue_size` **Supported environment variables**: `PREFECT_SERVER_TASKS_SCHEDULING_MAX_RETRY_QUEUE_SIZE`, `PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE` ### `pending_task_timeout` How long before a PENDING task is made available to another task worker. **Type**: `string` **Default**: `PT0S` **TOML dotted key path**: `server.tasks.scheduling.pending_task_timeout` **Supported environment variables**: `PREFECT_SERVER_TASKS_SCHEDULING_PENDING_TASK_TIMEOUT`, `PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT` *** ## ServerTasksSettings Settings for controlling server-side behavior related to tasks ### `tag_concurrency_slot_wait_seconds` The number of seconds to wait before retrying when a task run cannot secure a concurrency slot from the server.
**Type**: `number` **Default**: `30` **Constraints**: * Minimum: 0 **TOML dotted key path**: `server.tasks.tag_concurrency_slot_wait_seconds` **Supported environment variables**: `PREFECT_SERVER_TASKS_TAG_CONCURRENCY_SLOT_WAIT_SECONDS`, `PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS` ### `max_cache_key_length` The maximum number of characters allowed for a task run cache key. **Type**: `integer` **Default**: `2000` **TOML dotted key path**: `server.tasks.max_cache_key_length` **Supported environment variables**: `PREFECT_SERVER_TASKS_MAX_CACHE_KEY_LENGTH`, `PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH` ### `scheduling` **Type**: [ServerTasksSchedulingSettings](#servertasksschedulingsettings) **TOML dotted key path**: `server.tasks.scheduling` *** ## ServerUISettings ### `enabled` Whether or not to serve the Prefect UI. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.ui.enabled` **Supported environment variables**: `PREFECT_SERVER_UI_ENABLED`, `PREFECT_UI_ENABLED` ### `api_url` The connection url for communication from the UI to the API. Defaults to `PREFECT_API_URL` if set. Otherwise, the default URL is generated from `PREFECT_SERVER_API_HOST` and `PREFECT_SERVER_API_PORT`. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.ui.api_url` **Supported environment variables**: `PREFECT_SERVER_UI_API_URL`, `PREFECT_UI_API_URL` ### `serve_base` The base URL path to serve the Prefect UI from. **Type**: `string` **Default**: `/` **TOML dotted key path**: `server.ui.serve_base` **Supported environment variables**: `PREFECT_SERVER_UI_SERVE_BASE`, `PREFECT_UI_SERVE_BASE` ### `static_directory` The directory to serve static files from. This should be used when running into permissions issues when attempting to serve the UI from the default directory (for example when running in a Docker container). **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.ui.static_directory` **Supported environment variables**: `PREFECT_SERVER_UI_STATIC_DIRECTORY`, `PREFECT_UI_STATIC_DIRECTORY` *** ## TasksRunnerSettings ### `thread_pool_max_workers` The maximum number of workers for ThreadPoolTaskRunner. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `tasks.runner.thread_pool_max_workers` **Supported environment variables**: `PREFECT_TASKS_RUNNER_THREAD_POOL_MAX_WORKERS`, `PREFECT_TASK_RUNNER_THREAD_POOL_MAX_WORKERS` ### `process_pool_max_workers` The maximum number of workers for ProcessPoolTaskRunner. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `tasks.runner.process_pool_max_workers` **Supported environment variables**: `PREFECT_TASKS_RUNNER_PROCESS_POOL_MAX_WORKERS` *** ## TasksSchedulingSettings ### `default_storage_block` The `block-type/block-document` slug of a block to use as the default storage for autonomous tasks. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `tasks.scheduling.default_storage_block` **Supported environment variables**: `PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK`, `PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK` ### `delete_failed_submissions` Whether or not to delete failed task submissions from the database. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `tasks.scheduling.delete_failed_submissions` **Supported environment variables**: `PREFECT_TASKS_SCHEDULING_DELETE_FAILED_SUBMISSIONS`, `PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS` *** ## TasksSettings ### `refresh_cache` If `True`, enables a refresh of cached results: re-executing the task will refresh the cached results. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `tasks.refresh_cache` **Supported environment variables**: `PREFECT_TASKS_REFRESH_CACHE` ### `default_no_cache` If `True`, sets the default cache policy on all tasks to `NO_CACHE`. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `tasks.default_no_cache` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_NO_CACHE` ### `disable_caching` If `True`, disables caching on all tasks regardless of cache policy. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `tasks.disable_caching` **Supported environment variables**: `PREFECT_TASKS_DISABLE_CACHING` ### `default_retries` This value sets the default number of retries for all tasks. **Type**: `integer` **Default**: `0` **Constraints**: * Minimum: 0 **TOML dotted key path**: `tasks.default_retries` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_RETRIES`, `PREFECT_TASK_DEFAULT_RETRIES` ### `default_retry_delay_seconds` This value sets the default retry delay seconds for all tasks. **Type**: `string | integer | number | array | None` **Default**: `0` **TOML dotted key path**: `tasks.default_retry_delay_seconds` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_RETRY_DELAY_SECONDS`, `PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS` ### `default_persist_result` If `True`, results will be persisted by default for all tasks. Set to `False` to disable persistence by default. Note that setting to `False` will override the behavior set by a parent flow or task. **Type**: `boolean | None` **Default**: `None` **TOML dotted key path**: `tasks.default_persist_result` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` ### `runner` Settings for controlling task runner behavior **Type**: [TasksRunnerSettings](#tasksrunnersettings) **TOML dotted key path**: `tasks.runner` ### `scheduling` Settings for controlling client-side task scheduling behavior **Type**: [TasksSchedulingSettings](#tasksschedulingsettings) **TOML dotted key path**: `tasks.scheduling` *** ## TestingSettings ### `test_mode` If `True`, places the API in test mode. This may modify behavior to facilitate testing. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `testing.test_mode` **Supported environment variables**: `PREFECT_TESTING_TEST_MODE`, `PREFECT_TEST_MODE` ### `unit_test_mode` This setting only exists to facilitate unit testing. If `True`, code is executing in a unit test context. Defaults to `False`. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `testing.unit_test_mode` **Supported environment variables**: `PREFECT_TESTING_UNIT_TEST_MODE`, `PREFECT_UNIT_TEST_MODE` ### `unit_test_loop_debug` If `True` turns on debug mode for the unit testing event loop. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `testing.unit_test_loop_debug` **Supported environment variables**: `PREFECT_TESTING_UNIT_TEST_LOOP_DEBUG`, `PREFECT_UNIT_TEST_LOOP_DEBUG` ### `test_setting` This setting only exists to facilitate unit testing. If in test mode, this setting will return its value. Otherwise, it returns `None`. 
**Type**: `None` **Default**: `FOO` **TOML dotted key path**: `testing.test_setting` **Supported environment variables**: `PREFECT_TESTING_TEST_SETTING`, `PREFECT_TEST_SETTING` *** ## WorkerSettings ### `heartbeat_seconds` Number of seconds a worker should wait between sending a heartbeat. **Type**: `number` **Default**: `30` **TOML dotted key path**: `worker.heartbeat_seconds` **Supported environment variables**: `PREFECT_WORKER_HEARTBEAT_SECONDS` ### `query_seconds` Number of seconds a worker should wait between queries for scheduled work. **Type**: `number` **Default**: `10` **TOML dotted key path**: `worker.query_seconds` **Supported environment variables**: `PREFECT_WORKER_QUERY_SECONDS` ### `prefetch_seconds` The number of seconds into the future a worker should query for scheduled work. **Type**: `number` **Default**: `10` **TOML dotted key path**: `worker.prefetch_seconds` **Supported environment variables**: `PREFECT_WORKER_PREFETCH_SECONDS` ### `webserver` Settings for a worker's webserver **Type**: [WorkerWebserverSettings](#workerwebserversettings) **TOML dotted key path**: `worker.webserver` *** ## WorkerWebserverSettings ### `host` The host address the worker's webserver should bind to. **Type**: `string` **Default**: `0.0.0.0` **TOML dotted key path**: `worker.webserver.host` **Supported environment variables**: `PREFECT_WORKER_WEBSERVER_HOST` ### `port` The port the worker's webserver should bind to. **Type**: `integer` **Default**: `8080` **TOML dotted key path**: `worker.webserver.port` **Supported environment variables**: `PREFECT_WORKER_WEBSERVER_PORT` *** # Artifacts Source: https://docs-3.prefect.io/v3/concepts/artifacts Artifacts are persisted outputs designed for human consumption and available in the UI. Prefect artifacts: * are visually rich annotations on flow and task runs * are human-readable visual metadata defined in code * come in standardized formats such as tables, progress indicators, images, Markdown, and links * are stored in Prefect Cloud or Prefect server and rendered in the Prefect UI * make it easy to visualize outputs or side effects that your runs produce, and capture updates over time Markdown artifact sales report screenshot Common use cases for artifacts include: * **Progress** indicators: Publish progress indicators for long-running tasks. This helps monitor the progress of your tasks and flows and ensure they are running as expected. * **Debugging**: Publish data that you care about in the UI to easily see when and where your results were written. If an artifact doesn't look the way you expect, you can find out which flow run last updated it, and you can click through a link in the artifact to a storage location (such as an S3 bucket). * **Data quality checks**: Publish data quality checks from in-progress tasks to ensure that data quality is maintained throughout a pipeline. Artifacts make for great performance graphs. For example, you can visualize a long-running machine learning model training run. You can also track artifact versions, making it easier to identify changes in your data. * **Documentation**: Publish documentation and sample data to help you keep track of your work and share information. For example, add a description to signify why a piece of data is important. ## Artifact types There are five artifact types: * links * Markdown * progress * images * tables Each artifact created within a task is displayed individually in the Prefect UI. 
This means that each call to `create_link_artifact()` or `create_markdown_artifact()` generates a distinct artifact. Unlike the Python `print()` function (where you can concatenate multiple calls to include additional items in a report), these artifact creation functions must be called once for each artifact you want to create. To create artifacts such as reports or summaries using `create_markdown_artifact()`, define your message string and then pass it to `create_markdown_artifact()` to create the artifact. For more information on how to create and use artifacts, see the [how to produce workflow artifacts](/v3/how-to-guides/workflows/artifacts/) guide. # Assets Source: https://docs-3.prefect.io/v3/concepts/assets Assets represent objects your workflows produce. Assets in Prefect represent any outcome or output of your Prefect workflows. They provide an interface to model all forms of data and model lineage, track dependencies between data transformations, and monitor the health of pipelines at the asset level rather than just the compute level. ## Core concepts An asset is fundamentally defined by its **key**, a URI that uniquely identifies an asset, often specifying an external storage system in which that asset lives. Asset keys serve as both identifiers and organizational structures: assets are automatically grouped by their URI scheme (e.g., `s3://`, `postgres://`, `snowflake://`) and can be hierarchically organized based on their path structure. Assets exist in three primary states within Prefect: * **Materialized**: The asset has been created, updated, or overwritten by a Prefect workflow * **Referenced**: The asset is consumed as input by a workflow but not produced by it * **External**: The asset exists outside the Prefect ecosystem but is referenced as a dependency ## Asset lifecycle ### Materializations A **materialization** occurs when a workflow mutates an asset through creation, updating, or overwriting. Materializations are declared using the `@materialize` decorator, which functions as a specialized task decorator that tracks asset creation intent. The materialization process operates on an "intent to materialize" model: when a function decorated with `@materialize` executes, Prefect records the materialization attempt. Success or failure of the materialization is determined by the underlying task's execution state. ```python from prefect.assets import materialize @materialize("s3://data-lake/processed/customer-data.csv") def process_customer_data(): # Asset materialization logic pass ``` ### References A **reference** occurs when an asset appears as an upstream dependency in another asset's materialization. References are automatically inferred from the task execution graph: when the output of one materialization flows as input to another, the dependency relationship is captured. References can also be explicitly declared through the `asset_deps` parameter, which is particularly useful for modeling dependencies on external systems or when the task graph alone doesn't fully capture the data dependencies. ### Metadata Asset definitions include optional metadata about that asset. These asset properties should have one source of truth to avoid conflicts. When you materialize an asset with properties, those properties perform a complete overwrite of all metadata fields for that asset. Updates to asset metadata occur at runtime from any workflow that specifies metadata fields.
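As a rough illustration of how properties can be attached to an asset definition, here is a minimal sketch. It assumes the `Asset` and `AssetProperties` classes are importable from `prefect.assets` (as in recent Prefect 3.x releases); the asset key, names, and URL are placeholders.

```python
from prefect.assets import Asset, AssetProperties, materialize

# Hypothetical asset with properties attached; each materialization of this
# asset overwrites its stored metadata with the values declared here.
customer_data = Asset(
    key="s3://data-lake/processed/customer-data.csv",
    properties=AssetProperties(
        name="Processed customer data",
        description="Cleaned customer records produced by the nightly pipeline.",
        owners=["data-platform-team"],
        url="https://example.com/catalog/customer-data",
    ),
)

@materialize(customer_data)
def process_customer_data():
    # Asset materialization logic
    pass
```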
## Dependency modeling Asset dependencies are determined through two complementary mechanisms: **Task graph inference**: When materialized assets flow through task parameters, Prefect automatically constructs the dependency graph. Each materialization acts as a dependency accumulation point, gathering all upstream assets and serving as the foundation for downstream materializations. **Explicit declaration**: The `asset_deps` parameter allows direct specification of asset dependencies, enabling modeling of relationships that aren't captured in the task execution flow. ```python from prefect.assets import materialize @materialize( "s3://warehouse/enriched-data.csv", asset_deps=["postgres://db/reference-tables", "s3://external/vendor-data.csv"] ) def enrich_data(): # Explicitly depends on external database and vendor data pass ``` The backend will track these dependencies *across workflow boundaries*, exposing a global view of asset dependencies within your workspace. ## Asset metadata and properties Assets support rich metadata through the `AssetProperties` class, which provides organizational context and improves discoverability: * **Name**: Human-readable identifier for the asset * **Description**: Detailed documentation supporting Markdown formatting * **Owners**: Responsible parties, with special UI treatment for Prefect users and teams * **URL**: Web location for accessing or viewing the asset Additionally, assets support dynamic metadata through the `add_asset_metadata()` function, allowing runtime information like row counts, processing times, and data quality metrics to be attached to materialization events. ## Asset health monitoring Currently asset health provides a *visual* indicator of the operational status of data artifacts based on their most recent materialization attempt: * **Green**: Last materialization succeeded * **Red**: Last materialization failed * **Gray**: No materialization recorded, or asset has only been referenced This health model enables data teams to quickly identify problematic data pipelines at the artifact level, complementing traditional task-level monitoring with data-centric observability. Soon these statuses will be backed by a corresponding event. ## Event emission and integration Assets integrate deeply with Prefect's event system, automatically emitting structured events that enable downstream automation and monitoring: ### Event types * **Materialization events**: These events look like `prefect.asset.materialization.{succeeded|failed}` and are emitted when assets are referenced by the `@materialize` decorator, with status determined by the underlying task execution state. * **Reference events**: These events look like `prefect.asset.referenced` and are emitted for all upstream assets when a materialization occurs, independent of success or failure. ### Event emission rules Asset events follow specific emission patterns based on task execution state: * **Completed states**: Emit `prefect.asset.materialization.succeeded` for downstream assets and `prefect.asset.referenced` for upstream assets * **Failed states**: Emit `prefect.asset.materialization.failed` for downstream assets and `prefect.asset.referenced` for upstream assets * **Cached states**: No asset events are emitted, as cached executions don't represent new asset state changes Reference events are always emitted for upstream assets regardless of materialization success, enabling comprehensive dependency tracking even when downstream processes fail. 
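To make the runtime-metadata behavior concrete, here is a small sketch of recording metadata during a materialization. It assumes `add_asset_metadata` accepts an asset key and a dictionary of values; the key and metadata fields shown are illustrative only. If the task completes, these values are carried on the materialization event described next.

```python
from prefect.assets import add_asset_metadata, materialize

ORDERS_ASSET = "s3://data-lake/processed/orders.parquet"  # illustrative key

@materialize(ORDERS_ASSET)
def build_orders_table():
    rows_written = 42_000  # stand-in for the real processing result
    # Attach runtime metadata to this materialization attempt; it is included
    # in the event payload emitted for this asset.
    add_asset_metadata(ORDERS_ASSET, {"row_count": rows_written, "source": "orders-service"})
```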
### Event payloads Materialization events include any metadata added during task execution through `add_asset_metadata()`, while reference events contain basic asset identification information. This enables rich event-driven automation based on both asset state changes and associated metadata. ## Asset organization and discovery Assets are automatically organized in the Prefect UI based on their URI structure: * **Grouping by scheme**: Assets with the same URI scheme (e.g., `s3://`, `postgres://`) are grouped together * **Hierarchical organization**: URI paths create nested organization structures * **Search and filtering**: Asset metadata enables discovery through names, descriptions, and ownership information ## Further Reading * [How to use assets to track workflow outputs](/v3/how-to-guides/workflows/assets) * [How to customize asset metadata](/v3/advanced/assets) # Automations Source: https://docs-3.prefect.io/v3/concepts/automations Learn how to automatically take action in response to events. Automations enable you to configure [actions](#actions) that execute automatically based on [trigger](#triggers) conditions. Potential triggers include the occurrence of events from changes in a flow run's state, or the absence of such events. You can define your own custom trigger to fire based on a custom [event](/v3/concepts/event-triggers/) defined in Python code. With Prefect Cloud you can even create [webhooks](/v3/automate/events/webhook-triggers/) that can receive data for use in actions. Actions you can take upon a trigger include: * creating flow runs from existing deployments * pausing and resuming schedules or work pools * sending custom notifications ### Triggers Triggers specify the conditions under which your action should be performed. The Prefect UI includes templates for many common conditions, such as: * Flow run state change (Flow Run Tags are only evaluated with `OR` criteria) * Work pool status * Work queue status * Deployment status * Metric thresholds, such as average duration, lateness, or completion percentage * Custom event triggers Importantly, you can configure the triggers not only in reaction to events, but also proactively: in the absence of an expected event. Configuring a trigger for an automation in Prefect Cloud. For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get "stuck" in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be taken on the flow itself, such as cancelling or restarting it. Or the action could take the form of a notification for someone to take manual remediation steps. Or you could set both actions to take place when the trigger occurs. ### Actions Actions specify what your automation does when its trigger criteria are met. Current action types include: * Cancel a flow run * Pause or resume a schedule * Run a deployment * Pause or resume a deployment schedule * Pause or resume a work pool * Pause or resume a work queue * Pause or resume an automation * Send a [notification](#sending-notifications-with-automations) * Call a webhook * Suspend a flow run * Change the state of a flow run Configuring an action for an automation in Prefect Cloud.
### Selected and inferred action targets Some actions require you to either select the target of the action, or specify that the target of the action should be inferred. Selected targets are simple and useful when you know exactly what object your action should act on: for example, a cleanup flow you want to run or a specific notification you want to send. Inferred targets are deduced from the trigger itself. For example, if a trigger fires on a flow run that is stuck in a running state, an action that cancels an inferred flow run will cancel the flow run that caused the trigger to fire. Similarly, if a trigger fires on a work queue event and the corresponding action is to pause an inferred work queue, the inferred work queue is the one that emitted the event. Prefect infers the relevant event whenever possible, but sometimes one does not exist. Specify a name and, optionally, a description for the automation. ## Sending notifications with automations Automations support sending notifications through any predefined block that is capable of and configured to send a message, including: * Slack message to a channel * Microsoft Teams message to a channel * Email to an email address Configuring notifications for an automation in Prefect Cloud. ## Templating with Jinja You can access templated variables with automation actions through [Jinja](https://palletsprojects.com/p/jinja/) syntax. Templated variables enable you to dynamically include details from an automation trigger, such as a flow or pool name. Jinja templated variable syntax wraps the variable name in double curly brackets, like this: `{{ variable }}`. You can access properties of the underlying flow run objects including: * [flow\_run](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.FlowRun) * [flow](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.Flow) * [deployment](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.Deployment) * [work\_queue](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.WorkQueue) * [work\_pool](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.WorkPool) In addition to its native properties, each object includes an `id` along with `created` and `updated` timestamps. The `flow_run|ui_url` token returns the URL to view the flow run in the UI. Here's an example relevant to a flow run state-based notification: ``` Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}. Timestamp: {{ flow_run.state.timestamp }} Flow ID: {{ flow_run.flow_id }} Flow Run ID: {{ flow_run.id }} State message: {{ flow_run.state.message }} ``` The resulting Slack webhook notification looks something like this: Configuring notifications for an automation in Prefect Cloud. You could include `flow` and `deployment` properties: ``` Flow run {{ flow_run.name }} for flow {{ flow.name }} entered state {{ flow_run.state.name }} with message {{ flow_run.state.message }} Flow tags: {{ flow_run.tags }} Deployment name: {{ deployment.name }} Deployment version: {{ deployment.version }} Deployment parameters: {{ deployment.parameters }} ``` An automation that reports on work pool status might include notifications using `work_pool` properties: ``` Work pool status alert!
Name: {{ work_pool.name }} Last polled: {{ work_pool.last_polled }} ``` In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. See the [Automations API](https://app.prefect.cloud/api/docs#tag/Automations) for additional details. ``` Automation: {{ automation.name }} Description: {{ automation.description }} Event: {{ event.id }} Resource: {% for label, value in event.resource %} {{ label }}: {{ value }} {% endfor %} Related Resources: {% for related in event.related %} Role: {{ related.role }} {% for label, value in related %} {{ label }}: {{ value }} {% endfor %} {% endfor %} ``` Note that this example also illustrates the ability to use Jinja features such as iterator and for loop [control structures](https://jinja.palletsprojects.com/en/3.1.x/templates/#list-of-control-structures) when templating notifications. For more on the common use case of passing an upstream flow run's parameters to the flow run invoked by the automation, see the [Passing parameters to a flow run](/v3/how-to-guides/automations/access-parameters-in-templates/) guide. ## Further reading * To learn more about Prefect events, which can trigger automations, see the [events docs](/v3/concepts/events/). * See the [webhooks guide](/v3/how-to-guides/cloud/create-a-webhook/) to learn how to create webhooks and receive external events. # Blocks Source: https://docs-3.prefect.io/v3/concepts/blocks Prefect blocks allow you to manage configuration schemas, infrastructure, and secrets for use with deployments or flow scripts.

Prefect blocks store typed configuration that can be used across workflows and deployments. The most common use case for blocks is storing credentials used to access external systems such as AWS or GCP. Prefect supports [a large number of common blocks](#pre-registered-blocks) and provides a Python SDK for creating your own. **Blocks and parameters** Blocks are useful for sharing configuration across flow runs and between flows. For configuration that will change between flow runs, we recommend using [parameters](/v3/develop/write-flows/#parameters). ## How blocks work There are three layers to a block: its *type*, a *document*, and a Python *class*. ### Block type A block *type* is essentially a schema registered with the Prefect API. This schema can be inspected and discovered in the UI on the **Blocks** page. To see block types available for configuration, use `prefect block type ls` from the CLI or navigate to the **Blocks** page in the UI and click **+**. The block catalogue in the UI These types separate blocks from [Prefect variables](/v3/develop/variables/), which are unstructured JSON documents. In addition, block schemas allow for fields of `SecretStr` type which are stored with additional encryption and not displayed by default in the UI. Block types are identified by a *slug* that is not configurable. {/* pmd-metadata: notest */} ```python from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float # register the block type under the slug 'cube' Cube.register_type_and_schema() ``` Users should rarely need to register types in this way - saving a block document will also automatically register its type. ### Block document A block *document* is an instantiation of the schema, or block type. A document contains *specific* values for each field defined in the schema. All block types allow for the creation of as many documents as you wish. Building on our example above: {/* pmd-metadata: notest */} ```python from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float # instantiate the type with specific values rubiks_cube = Cube(edge_length_inches=2.25) # store those values in a block document # on the server for future use rubiks_cube.save("rubiks-cube") # instantiate and save another block document tiny_cube = Cube(edge_length_inches=0.001) tiny_cube.save("tiny") ``` Block documents can also be created and updated in the UI or API for easy change management. This allows you to work with slowly changing configuration without having to redeploy all workflows that rely on it; for example, you may use this to rotate credentials on a regular basis without touching your deployments. ### Block class A block *class* is the primary user-facing object; it is a Python class whose attributes are loaded from a block document. Most Prefect blocks encapsulate additional functionality built on top of the block document. For example, an `S3Bucket` block contains methods for downloading data from, or uploading data to, an S3 bucket; a `SnowflakeConnector` block contains methods for querying Snowflake databases.
Returning to our `Cube` example from above: {/* pmd-metadata: notest */} ```python from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float def get_volume(self): return self.edge_length_inches ** 3 def get_surface_area(self): return 6 * self.edge_length_inches ** 2 rubiks_cube = Cube.load("rubiks-cube") rubiks_cube.get_volume() # 11.390625 ``` The class itself is *not* stored server-side when registering block types and block documents. For this reason, we highly recommend loading block documents by first importing the block class and then calling its `load` method with the relevant document name. ## Pre-registered blocks ### Built-in blocks Commonly used block types come built-in with Prefect. You can create and use these block types through the UI without installing any additional packages. | Block | Slug | Description | | ----------------------- | -------------------- | ------------------------------------------------------------------------------------------------ | | Custom Webhook | `custom-webhook` | Call custom webhooks. | | Discord Webhook | `discord-webhook` | Call Discord webhooks. | | Local File System | `local-file-system` | Store data as a file on a local file system. | | Mattermost Webhook | `mattermost-webhook` | Send notifications through a provided Mattermost webhook. | | Microsoft Teams Webhook | `ms-teams-webhook` | Send notifications through a provided Microsoft Teams webhook. | | Opsgenie Webhook | `opsgenie-webhook` | Send notifications through a provided Opsgenie webhook. | | Pager Duty Webhook | `pager-duty-webhook` | Send notifications through a provided PagerDuty webhook. | | Remote File System | `remote-file-system` | Access files on a remote file system. | | Secret | `secret` | Store a secret value. The value will be obfuscated when this block is logged or shown in the UI. | | Sendgrid Email | `sendgrid-email` | Send notifications through Sendgrid email. | | Slack Webhook | `slack-webhook` | Send notifications through a provided Slack webhook. | | SMB | `smb` | Store data as a file on a SMB share. | | Twilio SMS | `twilio-sms` | Send notifications through Twilio SMS. | Built-in blocks should be registered the first time you start a Prefect server. If the auto-registration fails, you can manually register the blocks using `prefect block register`. For example, to register all built-in notification blocks, run `prefect block register -m prefect.blocks.notifications`. ### Blocks in Prefect integration libraries Some block types that appear in the UI can be created immediately, but the corresponding integration library must be installed before they can be used. For example, an AWS Secret block can be created, but not used until the [`prefect-aws` library](/integrations/prefect-aws/) is installed. Find available block types in many of the published [Prefect integrations libraries](/integrations/). If a block type is not available in the UI, you can [register it](#register-blocks) through the CLI.
| Block | Slug | Integration | | ------------------------------------ | -------------------------------------- | ------------------------------------------------------- | | ECS Task | `ecs-task` | [prefect-aws](/integrations/prefect-aws/) | | MinIO Credentials | `minio-credentials` | [prefect-aws](/integrations/prefect-aws/) | | S3 Bucket | `s3-bucket` | [prefect-aws](/integrations/prefect-aws/) | | Azure Blob Storage Credentials | `azure-blob-storage-credentials` | [prefect-azure](/integrations/prefect-azure/) | | Azure Container Instance Credentials | `azure-container-instance-credentials` | [prefect-azure](/integrations/prefect-azure/) | | Azure Container Instance Job | `azure-container-instance-job` | [prefect-azure](/integrations/prefect-azure/) | | Azure Cosmos DB Credentials | `azure-cosmos-db-credentials` | [prefect-azure](/integrations/prefect-azure/) | | AzureML Credentials | `azureml-credentials` | [prefect-azure](/integrations/prefect-azure/) | | BitBucket Credentials | `bitbucket-credentials` | [prefect-bitbucket](/integrations/prefect-bitbucket/) | | BitBucket Repository | `bitbucket-repository` | [prefect-bitbucket](/integrations/prefect-bitbucket/) | | Databricks Credentials | `databricks-credentials` | [prefect-databricks](/integrations/prefect-databricks/) | | dbt CLI BigQuery Target Configs | `dbt-cli-bigquery-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Profile | `dbt-cli-profile` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt Cloud Credentials | `dbt-cloud-credentials` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Global Configs | `dbt-cli-global-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Postgres Target Configs | `dbt-cli-postgres-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Snowflake Target Configs | `dbt-cli-snowflake-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Target Configs | `dbt-cli-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | Docker Container | `docker-container` | [prefect-docker](/integrations/prefect-docker/) | | Docker Host | `docker-host` | [prefect-docker](/integrations/prefect-docker/) | | Docker Registry Credentials | `docker-registry-credentials` | [prefect-docker](/integrations/prefect-docker/) | | Email Server Credentials | `email-server-credentials` | [prefect-email](/integrations/prefect-email/) | | BigQuery Warehouse | `bigquery-warehouse` | [prefect-gcp](/integrations/prefect-gcp/) | | GCP Cloud Run Job | `cloud-run-job` | [prefect-gcp](/integrations/prefect-gcp/) | | GCP Credentials | `gcp-credentials` | [prefect-gcp](/integrations/prefect-gcp/) | | GcpSecret | `gcpsecret` | [prefect-gcp](/integrations/prefect-gcp/) | | GCS Bucket | `gcs-bucket` | [prefect-gcp](/integrations/prefect-gcp/) | | Vertex AI Custom Training Job | `vertex-ai-custom-training-job` | [prefect-gcp](/integrations/prefect-gcp/) | | GitHub Credentials | `github-credentials` | [prefect-github](/integrations/prefect-github/) | | GitHub Repository | `github-repository` | [prefect-github](/integrations/prefect-github/) | | GitLab Credentials | `gitlab-credentials` | [prefect-gitlab](/integrations/prefect-gitlab/) | | GitLab Repository | `gitlab-repository` | [prefect-gitlab](/integrations/prefect-gitlab/) | | Kubernetes Cluster Config | `kubernetes-cluster-config` | [prefect-kubernetes](/integrations/prefect-kubernetes/) | | Kubernetes Credentials | `kubernetes-credentials` | [prefect-kubernetes](/integrations/prefect-kubernetes/) | | 
Kubernetes Job | `kubernetes-job` | [prefect-kubernetes](/integrations/prefect-kubernetes/) | | Shell Operation | `shell-operation` | [prefect-shell](/integrations/prefect-shell/) | | Slack Credentials | `slack-credentials` | [prefect-slack](/integrations/prefect-slack/) | | Slack Incoming Webhook | `slack-incoming-webhook` | [prefect-slack](/integrations/prefect-slack/) | | Snowflake Connector | `snowflake-connector` | [prefect-snowflake](/integrations/prefect-snowflake/) | | Snowflake Credentials | `snowflake-credentials` | [prefect-snowflake](/integrations/prefect-snowflake/) | | Database Credentials | `database-credentials` | [prefect-sqlalchemy](/integrations/prefect-sqlalchemy/) | | SQLAlchemy Connector | `sqlalchemy-connector` | [prefect-sqlalchemy](/integrations/prefect-sqlalchemy/) | Anyone can create a custom block type and, optionally, share it with the community. ## Additional resources # Caching Source: https://docs-3.prefect.io/v3/concepts/caching Caching refers to the ability of a task run to enter a `Completed` state and return a predetermined value without actually running the code that defines the task. Caching allows you to efficiently reuse [results of tasks](/v3/develop/results/) that may be expensive to compute and ensure that your pipelines are idempotent when retrying them due to unexpected failure. By default Prefect's caching logic is based on the following attributes of a task invocation: * the inputs provided to the task * the code definition of the task * the prevailing flow run ID, or if executed autonomously, the prevailing task run ID These values are hashed to compute the task's *cache key*. This implies that, by default, calling the same task with the same inputs more than once within a flow will result in cached behavior for all calls after the first. This behavior can be configured - see [customizing the cache](/v3/develop/write-tasks#customizing-the-cache) below. **Caching requires result persistence** Caching requires result persistence, which is off by default. To turn on result persistence for all of your tasks use the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting: ``` prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true ``` See [managing results](/v3/develop/results/) for more details on managing your result configuration, and [settings](/v3/develop/settings-and-profiles) for more details on managing Prefect settings. ## Cache keys To determine whether a task run should retrieve a cached state, Prefect uses the concept of a "cache key". A cache key is a computed string value that determines where the task's return value will be persisted within its configured result storage. When a task run begins, Prefect first computes its cache key and uses this key to lookup a record in the task's result storage. If an unexpired record is found, this result is returned and the task does not run, but instead, enters a `Cached` state with the corresponding result value. Cache keys can be shared by the same task across different flows, and even among different tasks, so long as they all share a common result storage location. By default Prefect stores results locally in `~/.prefect/storage/`. The filenames in this directory will correspond exactly to computed cache keys from your task runs. **Relationship with result persistence** Task caching and result persistence are intimately related. Because task caching relies on loading a known result, task caching will only work when your task can persist its output to a fixed and known location. 
Therefore any configuration which explicitly avoids result persistence will result in your task never using a cache, for example setting `persist_result=False`. ## Cache policies Cache key computation can be configured through the use of *cache policies*. A cache policy is a recipe for computing cache keys for a given task. Prefect comes prepackaged with a few common cache policies: * `DEFAULT`: this cache policy uses the task's inputs, its code definition, as well as the prevailing flow run ID to compute the task's cache key. * `INPUTS`: this cache policy uses *only* the task's inputs to compute the cache key. * `TASK_SOURCE`: this cache policy only considers raw lines of code in the task (and not the source code of nested tasks) to compute the cache key. * `FLOW_PARAMETERS`: this cache policy uses *only* the parameter values provided to the parent flow run to compute the cache key. * `NO_CACHE`: this cache policy always returns `None` and therefore avoids caching and result persistence altogether. These policies can be set using the `cache_policy` keyword on the [task decorator](https://reference.prefect.io/prefect/tasks/#prefect.tasks.task). ## Customizing the cache Prefect allows you to configure task caching behavior in numerous ways. ### Cache expiration All cache keys can optionally be given an *expiration* through the `cache_expiration` keyword on the [task decorator](https://reference.prefect.io/prefect/tasks/#prefect.tasks.task). This keyword accepts a `datetime.timedelta` specifying a duration for which the cached value should be considered valid. Providing an expiration value results in Prefect persisting an expiration timestamp alongside the result record for the task. This expiration is then applied to *all* other tasks that may share this cache key. ### Cache policies Cache policies can be composed and altered using basic Python syntax to form more complex policies. For example, all task policies except for `NO_CACHE` can be *added* together to form new policies that combine the individual policies' logic into a larger cache key computation. Combining policies in this way results in caches that are *easier* to invalidate. For example: ```python from prefect import task from prefect.cache_policies import TASK_SOURCE, INPUTS @task(cache_policy=TASK_SOURCE + INPUTS) def my_cached_task(x: int): return x + 42 ``` This task will rerun anytime you provide new values for `x`, *or* anytime you change the underlying code. The `INPUTS` policy is a special policy that allows you to *subtract* string values to ignore certain task inputs: ```python from prefect import task from prefect.cache_policies import INPUTS my_custom_policy = INPUTS - 'debug' @task(cache_policy=my_custom_policy) def my_cached_task(x: int, debug: bool = False): print('running...') return x + 42 my_cached_task(1) my_cached_task(1, debug=True) # still uses the cache ``` ### Cache key functions You can configure custom cache policy logic through the use of cache key functions. A cache key function is a function that accepts two positional arguments: * The first argument corresponds to the `TaskRunContext`, which stores task run metadata. For example, this object has attributes `task_run_id`, `flow_run_id`, and `task`, all of which can be used in your custom logic. * The second argument corresponds to a dictionary of input values to the task. For example, if your task has the signature `fn(x, y, z)` then the dictionary will have keys "x", "y", and "z" with corresponding values that can be used to compute your cache key. 
This function can then be specified using the `cache_key_fn` argument on the [task decorator](https://reference.prefect.io/prefect/tasks/#prefect.tasks.task). For example: ```python from prefect import task def static_cache_key(context, parameters): # return a constant return "static cache key" @task(cache_key_fn=static_cache_key) def my_cached_task(x: int): return x + 1 ``` ### Cache storage By default, cache records are collocated with task results and files containing task results will include metadata used for caching. Configuring a cache policy with a `key_storage` argument allows cache records to be stored separately from task results. When cache key storage is configured, persisted task results will only include the return value of your task and cache records can be deleted or modified without affecting your task results. You can configure where cache records are stored by using the `.configure` method with a `key_storage` argument on a cache policy. The `key_storage` argument accepts either a path to a local directory or a storage block. ### Cache isolation Cache isolation controls how concurrent task runs interact with cache records. Prefect supports two isolation levels: `READ_COMMITTED` and `SERIALIZABLE`. By default, cache records operate with a `READ_COMMITTED` isolation level. This guarantees that reading a cache record will see the latest committed cache value, but allows multiple executions of the same task to occur simultaneously. For stricter isolation, you can use the `SERIALIZABLE` isolation level. This ensures that only one execution of a task occurs at a time for a given cache record via a locking mechanism. To configure the isolation level, use the `.configure` method with an `isolation_level` argument on a cache policy. When using `SERIALIZABLE`, you must also provide a `lock_manager` that implements locking logic for your system. #### Recommended Lock Managers by Execution Context We recommend using a locking implementation that matches how you are running your work concurrently. | Execution Context | Recommended Lock Manager | Notes | | ------------------ | ------------------------ | ------------------------------------------------------------ | | Threads/Coroutines | `MemoryLockManager` | In-memory locking suitable for single-process execution | | Processes | `FileSystemLockManager` | File-based locking for multiple processes on same machine | | Multiple Machines | `RedisLockManager` | Distributed locking via Redis for cross-machine coordination | ## Multi-task caching There are some situations in which multiple tasks need to always run together or not at all. This can be achieved in Prefect by configuring these tasks to always write to their caches within a single [*transaction*](/v3/develop/transactions). # Deployments Source: https://docs-3.prefect.io/v3/concepts/deployments Learn how to use deployments to trigger flow runs remotely. Deployments allow you to run flows on a [schedule](/v3/concepts/schedules) and trigger runs based on [events](/v3/how-to-guides/automations/creating-deployment-triggers/). Deployments are server-side representations of flows. They store the crucial metadata for remote orchestration including when, where, and how a workflow should run.
In addition to manually triggering and managing flow runs, deploying a flow exposes an API and UI that allow you to: * trigger new runs, [cancel active runs](/v3/how-to-guides/workflows/write-and-run#cancel-a-flow-run), pause scheduled runs, [customize parameters](/v3/concepts/flows#specify-flow-parameters), and more * remotely configure [schedules](/v3/concepts/schedules) and [automation rules](/v3/how-to-guides/automations/creating-deployment-triggers) * dynamically provision infrastructure with [work pools](/v3/deploy/infrastructure-concepts/work-pools) - optionally with templated guardrails for other users In Prefect Cloud, deployment configuration is versioned, and a new [deployment version](/v3/how-to-guides/deployments/versioning) is created each time a deployment is updated. ### Work pools [Work pools](/v3/concepts/work-pools) allow you to switch between different types of infrastructure and to create a template for deployments. Data platform teams find work pools especially useful for managing infrastructure configuration across teams of data professionals. Common work pool types include [Docker](/v3/how-to-guides/deployment_infra/docker), [Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes), and serverless options such as [AWS ECS](/integrations/prefect-aws/ecs_guide#ecs-worker-guide), [Azure ACI](/integrations/prefect-azure/aci_worker), [GCP Vertex AI](/integrations/prefect-gcp/index#run-flows-on-google-cloud-run-or-vertex-ai), or [GCP Google Cloud Run](/integrations/prefect-gcp/gcp-worker-guide). ### Work pool-based deployment requirements Deployments created through the Python SDK that use a work pool require a `name`. This value becomes the deployment name. A `work_pool_name` is also required. Your flow code location can be specified in a few ways: 1. Bake it into your Docker image (for work pools that use Docker images). As shown in the example above, Prefect facilitates this as the default method for deployments created with the Python SDK. This method requires that you specify the `image` argument in the `deploy` method. 2. Call `from_source` on a flow and specify one of the following: 1. the git-based cloud provider location (for example, GitHub) 2. the cloud provider storage location (for example, AWS S3) 3. the local path (an option for Process work pools) See the [Retrieve code from storage docs](/v3/how-to-guides/deployments/store-flow-code) for more information about flow code storage. ## Run a deployment You can set a deployment to run manually, on a [schedule](/v3/how-to-guides/deployments/create-schedules), or [in response to an event](/v3/how-to-guides/automations/creating-deployment-triggers). The deployment inherits the infrastructure configuration from the work pool, which can be overridden at deployment creation time or at runtime. ### Work pools that require a worker To run a deployment with a hybrid work pool type, such as Docker or Kubernetes, you must start a [worker](/v3/concepts/workers/). A [Prefect worker](/v3/concepts/workers) is a client-side process that checks for scheduled flow runs in the work pool that it matches. When a scheduled run is found, the worker kicks off a flow run on the specified infrastructure and monitors the flow run until completion.
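To tie these pieces together, here is a hedged sketch of creating work pool-based deployments with the Python SDK, covering both flow-code options described above. The work pool name, image, and repository URL are placeholders; a worker polling that pool (for example, started with `prefect worker start --pool my-docker-pool`) is still required to execute the scheduled runs.

```python
from prefect import flow

@flow(log_prints=True)
def etl():
    print("running ETL")

if __name__ == "__main__":
    # Option 1: bake the flow code into a Docker image built during deploy
    etl.deploy(
        name="etl-docker",
        work_pool_name="my-docker-pool",
        image="my-registry.example.com/etl:latest",
    )

    # Option 2: pull the flow code from a git repository at runtime instead
    flow.from_source(
        source="https://github.com/example-org/example-repo.git",
        entrypoint="flows/etl.py:etl",
    ).deploy(
        name="etl-from-github",
        work_pool_name="my-docker-pool",
    )
```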
### Work pools that don't require a worker Prefect Cloud offers [push work pools](/v3/how-to-guides/deployment_infra/serverless#automatically-create-a-new-push-work-pool-and-provision-infrastructure) that run flows on Cloud provider serverless infrastructure without a worker and that can be set up quickly. Prefect Cloud also provides the option to run work flows on Prefect's infrastructure through a [Prefect Managed work pool](/v3/how-to-guides/deployment_infra/managed). These work pool types do not require a worker to run flows. However, they do require sharing a bit more information with Prefect, which can be a challenge depending upon the security posture of your organization. ## Static vs. dynamic infrastructure You can deploy your flows on long-lived static infrastructure or on dynamic infrastructure that is able to scale horizontally. The best choice depends on your use case. ### Static infrastructure When you have several flows running regularly, [the `serve` method](/v3/how-to-guides/deployment_infra/run-flows-in-local-processes#serve-a-flow) of the `Flow` object or [the `serve` utility](/v3/how-to-guides/deployment_infra/run-flows-in-local-processes#serve-multiple-flows-at-once) is a great option for managing multiple flows simultaneously. Once you have authored your flow and decided on its deployment settings, run this long-running process in a location of your choosing. The process stays in communication with the Prefect API, monitoring for work and submitting each run within an individual subprocess. Because runs are submitted to subprocesses, any external infrastructure configuration must be set up beforehand and kept associated with this process. Benefits to this approach include: * Users are in complete control of their infrastructure, and anywhere the "serve" Python process can run is a suitable deployment environment. * It is simple to reason about. * Creating deployments requires a minimal set of decisions. * Iteration speed is fast. ### Dynamic infrastructure Consider running flows on dynamically provisioned infrastructure with work pools when you have any of the following: * Flows that require expensive infrastructure due to the long-running process. * Flows with heterogeneous infrastructure needs across runs. * Large volumes of deployments. * An internal organizational structure in which deployment authors or runners are not members of the team that manages the infrastructure. [Work pools](/v3/concepts/work-pools/) allow Prefect to exercise greater control of the infrastructure on which flows run. Options for [serverless work pools](/v3/how-to-guides/deployment_infra/serverless/) allow you to scale to zero when workflows aren't running. Prefect even provides you with the ability to [provision cloud infrastructure via a single CLI command](/v3/how-to-guides/deployment_infra/serverless/#automatically-create-a-new-push-work-pool-and-provision-infrastructure), if you use a Prefect Cloud push work pool option. With work pools: * You can configure and monitor infrastructure configuration within the Prefect UI. * Infrastructure is ephemeral and dynamically provisioned. * Prefect is more infrastructure-aware and collects more event data from your infrastructure by default. * Highly decoupled setups are possible. **You don't have to commit to one approach** You can mix and match approaches based on the needs of each flow. You can also change the deployment approach for a particular flow as its needs evolve. 
For example, you might use workers for your expensive machine learning pipelines, but use the serve mechanics for smaller, more frequent file-processing pipelines. ## Deployment schema {/* pmd-metadata: notest */} ```python class Deployment: """ Structure of the schema defining a deployment """ # required defining data name: str flow_id: UUID entrypoint: str path: str | None = None # workflow scheduling and parametrization parameters: dict[str, Any] | None = None parameter_openapi_schema: dict[str, Any] | None = None schedules: list[Schedule] | None = None paused: bool = False trigger: Trigger | None = None # concurrency limiting concurrency_limit: int | None = None concurrency_options: ConcurrencyOptions(collision_strategy=Literal['ENQUEUE', 'CANCEL_NEW']) | None = None # metadata for bookkeeping version: str | None = None version_type: VersionType | None = None description: str | None = None tags: list | None = None # worker-specific fields work_pool_name: str | None = None work_queue_name: str | None = None job_variables: dict[str, Any] | None = None pull_steps: dict[str, Any] | None = None ``` All methods for creating Prefect deployments are interfaces for populating this schema. ### Required defining data Deployments require a `name` and a reference to an underlying `Flow`. The deployment name is not required to be unique across all deployments, but is required to be unique for a given flow ID. This means you will often see references to the deployment's unique identifying name `{FLOW_NAME}/{DEPLOYMENT_NAME}`. You can trigger deployment runs in multiple ways. For a complete guide, see [Run deployments](/v3/how-to-guides/deployments/run-deployments). Quick examples: From the CLI: ```bash prefect deployment run my-first-flow/my-first-deployment ``` From Python: ```python from prefect.deployments import run_deployment run_deployment(name="my-first-flow/my-first-deployment") ``` The other two fields are: * **`path`**: think of the path as the runtime working directory for the flow. For example, if a deployment references a workflow defined within a Docker image, the `path` is the absolute path to the parent directory where that workflow will run anytime the deployment is triggered. This interpretation is more subtle in the case of flows defined in remote filesystems. * **`entrypoint`**: the entrypoint of a deployment is a relative reference to a function decorated as a flow that exists on some filesystem. It is always specified relative to the `path`. Entrypoints use Python's standard path-to-object syntax (for example, `path/to/file.py:function_name` or simply `path:object`). The entrypoint must reference the same flow as the flow ID. Prefect requires that deployments reference flows defined *within Python files*. Flows defined within interactive REPLs or notebooks cannot currently be deployed as such. They are still valid flows that will be monitored by the API and observable in the UI whenever they are run, but Prefect cannot trigger them. **Deployments do not contain code definitions** Deployment metadata references code that exists in potentially diverse locations within your environment. This separation means that your flow code stays within your storage and execution infrastructure. This is key to the Prefect hybrid model: there's a boundary between your proprietary assets, such as your flow code, and the Prefect backend (including [Prefect Cloud](/v3/how-to-guides/cloud/connect-to-cloud)). 
### Workflow scheduling and parametrization

One of the primary motivations for creating deployments of flows is to remotely *schedule* and *trigger* them. Just as you can call flows as functions with different input values, deployments can be triggered or scheduled with different values through parameters.

These are the fields to capture the required metadata for those actions:

* **`schedules`**: a list of [schedule objects](/v3/concepts/schedules). Most of the convenient interfaces for creating deployments allow users to avoid creating this object themselves. For example, when [updating a deployment schedule in the UI](/v3/concepts/schedules) basic information such as a cron string or interval is all that's required.
* **`parameter_openapi_schema`**: an [OpenAPI compatible schema](https://swagger.io/specification/) that defines the types and defaults for the flow's parameters. This is used by the UI and the backend to expose options for creating manual runs as well as type validation.
* **`parameters`**: default values of flow parameters that this deployment will pass on each run. These can be overwritten through a trigger or when manually creating a custom run.
* **`enforce_parameter_schema`**: a boolean flag that determines whether the API should validate the parameters passed to a flow run against the schema defined by `parameter_openapi_schema`.

**Scheduling is asynchronous and decoupled**

Pausing a schedule, updating your deployment, and other actions reset your auto-scheduled runs.

### Concurrency limiting

Prefect supports managing concurrency at the deployment level to enable limiting how many runs of a deployment can be active at once. To enable this behavior, deployments have the following fields:

* **`concurrency_limit`**: an integer that sets the maximum number of concurrent flow runs for the deployment.
* **`collision_strategy`**: configure the behavior for runs once the concurrency limit is reached. Falls back to `ENQUEUE` if unset.
  * `ENQUEUE`: new runs transition to `AwaitingConcurrencySlot` and execute as slots become available.
  * `CANCEL_NEW`: new runs are canceled until a slot becomes available.

```sh prefect deploy
prefect deploy ... --concurrency-limit 3 --collision-strategy ENQUEUE
```

```python flow.deploy()
from prefect.client.schemas.objects import (
    ConcurrencyLimitConfig, ConcurrencyLimitStrategy
)

my_flow.deploy(..., concurrency_limit=3)
my_flow.deploy(
    ...,
    concurrency_limit=ConcurrencyLimitConfig(
        limit=3, collision_strategy=ConcurrencyLimitStrategy.CANCEL_NEW
    ),
)
```

```python flow.serve()
from prefect.client.schemas.objects import (
    ConcurrencyLimitConfig, ConcurrencyLimitStrategy
)

my_flow.serve(..., global_limit=3)
my_flow.serve(
    ...,
    global_limit=ConcurrencyLimitConfig(
        limit=3, collision_strategy=ConcurrencyLimitStrategy.CANCEL_NEW
    ),
)
```
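Below is a minimal sketch that combines the scheduling, parameter, and concurrency fields described in the two subsections above; the flow, work pool name, image, and parameter values are hypothetical placeholders rather than part of this guide:

```python
from prefect import flow


@flow
def daily_report(region: str = "us", limit: int = 100):
    ...


if __name__ == "__main__":
    daily_report.deploy(
        name="daily-report",
        work_pool_name="my-work-pool",       # assumes this work pool exists
        image="my-registry/reports:latest",  # hypothetical image for the pool
        cron="0 9 * * *",                    # populates the deployment's schedules
        parameters={"region": "eu", "limit": 500},  # default parameter values
        concurrency_limit=1,                 # at most one active run at a time
    )
```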
### Metadata for bookkeeping

Important information about the version, description, and tags fields:

* **`version`**: versions are always set by the client and can be any arbitrary string. We recommend tightly coupling this field on your deployments to your software development lifecycle and choosing human-readable version strings. If left unset, the version field will be automatically populated in one of two ways:
  * If deploying from a directory inside a Git repository or from a CI environment on a supported version control provider, `version` will be the first eight characters of your commit hash.
  * In all other circumstances, `version` will be your flow's version, which if not assigned in the flow decorator (`@flow(version="my-version")`) will be a hash of the file the flow is defined in.
* **`version_type`**: When a deployment is created or updated, Prefect will attempt to infer version information from your environment. Providing a `version_type` instructs Prefect to only attempt version information collection from an environment of that type. The following version types are available: `vcs:github`, `vcs:gitlab`, `vcs:bitbucket`, `vcs:azuredevops`, `vcs:git`, or `prefect:simple`. `vcs:git` offers similar versioning detail to officially supported version control platforms, but does not support direct linking to commits from the Prefect Cloud UI. It is meant as a fallback option in case your version control platform is not supported. `prefect:simple` is for any deployment version created where no Git context is available. If left unset, Prefect will automatically select the appropriate `version_type` based on the detected environment.
* **`description`**: provide reference material such as intended use and parameter documentation. Markdown is accepted. The docstring of your flow function is the default value.
* **`tags`**: group related work together across a diverse set of objects. Tags set on a deployment are inherited by that deployment's flow runs. You can filter, customize views, and search by tag.

**Everything has a version**

Deployments have a version attached, and flows and tasks also have versions set through their respective decorators. These versions are sent to the API anytime the flow or task runs, allowing you to audit changes.

### Worker-specific fields

[Work pools](/v3/concepts/work-pools/) and [workers](/v3/concepts/workers/) are an advanced deployment pattern that allow you to dynamically provision infrastructure for each flow run. The work pool job template interface allows users to create and govern opinionated interfaces to their workflow infrastructure.

To do this, a deployment using workers needs the following fields:

* **`work_pool_name`**: the name of the work pool this deployment is associated with. Work pool types mirror infrastructure types, which means this field impacts the options available for the other fields.
* **`work_queue_name`**: if you are using work queues to either manage priority or concurrency, you can associate a deployment with a specific queue within a work pool using this field.
* **`job_variables`**: this field allows deployment authors to customize whatever infrastructure options have been exposed on this work pool. This field is often used for Docker image names, Kubernetes annotations and limits, and environment variables; see the sketch below.
* **`pull_steps`**: a JSON description of steps that retrieves flow code or configuration, and prepares the runtime environment for workflow execution.

Pull steps allow users to highly decouple their workflow architecture. For example, a common use of pull steps is to dynamically pull code from remote filesystems such as GitHub with each run of their deployment.
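As a minimal sketch of supplying `job_variables` when deploying (the work pool name, image, and environment variable below are hypothetical, and the keys that are accepted depend on the work pool's base job template):

```python
from prefect import flow


@flow
def my_flow():
    ...


if __name__ == "__main__":
    my_flow.deploy(
        name="configured-deployment",
        work_pool_name="my-docker-pool",   # assumes a Docker-type work pool exists
        image="my-registry/flows:latest",  # hypothetical image
        job_variables={
            # Keys here override options exposed by the work pool's job template,
            # for example environment variables passed to the flow run container.
            "env": {"EXTRA_PIP_PACKAGES": "pandas"},
        },
    )
```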

# Define event triggers

Source: https://docs-3.prefect.io/v3/concepts/event-triggers

Define a custom trigger to react to many kinds of events and metrics.

When you need a trigger beyond what the templates in the UI trigger builder provide, you can define a custom trigger in JSON. With custom triggers, you have access to the full capabilities of Prefect's automation system, allowing you to react to many kinds of events and metrics in your workspace.

Each automation has a single trigger that, when fired, causes all of its associated actions to run. That single trigger may be a reactive or proactive event trigger, a trigger monitoring the value of a metric, or a composite trigger that combines several underlying triggers.

### Event triggers

Event triggers are the most common type of trigger. They are intended to react to the presence or absence of an event. Event triggers are indicated with `{"type": "event"}`.

*Viewing a custom trigger for automations in the UI*

This is the schema that defines an event trigger:

| Name | Type | Supports trailing wildcards | Description |
| --- | --- | --- | --- |
| **match** | object | ✅ | Labels for resources which this Automation will match. |
| **match\_related** | object OR array of object | ✅ | Labels for related resources which this Automation will match. |
| **posture** | string enum | N/A | The posture of this Automation, either Reactive or Proactive. Reactive automations respond to the presence of the expected events, while Proactive automations respond to the absence of those expected events. |
| **after** | array of strings | ✅ | Event(s), one of which must have first been seen to start this automation. |
| **expect** | array of strings | ✅ | The event(s) this automation expects to see. If empty, this automation will evaluate any matched event. |
| **for\_each** | array of strings | ❌ | Evaluate the Automation separately for each distinct value of these labels on the resource. By default, labels refer to the primary resource of the triggering event. You may also refer to labels from related resources by specifying `related::