# Code of conduct

Source: https://docs.prefect.io/contribute/code-of-conduct

Learn about the standards we hold ourselves and our community to.

## Our pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone. This is regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior. They are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. They may ban—temporarily or permanently—any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope

This Code of Conduct applies within all project spaces. It also applies when an individual represents the project or its community in public spaces. Examples of representing a project or community include using an official project email address, posting through an official social media account, or acting as an appointed representative at an online or offline event. Project maintainers may further clarify what "representation of a project" means.

## Enforcement

Report instances of abusive, harassing, or otherwise unacceptable behavior by contacting Chris White at [chris@prefect.io](mailto:chris@prefect.io). All complaints are reviewed and investigated. Each complaint will receive a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions, as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant, version 1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct.html). See the [Contributor Covenant FAQ](https://www.contributor-covenant.org/faq) for more information.

# Contribute to integrations

Source: https://docs.prefect.io/contribute/contribute-integrations

Prefect welcomes contributions to existing integrations. Thinking about making your own integration? Feel free to [create a new discussion](https://github.com/PrefectHQ/prefect/discussions/new?category=ideas) to flesh out your idea with other contributors.

## Contributing to existing integrations

All integrations are hosted in the [Prefect GitHub repository](https://github.com/PrefectHQ/prefect) under `src/integrations`.
To contribute to an existing integration, please follow these steps:

1. Fork the [Prefect GitHub repository](https://github.com/PrefectHQ/prefect).

2. Clone your fork locally:

   ```bash theme={null}
   git clone https://github.com/your-username/prefect.git
   ```

3. Create a new branch:

   ```bash theme={null}
   git checkout -b my-new-branch
   ```

4. Move to the integration directory and install the dependencies:

   ```bash theme={null}
   cd src/integrations/my-integration
   uv venv --python 3.12
   source .venv/bin/activate
   uv sync
   ```

5. Make the necessary changes to the integration code. If you're adding new functionality, please add tests. You can run the tests with:

   ```bash theme={null}
   pytest tests
   ```

6. Commit and push your changes:

   ```bash theme={null}
   git add .
   git commit -m "My new integration"
   git push origin my-new-branch
   ```

7. Submit your pull request upstream through the GitHub interface.

# Develop on Prefect

Source: https://docs.prefect.io/contribute/dev-contribute

Learn how to set up Prefect for development, experimentation and code contributions.

## Make a code contribution

We welcome all forms of contributions to Prefect, whether it's small typo fixes in [our documentation](/contribute/docs-contribute), bug fixes, or feature enhancements! If this is your first time making an open source contribution, we will be glad to work with you and help you get up to speed.

For small changes such as typo fixes, you can simply open a pull request - we typically review small changes like these within the day. For larger changes, including all bug fixes, we ask that you first open [an issue](https://github.com/PrefectHQ/prefect/issues) or comment on the issue that you are planning to work on.

## Fork the repository

All contributions to Prefect need to start on [a fork of the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo).
Once you have successfully forked [the Prefect repo](https://github.com/PrefectHQ/prefect), clone a local version to your machine:

```bash theme={null}
git clone https://github.com/GITHUB-USERNAME/prefect.git
cd prefect
```

Create a branch with an informative name:

```bash theme={null}
git checkout -b fix-for-issue-NUM
```

After committing your changes to this branch, you can then open a pull request from your fork that we will review with you.

## Install Prefect for development

Once you have cloned your fork of the repo, you can install [an editable version](https://setuptools.pypa.io/en/latest/userguide/development_mode.html) of Prefect for quick iteration. We recommend using `uv` for dependency management when developing. Refer to the [`uv` docs for installation instructions](https://docs.astral.sh/uv/getting-started/installation/).

To set up a virtual environment and install a development version of `prefect`:

```bash uv theme={null}
uv sync
```

```bash pip and venv theme={null}
python -m venv .venv
source .venv/bin/activate
# Installs the package with development dependencies
pip install --group dev -e .
```

To verify `prefect` was installed correctly:

```bash uv theme={null}
uv run prefect --version
```

```bash pip and venv theme={null}
prefect --version
```

To ensure your changes comply with our linting policies, set up `pre-commit` and `pre-push` hooks to run with every commit:

```bash theme={null}
uv run pre-commit install
```

To manually run the `pre-commit` hooks against all files:

```bash theme={null}
uv run pre-commit run --all-files
```

To manually run the pre-push hooks:

```bash theme={null}
uv run pre-commit run --hook-stage pre-push --all-files
```

If you're using `uv`, you can run commands with the project's dependencies by prefixing the command with `uv run`.

## Write tests

Prefect relies on unit testing to ensure proposed changes don't negatively impact any functionality.
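As an illustration of the kind of test we look for, consider a hypothetical regression test for a bug fix (the function, the bug, and the test name below are invented for this sketch; they are not from the Prefect codebase):

```python theme={null}
# Hypothetical example: a caching helper that previously dropped keyword
# arguments when building its cache key, so calls differing only in kwargs
# collided on the same key. The fix includes kwargs in the key.

def cache_key(fn_name: str, *args, **kwargs) -> str:
    """Build a cache key from a function name and its arguments."""
    arg_part = ",".join(repr(a) for a in args)
    kwarg_part = ",".join(f"{k}={v!r}" for k, v in sorted(kwargs.items()))
    return f"{fn_name}({arg_part}|{kwarg_part})"


def test_cache_key_distinguishes_kwargs():
    # Fails on the pre-fix implementation and passes on the fixed one,
    # preventing the bug from silently resurfacing.
    assert cache_key("transform", 1, mode="a") != cache_key("transform", 1, mode="b")
```

A test written this way documents the bug as well as the fix: anyone who reintroduces the old behavior will see exactly this test fail.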
For all code changes, including bug fixes, we ask that you write at least one corresponding test. One rule of thumb - especially for bug fixes - is that you should write a test that fails prior to your changes and passes with your changes. This ensures the test will fail and prevent the bug from resurfacing if other changes are made in the future.

All tests can be found in the `tests/` directory of the repository. You can run the test suite with `pytest`:

```bash theme={null}
# run all tests
pytest tests

# run a specific file
pytest tests/test_flows.py

# run all tests that match a pattern
pytest tests/test_tasks.py -k cache_policy
```

## Working with a development UI

If you plan to use the UI during development, you will need to build a development version of the UI first.

Using the Prefect UI in development requires installation of [npm](https://github.com/npm/cli). We recommend using [nvm](https://github.com/nvm-sh/nvm) to manage Node.js versions. Once installed, run `nvm use` from the root of the Prefect repository to initialize the proper version of `npm` and `node`.

Start a development UI that reloads on code changes:

```bash theme={null}
prefect dev ui
```

This command is most useful if you are working directly on the UI codebase.

Alternatively, you can build a static UI that will be served when running `prefect server start`:

```bash theme={null}
prefect dev build-ui
```

## Working with a development server

The Prefect CLI provides several helpful commands to aid development of server-side changes.

You can start all services with hot-reloading on code changes (note that this requires installation of UI dependencies):

```bash theme={null}
prefect dev start
```

Start a Prefect API that reloads on code changes:

```bash theme={null}
prefect dev api
```

## Add database migrations

If your code changes necessitate modifications to a database table, first update the SQLAlchemy model in `src/prefect/server/database/orm_models.py`.
For example, to add a new column to the `flow_run` table, add a new column to the `FlowRun` model:

```python theme={null}
# src/prefect/server/database/orm_models.py

class FlowRun(Run):
    """SQLAlchemy model of a flow run."""

    ...

    new_column: Mapped[Union[str, None]] = mapped_column(sa.String, nullable=True)  # <-- add this line
```

Next, generate new migration files. Generate a new migration file for each database type. Migrations are generated for whichever database type `PREFECT_API_DATABASE_CONNECTION_URL` is set to. See [how to set the database connection URL](/v3/api-ref/settings-ref#connection-url) for each database type.

To generate a new migration file, run:

```bash theme={null}
prefect server database revision --autogenerate -m "<migration name>"
```

Make the migration name brief but descriptive. For example:

* `add_flow_run_new_column`
* `add_flow_run_new_column_idx`
* `rename_flow_run_old_column_to_new_column`

The `--autogenerate` flag automatically generates a migration file based on the changes to the models.

**Always inspect the output of `--autogenerate`**

`--autogenerate` generates a migration file based on the changes to the models. However, it is not perfect. Check the file to ensure it only includes the desired changes.

The new migration is in the `src/prefect/server/database/migrations/versions/` directory. Each database type has its own subdirectory. For example, the SQLite migrations are stored in `src/prefect/server/database/migrations/versions/sqlite/`.

After inspecting the migration file, apply the migration to the database by running:

```bash theme={null}
prefect server database upgrade -y
```

After successfully creating migrations for all database types, update `MIGRATION-NOTES.md` to document the changes.

# Contribute to documentation

Source: https://docs.prefect.io/contribute/docs-contribute

Learn how to contribute to the Prefect docs.

We use [Mintlify](https://mintlify.com/) to host and build the Prefect documentation.
The main branch of the [prefecthq/prefect](https://github.com/PrefectHQ/prefect) GitHub repository is used to build the Prefect 3.0 docs at [docs.prefect.io](https://docs.prefect.io). The 2.x docs are hosted at [docs-2.prefect.io](https://docs-2.prefect.io) and built from the 2.x branch of the repository.

## Fork the repository

All contributions to Prefect need to start on [a fork of the repository](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo).

Once you have successfully forked [the Prefect repo](https://github.com/PrefectHQ/prefect), clone a local version to your machine:

```bash theme={null}
git clone https://github.com/GITHUB-USERNAME/prefect.git
cd prefect
```

Create a branch with an informative name:

```bash theme={null}
git checkout -b fix-for-issue-NUM
```

After committing your changes to this branch, you can then open a pull request from your fork that we will review with you.

## Set up your local environment

We provide a `justfile` with common commands to simplify development. We recommend using [just](https://just.systems/) to run these commands.

**Installing just**

To install just:

* **macOS**: `brew install just` or `cargo install just`
* **Linux**: `cargo install just` or check your package manager
* **Windows**: `scoop install just` or `cargo install just`

For more installation options, see the [just documentation](https://github.com/casey/just#installation).

### Using just (recommended)

1. Clone this repository.
2. Run `just docs` to start the documentation server.

Your docs should now be available at `http://localhost:3000`.

### Manual setup

If you prefer not to use just, you can set up manually:

1. Clone this repository.
2. Make sure you have a recent version of Node.js installed. We recommend using [nvm](https://github.com/nvm-sh/nvm) to manage Node.js versions.
3. Run `cd docs` to navigate to the docs directory.
4. Run `nvm use node` to use the correct Node.js version.
5. Run `npm i -g mintlify` to install Mintlify.
6. Run `mintlify dev` to start the development server.

Your docs should now be available at `http://localhost:3000`.

See the [Mintlify documentation](https://mintlify.com/docs/development) for more information on how to install Mintlify, build previews, and use Mintlify's features while writing docs.

All documentation is written in `.mdx` files, which are Markdown files that can contain JavaScript and React components.

## Contributing examples

Examples are Python files that demonstrate Prefect concepts and patterns and how they work together with other tools to solve real-world problems. They live in the `examples/` directory and are used to automatically generate documentation pages.

### Example structure

Each example should be a standalone Python file with:

1. **YAML frontmatter** (in Python comments) at the top with metadata:

   ```python theme={null}
   # ---
   # title: Your Example Title
   # description: Brief description of what this example demonstrates
   # icon: play  # Choose from available icons (play, database, globe, etc.)
   # dependencies: ["prefect", "pandas", "requests"]  # Required packages
   # cmd: ["python", "path/to/your_example.py"]  # How to run it
   # keywords: ["getting_started", "etl", "api"]  # Keywords to help with search and filtering
   # draft: false  # Set to true to hide from docs
   # ---
   ```

2. **Explanatory comments** throughout the code which will be used to generate the body of the documentation page.

3. **Runnable code** that works out of the box with the specified dependencies.

See the [hello world example](https://github.com/PrefectHQ/prefect/blob/main/examples/hello_world.py) as a guide.

### Adding an example

To add an example, follow these steps:

1. Create your Python file in the `examples/` directory
2. Follow the structure above with frontmatter and comments
3. Test that your example runs successfully
4. Run `just generate-examples` to update the documentation pages
5. Review the generated documentation to ensure it renders correctly

Once it all looks good, commit your changes and open a pull request.

## Considerations

Keep in mind the following when writing documentation.

### External references

Prefect resources can be managed in several ways, including through the CLI, UI, Terraform, Helm, and API. When documenting a resource, consider including external references that describe how to manage the resource in other ways.

Snippets are available to provide these references in a consistent format. For example, the [Deployment documentation](/v3/deploy) includes a snippet for the Terraform provider:

```javascript theme={null}
import { TF } from "/snippets/resource-management/terraform.mdx"
import { deployments } from "/snippets/resource-management/vars.mdx"
```

For more information on how to use snippets, see the [Mintlify documentation](https://mintlify.com/docs/reusable-snippets).

# Contribute

Source: https://docs.prefect.io/contribute/index

Join the community, improve Prefect, and share knowledge

We welcome all forms of engagement, and love to learn from our users. There are many ways to get involved with the Prefect community:

* Join nearly 30,000 engineers in the [Prefect Slack community](https://prefect.io/slack)
* [Give Prefect a ⭐️ on GitHub](https://github.com/PrefectHQ/prefect)
* Make a contribution to [Prefect's documentation](/contribute/docs-contribute)
* Make a code contribution to [Prefect's open source libraries](/contribute/dev-contribute)
* Support or create a new [Prefect integration](/contribute/contribute-integrations)

## Report an issue

To report a bug, make a feature request, and more, visit our [issues page on GitHub](https://github.com/PrefectHQ/prefect/issues/new/choose).

## Code of conduct

See our [code of conduct](/contribute/code-of-conduct) for becoming a valued contributor.
# Code and development style guide

Source: https://docs.prefect.io/contribute/styles-practices

Generally, we follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html). This document covers Prefect-specific styles and practices.

## Imports

This is a brief collection of rules and guidelines for handling imports in this repository.

### Imports in `__init__` files

Leave `__init__` files empty unless exposing an interface. If you must expose objects to present a simpler API, please follow these rules.

#### Exposing objects from submodules

If importing objects from submodules, the `__init__` file should use a relative import. This is [required for type checkers](https://github.com/microsoft/pyright/blob/main/docs/typed-libraries.md#library-interface) to understand the exposed interface.

```python theme={null}
# Correct
from .flows import flow
```

```python theme={null}
# Wrong
from prefect.flows import flow
```

#### Exposing submodules

Generally, submodules should *not* be imported in the `__init__` file. You should only expose submodules when the module is designed to be imported and used as a namespaced object.

For example, we do this for our schema and model modules. This is because it's important to know if you are working with an API schema or database model—both of which may have similar names.

```python theme={null}
import prefect.server.schemas as schemas

# The full module is accessible now
schemas.core.FlowRun
```

If exposing a submodule, use a relative import, as you would when exposing an object.

```python theme={null}
# Correct
from . import flows
```

```python theme={null}
# Wrong
import prefect.flows
```

#### Importing to run side-effects

Another use case for importing submodules is to perform global side-effects that occur when they are imported. Often, global side-effects on import are a dangerous pattern.
But there are a couple of acceptable use cases for this:

* To register dispatchable types, for example, `prefect.serializers`.
* To extend a CLI app, for example, `prefect.cli`.

### Imports in modules

#### Importing other modules

The `from` syntax is recommended for importing objects from modules. You should not import modules with the `from` syntax.

```python theme={null}
# Correct
import prefect.server.schemas  # use with the full name
import prefect.server.schemas as schemas  # use the shorter name
```

```python theme={null}
# Wrong
from prefect.server import schemas
```

You should not use relative imports unless it's in an `__init__.py` file.

```python theme={null}
# Correct
from prefect.utilities.foo import bar
```

```python theme={null}
# Wrong
from .utilities.foo import bar
```

You should never use imports that are dependent on file location without explicitly indicating that they are relative. This avoids confusion about the source of a module.

```python theme={null}
# Correct
from . import test
```

#### Resolving circular dependencies

Sometimes, you must defer an import and perform it *within* a function to avoid a circular dependency:

```python theme={null}
## This function in `settings.py` requires a method from the global `context` but the context
## uses settings
def from_context():
    from prefect.context import get_profile_context

    ...
```

Avoid circular dependencies. They often reveal entanglement in the design.

Place all deferred imports at the top of the function.

If you are just using the imported object for a type signature, use the `TYPE_CHECKING` flag:

```python theme={null}
# Correct
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from prefect.server.schemas.states import State

def foo(state: "State"):
    pass
```

Usage of the type within the module requires quotes; for example, `"State"`, since it is not available at runtime.

#### Importing optional requirements

We do not have a best practice for this yet.
See the `kubernetes`, `docker`, and `distributed` implementations for now.

#### Delaying expensive imports

Sometimes imports are slow, but it's important to keep the `prefect` module import times fast. In these cases, lazily import the slow module by deferring the import to the relevant function body. For modules consumed by many functions, use the optional requirements pattern instead.

## Command line interface (CLI) output messages

When executing a command that creates an object, the output message should offer:

* A short description of what the command just did.
* A bullet point list, rehashing user inputs, if possible.
* Next steps, like the next command to run, if applicable.
* Other relevant, pre-formatted commands that can be copied and pasted, if applicable.
* A new line before the first line, and after the last line.

Output Example:

```js theme={null}
$ prefect work-queue create testing

Created work queue with properties:
    name - 'abcde'
    uuid - 940f9828-c820-4148-9526-ea8107082bda
    tags - None
    deployment_ids - None

Start an agent to pick up flows from the created work queue:
    prefect agent start -q 'abcde'

Inspect the created work queue:
    prefect work-queue inspect 'abcde'
```

Additionally:

* Wrap generated arguments in apostrophes (') to ensure validity by suffixing format arguments with `!r`.
* Indent example commands, instead of wrapping them in backticks (\`).
* Use placeholders if you cannot completely pre-format the example.
* Capitalize placeholder labels and wrap them in less than (\<) and greater than (>) signs.
* Utilize `textwrap.dedent` to remove extraneous spacing for strings with triple quotes (""").

Placeholder Example:

```
Create a work queue with tags:
    prefect work-queue create '<WORK QUEUE NAME>' -t '<TAG 1>' -t '<TAG 2>'
```

Dedent Example:

```python theme={null}
from textwrap import dedent
...
output_msg = dedent(
    f"""
    Created work queue with properties:
        name - {name!r}
        uuid - {result}
        tags - {tags or None}
        deployment_ids - {deployment_ids or None}

    Start an agent to pick up flows from the created work queue:
        prefect agent start -q {name!r}

    Inspect the created work queue:
        prefect work-queue inspect {name!r}
    """
)
```

## API versioning

### Client and server communication

You can run the Prefect client separately from Prefect server and communicate entirely through an API. The Prefect client includes anything that runs task or flow code (for example, agents and the Python client) or any consumer of Prefect metadata (for example, the Prefect UI and CLI). Prefect server stores this metadata and serves it through the REST API.

### API version header

Sometimes, we have to make breaking changes to the API. To check a Prefect client's compatibility with the API it's making requests to, every API call the client makes includes a three-component `API_VERSION` header with major, minor, and patch versions.

For example, a request with the `X-PREFECT-API-VERSION=3.2.1` header has a major version of `3`, minor version of `2`, and patch version of `1`.

Change this version header by modifying the `API_VERSION` constant in `prefect.server.api.server`.

### Breaking changes to the API

A breaking change means that your code needs to change to use a new version of Prefect. We avoid breaking changes whenever possible.

When making a breaking change to the API, we consider whether the change is *backwards compatible for clients*. This means that the previous version of the client can still make calls against the updated version of the server code. This might happen if the changes are purely additive, such as adding a non-critical API route. In these cases, we aim to bump the patch version.

In almost all other cases, we bump the minor version, which denotes a non-backwards-compatible API change.
We have reserved the major version changes to denote a backwards compatible change that is significant in some way, such as a major release milestone.

### Version composition

Versions are composed of three parts: MAJOR.MINOR.PATCH. For example, the version 2.5.0 has a major version of 2, a minor version of 5, and a patch version of 0.

Occasionally, we add a suffix to the version such as `rc`, `a`, or `b`. These indicate pre-release versions that users can opt into for testing and experimentation prior to a generally available release.

Each release will increase one of the version numbers. If we increase a number other than the patch version, the versions to the right of it reset to zero.

## Prefect's versioning scheme

Prefect increases the major version when significant and widespread changes are made to the core product.

Prefect increases the minor version when:

* Introducing a new concept that changes how to use Prefect
* Changing an existing concept in a way that fundamentally alters its usage
* Removing a deprecated feature

Prefect increases the patch version when:

* Making enhancements to existing features
* Fixing behavior in existing features
* Adding new capabilities to existing concepts
* Updating dependencies

## Deprecation

At times, Prefect will deprecate a feature. A feature is deprecated when it will no longer be maintained. Frequently, a deprecated feature will have a new and improved alternative. Deprecated features will be retained for at least **3** minor version increases or **6 months**, whichever is longer. We may retain deprecated features longer than this time period.

Prefect will sometimes include changes to behavior to fix a bug. These changes are not categorized as breaking changes.

## Client compatibility with Prefect

When running a Prefect server, you are in charge of ensuring the version is compatible with those of the clients that are using the server.
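The recommendation below, that clients be the same version as the server or older, can be checked mechanically. A minimal sketch using only the standard library (hypothetical helper names; this is not Prefect's actual compatibility logic):

```python theme={null}
def parse_version(version: str) -> tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH string into a numerically comparable tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch


def client_is_compatible(client_version: str, server_version: str) -> bool:
    """True when the client is the same version as the server or older."""
    # Tuple comparison is numeric, so "2.9.0" correctly sorts below "2.10.0".
    return parse_version(client_version) <= parse_version(server_version)
```

For example, `client_is_compatible("2.1.0", "2.5.0")` is true, while `client_is_compatible("2.5.0", "2.1.0")` is false.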
Prefect aims to maintain backwards compatibility with old clients for each server release. In contrast, sometimes you cannot use new clients with an old server. The new client may expect the server to support capabilities that it does not yet include. For this reason, we recommend that all clients are the same version as the server or older.

For example, you can use a client on 2.1.0 with a server on 2.5.0. You cannot use a client on 2.5.0 with a server on 2.1.0.

## Client compatibility with Cloud

Prefect Cloud targets compatibility with all versions of Prefect clients. If you encounter a compatibility issue, please [file a bug report](https://github.com/prefectHQ/prefect/issues/new/choose).

# Integrations

Source: https://docs.prefect.io/integrations/integrations

Prefect integrations are PyPI packages you can install to help you integrate your workflows with third parties.

| Integration | Maintained by |
| --- | --- |
| prefect-aws | Prefect |
| prefect-azure | Prefect |
| prefect-bitbucket | Prefect |
| coiled | Coiled |
| prefect-dask | Prefect |
| prefect-databricks | Prefect |
| prefect-dbt | Prefect |
| prefect-docker | Prefect |
| prefect-email | Prefect |
| prefect-fivetran | Fivetran |
| prefect-gcp | Prefect |
| prefect-github | Prefect |
| prefect-gitlab | Prefect |
| prefect-kubernetes | Prefect |
| prefect-ray | Prefect |
| prefect-shell | Prefect |
| prefect-slack | Prefect |
| prefect-slurm | EBI Metagenomics |
| prefect-snowflake | Prefect |
| prefect-sqlalchemy | Prefect |

# assume_role_parameters

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-assume_role_parameters

# `prefect_aws.assume_role_parameters`

Module handling Assume Role parameters

## Classes

### `AssumeRoleParameters`

Model used to manage parameters for the AWS STS assume\_role call.
Refer to the [boto3 STS assume\_role docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts/client/assume_role.html) for more information about the possible assume role configurations.

**Attributes:**

* `RoleSessionName`: An identifier for the assumed role session. This value is used to uniquely identify a session when the same role is assumed by different principals or for different reasons. If not provided, a default will be generated.
* `DurationSeconds`: The duration, in seconds, of the role session. The value can range from 900 seconds (15 minutes) to 43,200 seconds (12 hours).
* `Policy`: An IAM policy in JSON format that you want to use as an inline session policy.
* `PolicyArns`: The ARNs of the IAM managed policies to use as managed session policies. Each item should be a dict with an 'arn' key.
* `Tags`: A list of session tags. Each tag should be a dict with 'Key' and 'Value' keys.
* `TransitiveTagKeys`: A list of keys for session tags that you want to set as transitive. Transitive tags persist during role chaining.
* `ExternalId`: A unique identifier that is used by third parties to assume a role in their customers' accounts.
* `SerialNumber`: The identification number of the MFA device that is associated with the user who is making the AssumeRole call.
* `TokenCode`: The value provided by the MFA device, if MFA authentication is required.
* `SourceIdentity`: The source identity specified by the principal that is calling the AssumeRole operation.
* `ProvidedContexts`: A list of context information. Each context should be a dict with 'ProviderArn' and 'ContextAssertion' keys.

**Methods:**

#### `get_params_override`

```python theme={null}
get_params_override(self) -> Dict[str, Any]
```

Return the dictionary of the parameters to override. The parameters to override are the ones which are not None.
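The `get_params_override` behavior, returning only the fields that were explicitly set, amounts to filtering out `None` values. A standalone sketch of that pattern (the `AssumeRoleParams` class below is a hypothetical stand-in, not the `prefect_aws` implementation):

```python theme={null}
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional


@dataclass
class AssumeRoleParams:
    """Hypothetical stand-in for AssumeRoleParameters with a few fields."""

    RoleSessionName: Optional[str] = None
    DurationSeconds: Optional[int] = None
    ExternalId: Optional[str] = None

    def get_params_override(self) -> Dict[str, Any]:
        # Keep only the parameters that were explicitly set (not None),
        # so unset fields never override boto3's defaults.
        return {key: value for key, value in asdict(self).items() if value is not None}
```

For instance, `AssumeRoleParams(DurationSeconds=900).get_params_override()` returns `{"DurationSeconds": 900}`, a dict ready to merge into an `sts.assume_role(...)` call.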
# batch

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-batch

# `prefect_aws.batch`

Tasks for interacting with AWS Batch

## Functions

### `abatch_submit`

```python theme={null}
abatch_submit(job_name: str, job_queue: str, job_definition: str, aws_credentials: AwsCredentials, **batch_kwargs: Optional[Dict[str, Any]]) -> str
```

Asynchronously submit a job to the AWS Batch job service.

**Args:**

* `job_name`: The AWS batch job name.
* `job_queue`: Name of the AWS batch job queue.
* `job_definition`: The AWS batch job definition.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `**batch_kwargs`: Additional keyword arguments to pass to the boto3 `submit_job` function. See the documentation for [submit\_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/batch.html#Batch.Client.submit_job) for more details.

**Returns:**

* The id corresponding to the job.

### `batch_submit`

```python theme={null}
batch_submit(job_name: str, job_queue: str, job_definition: str, aws_credentials: AwsCredentials, **batch_kwargs: Optional[Dict[str, Any]]) -> str
```

Submit a job to the AWS Batch job service.

**Args:**

* `job_name`: The AWS batch job name.
* `job_queue`: Name of the AWS batch job queue.
* `job_definition`: The AWS batch job definition.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `**batch_kwargs`: Additional keyword arguments to pass to the boto3 `submit_job` function. See the documentation for [submit\_job](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/batch.html#Batch.Client.submit_job) for more details.

**Returns:**

* The id corresponding to the job.
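Both functions pass `**batch_kwargs` straight through to boto3's `submit_job`. The forwarding pattern can be sketched with a stub client (everything below, including `StubBatchClient` and `batch_submit_sketch`, is hypothetical; a real call needs `prefect_aws` and AWS credentials):

```python theme={null}
from typing import Any, Dict


class StubBatchClient:
    """Stands in for boto3's Batch client; records the submitted job."""

    def submit_job(self, **kwargs: Any) -> Dict[str, Any]:
        self.last_call = kwargs
        return {"jobId": "example-job-id"}


def batch_submit_sketch(
    client: StubBatchClient,
    job_name: str,
    job_queue: str,
    job_definition: str,
    **batch_kwargs: Any,
) -> str:
    """Mirror of batch_submit's shape: required args plus pass-through kwargs."""
    response = client.submit_job(
        jobName=job_name,
        jobQueue=job_queue,
        jobDefinition=job_definition,
        **batch_kwargs,  # e.g. containerOverrides, timeout, tags
    )
    return response["jobId"]
```

The point of the pattern is that anything boto3's `submit_job` accepts can be supplied without the wrapper needing to enumerate every parameter.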
# client_parameters

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-client_parameters

# `prefect_aws.client_parameters`

Module handling Client parameters

## Classes

### `AwsClientParameters`

Model used to manage extra parameters that you can pass when you initialize the Client. Refer to the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html) for more information about the possible client configurations.

**Attributes:**

* `api_version`: The API version to use. By default, botocore will use the latest API version when creating a client. You only need to specify this parameter if you want to use a previous API version of the client.
* `use_ssl`: Whether or not to use SSL. By default, SSL is used. Note that not all services support non-SSL connections.
* `verify`: Whether or not to verify SSL certificates. By default, SSL certificates are verified. If False, SSL will still be used (unless use\_ssl is False), but SSL certificates will not be verified. Passing a file path to this is deprecated.
* `verify_cert_path`: A filename of the CA cert bundle to use. You can specify this argument if you want to use a different CA cert bundle than the one used by botocore.
* `endpoint_url`: The complete URL to use for the constructed client. Normally, botocore will automatically construct the appropriate URL to use when communicating with a service. You can specify a complete URL (including the "http/https" scheme) to override this behavior. If this value is provided, then `use_ssl` is ignored.
* `config`: Advanced configuration for Botocore clients. See [botocore docs](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html) for more details.

**Methods:**

#### `deprecated_verify_cert_path`

```python theme={null}
deprecated_verify_cert_path(cls, values: Dict[str, Any]) -> Dict[str, Any]
```

If verify is not a bool, raise a warning.
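The `AwsClientParameters` attributes above translate more or less directly into boto3 client keyword arguments. As an illustrative sketch, the override dict for an S3-compatible service such as MinIO or LocalStack might look like this (the endpoint URL is a placeholder):

```python theme={null}
# Illustrative only: the kind of keyword arguments these attributes
# become when the boto3 client is created.
client_kwargs = {
    "endpoint_url": "http://localhost:9000",  # placeholder local endpoint
    "use_ssl": False,
    "verify": False,
}

# Sketch of the resulting client construction (requires boto3 and a live endpoint):
# boto3.client("s3", **client_kwargs)
```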
#### `get_params_override`

```python theme={null}
get_params_override(self) -> Dict[str, Any]
```

Return the dictionary of the parameters to override. The parameters to override are the ones which are not None.

#### `instantiate_config`

```python theme={null}
instantiate_config(cls, value: Union[Config, Dict[str, Any]]) -> Dict[str, Any]
```

Casts dicts to `Config` instances.

#### `verify_cert_path_and_verify`

```python theme={null}
verify_cert_path_and_verify(cls, values: Dict[str, Any]) -> Dict[str, Any]
```

If verify\_cert\_path is set but verify is False, raise a warning.

# client_waiter

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-client_waiter

# `prefect_aws.client_waiter`

Task for waiting on a long-running AWS job

## Functions

### `aclient_waiter`

```python theme={null}
aclient_waiter(client: str, waiter_name: str, aws_credentials: AwsCredentials, waiter_definition: Optional[Dict[str, Any]] = None, **waiter_kwargs: Optional[Dict[str, Any]])
```

Asynchronously uses the underlying boto3 waiter functionality.

**Args:**

* `client`: The AWS client on which to wait (e.g., 'client\_wait', 'ec2', etc).
* `waiter_name`: The name of the waiter to instantiate. You may also use a custom waiter name, if you supply an accompanying waiter definition dict.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `waiter_definition`: A valid custom waiter model, as a dict. Note that if you supply a custom definition, it is assumed that the provided 'waiter\_name' is contained within the waiter definition dict.
* `**waiter_kwargs`: Arguments to pass to the `waiter.wait(...)` method. Will depend upon the specific waiter being called.

### `client_waiter`

```python theme={null}
client_waiter(client: str, waiter_name: str, aws_credentials: AwsCredentials, waiter_definition: Optional[Dict[str, Any]] = None, **waiter_kwargs: Optional[Dict[str, Any]])
```

Uses the underlying boto3 waiter functionality.
**Args:** * `client`: The AWS client on which to wait (e.g., 'client\_wait', 'ec2', etc). * `waiter_name`: The name of the waiter to instantiate. You may also use a custom waiter name, if you supply an accompanying waiter definition dict. * `aws_credentials`: Credentials to use for authentication with AWS. * `waiter_definition`: A valid custom waiter model, as a dict. Note that if you supply a custom definition, it is assumed that the provided 'waiter\_name' is contained within the waiter definition dict. * `**waiter_kwargs`: Arguments to pass to the `waiter.wait(...)` method. Will depend upon the specific waiter being called. # credentials Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-credentials # `prefect_aws.credentials` Module handling AWS credentials ## Classes ### `ClientType` The supported boto3 clients. ### `AwsCredentials` Block used to manage authentication with AWS. AWS authentication is handled via the `boto3` module. Refer to the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for more info about the possible credential configurations. **Methods:** #### `get_boto3_session` ```python theme={null} get_boto3_session(self) -> boto3.Session ``` Returns an authenticated boto3 session that can be used to create clients for AWS services. If `assume_role_arn` is provided, this method will assume the specified IAM role and return a session with the temporary credentials from the assumed role. #### `get_client` ```python theme={null} get_client(self, client_type: Union[str, ClientType]) ``` Helper method to dynamically get a client type. **Args:** * `client_type`: The client's service name. **Returns:** * An authenticated client. **Raises:** * `ValueError`: if the client is not supported. #### `get_s3_client` ```python theme={null} get_s3_client(self) -> 'S3Client' ``` Gets an authenticated S3 client. **Returns:** * An authenticated S3 client. 
#### `get_secrets_manager_client` ```python theme={null} get_secrets_manager_client(self) -> 'SecretsManagerClient' ``` Gets an authenticated Secrets Manager client. **Returns:** * An authenticated Secrets Manager client. ### `MinIOCredentials` Block used to manage authentication with MinIO. Refer to the [MinIO docs](https://docs.min.io/docs/minio-server-configuration-guide.html) for more info about the possible credential configurations. **Attributes:** * `minio_root_user`: Admin or root user. * `minio_root_password`: Admin or root password. * `region_name`: Location of server, e.g. "us-east-1". **Methods:** #### `get_boto3_session` ```python theme={null} get_boto3_session(self) -> boto3.Session ``` Returns an authenticated boto3 session that can be used to create clients and perform object operations on MinIO server. #### `get_client` ```python theme={null} get_client(self, client_type: Union[str, ClientType]) ``` Helper method to dynamically get a client type. **Args:** * `client_type`: The client's service name. **Returns:** * An authenticated client. **Raises:** * `ValueError`: if the client is not supported. #### `get_s3_client` ```python theme={null} get_s3_client(self) -> 'S3Client' ``` Gets an authenticated S3 client. **Returns:** * An authenticated S3 client. # __init__ Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-deployments-__init__ # `prefect_aws.deployments` *This module is empty or contains only private/internal implementations.* # steps Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-deployments-steps # `prefect_aws.deployments.steps` Prefect deployment steps for code storage and retrieval in S3 and S3 compatible services. 
## Functions ### `push_to_s3` ```python theme={null} push_to_s3(bucket: str, folder: str, credentials: Optional[dict[str, Any]] = None, client_parameters: Optional[dict[str, Any]] = None, ignore_file: Optional[str] = '.prefectignore') -> PushToS3Output ``` Pushes the contents of the current working directory to an S3 bucket, excluding files and folders specified in the ignore\_file. **Args:** * `bucket`: The name of the S3 bucket where files will be uploaded. * `folder`: The folder in the S3 bucket where files will be uploaded. * `credentials`: A dictionary of AWS credentials (aws\_access\_key\_id, aws\_secret\_access\_key, aws\_session\_token) or MinIO credentials (minio\_root\_user, minio\_root\_password). * `client_parameters`: A dictionary of additional parameters to pass to the boto3 client. * `ignore_file`: The name of the file containing ignore patterns. **Returns:** * A dictionary containing the bucket and folder where files were uploaded. **Examples:** Push files to an S3 bucket: ```yaml theme={null} push: - prefect_aws.deployments.steps.push_to_s3: requires: prefect-aws bucket: my-bucket folder: my-project ``` Push files to an S3 bucket using credentials stored in a block: ```yaml theme={null} push: - prefect_aws.deployments.steps.push_to_s3: requires: prefect-aws bucket: my-bucket folder: my-project credentials: "{{ prefect.blocks.aws-credentials.dev-credentials }}" ``` ### `pull_from_s3` ```python theme={null} pull_from_s3(bucket: str, folder: str, credentials: Optional[dict[str, Any]] = None, client_parameters: Optional[dict[str, Any]] = None) -> PullFromS3Output ``` Pulls the contents of an S3 bucket folder to the current working directory. **Args:** * `bucket`: The name of the S3 bucket where files are stored. * `folder`: The folder in the S3 bucket where files are stored. 
* `credentials`: A dictionary of AWS credentials (aws\_access\_key\_id, aws\_secret\_access\_key, aws\_session\_token) or MinIO credentials (minio\_root\_user, minio\_root\_password). * `client_parameters`: A dictionary of additional parameters to pass to the boto3 client. **Returns:** * A dictionary containing the bucket, folder, and local directory where files were downloaded. **Examples:** Pull files from S3 using the default credentials and client parameters: ```yaml theme={null} pull: - prefect_aws.deployments.steps.pull_from_s3: requires: prefect-aws bucket: my-bucket folder: my-project ``` Pull files from S3 using credentials stored in a block: ```yaml theme={null} pull: - prefect_aws.deployments.steps.pull_from_s3: requires: prefect-aws bucket: my-bucket folder: my-project credentials: "{{ prefect.blocks.aws-credentials.dev-credentials }}" ``` ## Classes ### `PushToS3Output` The output of the `push_to_s3` step. ### `PullFromS3Output` The output of the `pull_from_s3` step. # __init__ Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-experimental-__init__ # `prefect_aws.experimental` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-experimental-bundles-__init__ # `prefect_aws.experimental.bundles` *This module is empty or contains only private/internal implementations.* # execute Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-experimental-bundles-execute # `prefect_aws.experimental.bundles.execute` ## Functions ### `download_bundle_from_s3` ```python theme={null} download_bundle_from_s3(bucket: str, key: str, output_dir: str | None = None, aws_credentials_block_name: Optional[str] = None) -> DownloadResult ``` Downloads a bundle from an S3 bucket. 
**Args:** * `bucket`: S3 bucket name * `key`: S3 object key * `output_dir`: Local directory to save the bundle (if None, uses a temp directory) * `aws_credentials_block_name`: Name of the AWS credentials block to use. If None, credentials will be inferred from the environment using boto3's standard credential resolution. **Returns:** * A dictionary containing: * local\_path: Path where the bundle was downloaded ### `execute_bundle_from_s3` ```python theme={null} execute_bundle_from_s3(bucket: str, key: str, aws_credentials_block_name: Optional[str] = None) -> None ``` Downloads a bundle from S3 and executes it. This step: 1. Downloads the bundle from S3 2. Extracts and deserializes the bundle 3. Downloads and extracts included files (if present) 4. Executes the flow in a subprocess **Args:** * `bucket`: S3 bucket name * `key`: S3 object key * `aws_credentials_block_name`: Name of the AWS credentials block to use. If None, credentials will be inferred from the environment using boto3's standard credential resolution. ## Classes ### `DownloadResult` Result of downloading a bundle from S3. # upload Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-experimental-bundles-upload # `prefect_aws.experimental.bundles.upload` S3 bundle steps for Prefect. These steps allow uploading and downloading flow/task bundles to and from S3. ## Functions ### `upload_bundle_to_s3` ```python theme={null} upload_bundle_to_s3(local_filepath: str, bucket: str, key: str, aws_credentials_block_name: Optional[str] = None) -> UploadResult ``` Uploads a bundle file to an S3 bucket. **Args:** * `local_filepath`: Local path to the bundle file * `bucket`: S3 bucket name * `key`: S3 object key (if None, uses the bundle filename) * `aws_credentials_block_name`: Name of the AWS credentials block to use. If None, credentials will be inferred from the environment using boto3's standard credential resolution. 
**Returns:**

* A dictionary containing:
  * bucket: The S3 bucket the bundle was uploaded to
  * key: The S3 key (path) the bundle was uploaded to
  * url: The full S3 URL of the uploaded bundle (s3://bucket/key)

**Raises:**

* `ValueError`: If the local file does not exist
* `RuntimeError`: If the upload fails

## Classes

### `UploadResult`

Result of uploading a bundle to S3.

# decorators

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-experimental-decorators

# `prefect_aws.experimental.decorators`

## Functions

### `ecs`

```python theme={null}
ecs(work_pool: str, include_files: Sequence[str] | None = None, **job_variables: Any) -> Callable[[Flow[P, R]], InfrastructureBoundFlow[P, R]]
```

Decorator that binds execution of a flow to an ECS work pool.

**Args:**

* `work_pool`: The name of the ECS work pool to use
* `include_files`: Optional sequence of file patterns to include in the bundle. Patterns are relative to the flow file location. Supports glob patterns (e.g., "*.yaml", "data/\*\*/*.csv"). Files matching these patterns will be bundled and available in the remote execution environment.
* `**job_variables`: Additional job variables to use for infrastructure configuration

# glue_job

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-glue_job

# `prefect_aws.glue_job`

Integrations with the AWS Glue Job service.

## Classes

### `GlueJobRun`

Execute a Glue job.

**Methods:**

#### `fetch_result`

```python theme={null}
fetch_result(self) -> str
```

Fetch the Glue job state.

#### `wait_for_completion`

```python theme={null}
wait_for_completion(self) -> None
```

Wait for the job run to complete and get the exit code.

### `GlueJobBlock`

Execute a job on the AWS Glue Job service.

**Attributes:**

* `job_name`: The name of the job definition to use.
* `arguments`: The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes. Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager, or another secret management mechanism if you intend to keep them within the Job. [doc](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html)
* `job_watch_poll_interval`: The amount of time to wait between AWS API calls while monitoring the state of a Glue Job. The default is 60 seconds because jobs that use AWS Glue versions 2.0 and later have a 1-minute minimum. [AWS Glue Pricing](https://aws.amazon.com/glue/pricing/?nc1=h_ls)

**Methods:**

#### `trigger`

```python theme={null}
trigger(self) -> GlueJobRun
```

Trigger a Glue job run.

# lambda_function

Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-lambda_function

# `prefect_aws.lambda_function`

Integrations with AWS Lambda.

Examples:

Run a lambda function with a payload

```python theme={null}
LambdaFunction(
    function_name="test-function",
    aws_credentials=aws_credentials,
).invoke(payload={"foo": "bar"})
```

Specify a version of a lambda function

```python theme={null}
LambdaFunction(
    function_name="test-function",
    qualifier="1",
    aws_credentials=aws_credentials,
).invoke()
```

Invoke a lambda function asynchronously

```python theme={null}
LambdaFunction(
    function_name="test-function",
    aws_credentials=aws_credentials,
).invoke(invocation_type="Event")
```

Invoke a lambda function and return the last 4 KB of logs

```python theme={null}
LambdaFunction(
    function_name="test-function",
    aws_credentials=aws_credentials,
).invoke(tail=True)
```

Invoke a lambda function with a client context

```python theme={null}
LambdaFunction(
    function_name="test-function",
    aws_credentials=aws_credentials,
).invoke(client_context={"bar": "foo"})
```

## Classes

### `LambdaFunction`

Invoke a Lambda function.
This block is part of the prefect-aws collection. Install prefect-aws with `pip install prefect-aws` to use this block.

**Attributes:**

* `function_name`: The name, ARN, or partial ARN of the Lambda function to run. This must be the name of a function that is already deployed to AWS Lambda.
* `qualifier`: The version or alias of the Lambda function to use when invoked. If not specified, the latest (unqualified) version of the Lambda function will be used.
* `aws_credentials`: The AWS credentials to use to connect to AWS Lambda. Defaults to a new `AwsCredentials` instance.

**Methods:**

#### `ainvoke`

```python theme={null}
ainvoke(self, payload: Optional[dict] = None, invocation_type: Literal['RequestResponse', 'Event', 'DryRun'] = 'RequestResponse', tail: bool = False, client_context: Optional[dict] = None) -> dict
```

Asynchronously invoke the Lambda function with the given payload.

**Args:**

* `payload`: The payload to send to the Lambda function.
* `invocation_type`: The invocation type of the Lambda function. This can be one of "RequestResponse", "Event", or "DryRun". Uses "RequestResponse" by default.
* `tail`: If True, the response will include the base64-encoded last 4 KB of log data produced by the Lambda function.
* `client_context`: The client context to send to the Lambda function. Limited to 3583 bytes.

**Returns:**

* The response from the Lambda function.
**Examples:** ```python theme={null} from prefect import flow from prefect_aws.lambda_function import LambdaFunction from prefect_aws.credentials import AwsCredentials @flow async def example_flow(): credentials = AwsCredentials() lambda_function = LambdaFunction( function_name="test_lambda_function", aws_credentials=credentials, ) response = await lambda_function.ainvoke( payload={"foo": "bar"}, invocation_type="RequestResponse", ) return response["Payload"].read() ``` #### `invoke` ```python theme={null} invoke(self, payload: Optional[dict] = None, invocation_type: Literal['RequestResponse', 'Event', 'DryRun'] = 'RequestResponse', tail: bool = False, client_context: Optional[dict] = None) -> dict ``` Invoke the Lambda function with the given payload. **Args:** * `payload`: The payload to send to the Lambda function. * `invocation_type`: The invocation type of the Lambda function. This can be one of "RequestResponse", "Event", or "DryRun". Uses "RequestResponse" by default. * `tail`: If True, the response will include the base64-encoded last 4 KB of log data produced by the Lambda function. * `client_context`: The client context to send to the Lambda function. Limited to 3583 bytes. **Returns:** * The response from the Lambda function. 
**Examples:** ```python theme={null} from prefect_aws.lambda_function import LambdaFunction from prefect_aws.credentials import AwsCredentials credentials = AwsCredentials() lambda_function = LambdaFunction( function_name="test_lambda_function", aws_credentials=credentials, ) response = lambda_function.invoke( payload={"foo": "bar"}, invocation_type="RequestResponse", ) response["Payload"].read() ``` # plugins Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-plugins # `prefect_aws.plugins` ## Functions ### `set_database_connection_params` ```python theme={null} set_database_connection_params(connection_url: str, settings: Any) -> Mapping[str, Any] ``` # s3 Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-s3 # `prefect_aws.s3` Tasks for interacting with AWS S3 ## Functions ### `get_s3_client` ```python theme={null} get_s3_client(credentials: Optional[dict[str, Any]] = None, client_parameters: Optional[dict[str, Any]] = None) -> dict[str, Any] ``` Get a boto3 S3 client with the given credentials and client parameters. **Args:** * `credentials`: A dictionary of credentials to use for authentication with AWS. * `client_parameters`: A dictionary of parameters to use for the boto3 client initialization. **Returns:** * A boto3 S3 client. ### `adownload_from_bucket` ```python theme={null} adownload_from_bucket(bucket: str, key: str, aws_credentials: AwsCredentials, aws_client_parameters: AwsClientParameters = AwsClientParameters()) -> bytes ``` Downloads an object with a given key from a given S3 bucket. Added in prefect-aws==0.5.3. **Args:** * `bucket`: Name of bucket to download object from. Required if a default value was not supplied when creating the task. * `key`: Key of object to download. Required if a default value was not supplied when creating the task. * `aws_credentials`: Credentials to use for authentication with AWS. * `aws_client_parameters`: Custom parameter for the boto3 client initialization. 
**Returns:**

* A `bytes` representation of the downloaded object.

### `download_from_bucket`

```python theme={null}
download_from_bucket(bucket: str, key: str, aws_credentials: AwsCredentials, aws_client_parameters: AwsClientParameters = AwsClientParameters()) -> bytes
```

Downloads an object with a given key from a given S3 bucket.

**Args:**

* `bucket`: Name of bucket to download object from. Required if a default value was not supplied when creating the task.
* `key`: Key of object to download. Required if a default value was not supplied when creating the task.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `aws_client_parameters`: Custom parameter for the boto3 client initialization.

**Returns:**

* A `bytes` representation of the downloaded object.

### `aupload_to_bucket`

```python theme={null}
aupload_to_bucket(data: bytes, bucket: str, aws_credentials: AwsCredentials, aws_client_parameters: AwsClientParameters = AwsClientParameters(), key: Optional[str] = None) -> str
```

Asynchronously uploads data to an S3 bucket. Added in prefect-aws==0.5.3.

**Args:**

* `data`: Bytes representation of data to upload to S3.
* `bucket`: Name of bucket to upload data to. Required if a default value was not supplied when creating the task.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `aws_client_parameters`: Custom parameter for the boto3 client initialization.
* `key`: Key of the object to upload. Defaults to a UUID string.

**Returns:**

* The key of the uploaded object.

### `upload_to_bucket`

```python theme={null}
upload_to_bucket(data: bytes, bucket: str, aws_credentials: AwsCredentials, aws_client_parameters: AwsClientParameters = AwsClientParameters(), key: Optional[str] = None) -> str
```

Uploads data to an S3 bucket.

**Args:**

* `data`: Bytes representation of data to upload to S3.
* `bucket`: Name of bucket to upload data to. Required if a default value was not supplied when creating the task.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `aws_client_parameters`: Custom parameter for the boto3 client initialization.
* `key`: Key of the object to upload. Defaults to a UUID string.

**Returns:**

* The key of the uploaded object.

### `acopy_objects`

```python theme={null}
acopy_objects(source_path: str, target_path: str, source_bucket_name: str, aws_credentials: AwsCredentials, target_bucket_name: Optional[str] = None, **copy_kwargs) -> str
```

Asynchronously uses S3's internal [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) to copy objects within or between buckets. To copy objects between buckets, the credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using `S3Bucket.stream_from`. Added in prefect-aws==0.5.3.

**Args:**

* `source_path`: The path to the object to copy. Can be a string or `Path`.
* `target_path`: The path to copy the object to. Can be a string or `Path`.
* `source_bucket_name`: The bucket to copy the object from.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `target_bucket_name`: The bucket to copy the object to. If not provided, defaults to `source_bucket`.
* `**copy_kwargs`: Additional keyword arguments to pass to `S3Client.copy_object`.

**Returns:**

* The path that the object was copied to. Excludes the bucket name.

Examples:

Copy notes.txt from s3://my-bucket/my\_folder/notes.txt to s3://my-bucket/my\_folder/notes\_copy.txt.
```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.s3 import acopy_objects aws_credentials = AwsCredentials.load("my-creds") @flow async def example_copy_flow(): await acopy_objects( source_path="my_folder/notes.txt", target_path="my_folder/notes_copy.txt", source_bucket_name="my-bucket", aws_credentials=aws_credentials, ) await example_copy_flow() ``` Copy notes.txt from s3://my-bucket/my\_folder/notes.txt to s3://other-bucket/notes\_copy.txt. ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.s3 import acopy_objects aws_credentials = AwsCredentials.load("shared-creds") @flow async def example_copy_flow(): await acopy_objects( source_path="my_folder/notes.txt", target_path="notes_copy.txt", source_bucket_name="my-bucket", aws_credentials=aws_credentials, target_bucket_name="other-bucket", ) await example_copy_flow() ``` ### `copy_objects` ```python theme={null} copy_objects(source_path: str, target_path: str, source_bucket_name: str, aws_credentials: AwsCredentials, target_bucket_name: Optional[str] = None, **copy_kwargs) -> str ``` Uses S3's internal [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) to copy objects within or between buckets. To copy objects between buckets, the credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using `S3Bucket.stream_from`. **Args:** * `source_path`: The path to the object to copy. Can be a string or `Path`. * `target_path`: The path to copy the object to. Can be a string or `Path`. * `source_bucket_name`: The bucket to copy the object from. * `aws_credentials`: Credentials to use for authentication with AWS. * `target_bucket_name`: The bucket to copy the object to. If not provided, defaults to `source_bucket`. * `**copy_kwargs`: Additional keyword arguments to pass to `S3Client.copy_object`. 
**Returns:** * The path that the object was copied to. Excludes the bucket name. Examples: Copy notes.txt from s3://my-bucket/my\_folder/notes.txt to s3://my-bucket/my\_folder/notes\_copy.txt. ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.s3 import copy_objects aws_credentials = AwsCredentials.load("my-creds") @flow def example_copy_flow(): copy_objects( source_path="my_folder/notes.txt", target_path="my_folder/notes_copy.txt", source_bucket_name="my-bucket", aws_credentials=aws_credentials, ) example_copy_flow() ``` Copy notes.txt from s3://my-bucket/my\_folder/notes.txt to s3://other-bucket/notes\_copy.txt. ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.s3 import copy_objects aws_credentials = AwsCredentials.load("shared-creds") @flow def example_copy_flow(): copy_objects( source_path="my_folder/notes.txt", target_path="notes_copy.txt", source_bucket_name="my-bucket", aws_credentials=aws_credentials, target_bucket_name="other-bucket", ) example_copy_flow() ``` ### `amove_objects` ```python theme={null} amove_objects(source_path: str, target_path: str, source_bucket_name: str, aws_credentials: AwsCredentials, target_bucket_name: Optional[str] = None) -> str ``` Asynchronously moves an object from one S3 location to another. To move objects between buckets, the credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted. Added in prefect-aws==0.5.3. **Args:** * `source_path`: The path of the object to move * `target_path`: The path to move the object to * `source_bucket_name`: The name of the bucket containing the source object * `aws_credentials`: Credentials to use for authentication with AWS. 
* `target_bucket_name`: The bucket to move the object to. If not provided, defaults to `source_bucket`.

**Returns:**

* The path that the object was moved to. Excludes the bucket name.

### `move_objects`

```python theme={null}
move_objects(source_path: str, target_path: str, source_bucket_name: str, aws_credentials: AwsCredentials, target_bucket_name: Optional[str] = None) -> str
```

Move an object from one S3 location to another. To move objects between buckets, the credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted.

**Args:**

* `source_path`: The path of the object to move
* `target_path`: The path to move the object to
* `source_bucket_name`: The name of the bucket containing the source object
* `aws_credentials`: Credentials to use for authentication with AWS.
* `target_bucket_name`: The bucket to move the object to. If not provided, defaults to `source_bucket`.

**Returns:**

* The path that the object was moved to. Excludes the bucket name.

### `alist_objects`

```python theme={null}
alist_objects(bucket: str, aws_credentials: AwsCredentials, aws_client_parameters: AwsClientParameters = AwsClientParameters(), prefix: str = '', delimiter: str = '', page_size: Optional[int] = None, max_items: Optional[int] = None, jmespath_query: Optional[str] = None) -> List[Dict[str, Any]]
```

Asynchronously lists details of objects in a given S3 bucket. Added in prefect-aws==0.5.3.

**Args:**

* `bucket`: Name of bucket to list items from. Required if a default value was not supplied when creating the task.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `aws_client_parameters`: Custom parameter for the boto3 client initialization.
* `prefix`: Used to filter objects with keys starting with the specified prefix.
* `delimiter`: Character used to group keys of listed objects.
* `page_size`: Number of objects to return in each request to the AWS API.
* `max_items`: Maximum number of objects to be returned by the task.
* `jmespath_query`: Query used to filter objects based on object attributes. Refer to the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath) for more information on how to construct queries.

**Returns:**

* A list of dictionaries containing information about the objects retrieved. Refer to the boto3 docs for an example response.

### `list_objects`

```python theme={null}
list_objects(bucket: str, aws_credentials: AwsCredentials, aws_client_parameters: AwsClientParameters = AwsClientParameters(), prefix: str = '', delimiter: str = '', page_size: Optional[int] = None, max_items: Optional[int] = None, jmespath_query: Optional[str] = None) -> List[Dict[str, Any]]
```

Lists details of objects in a given S3 bucket.

**Args:**

* `bucket`: Name of bucket to list items from. Required if a default value was not supplied when creating the task.
* `aws_credentials`: Credentials to use for authentication with AWS.
* `aws_client_parameters`: Custom parameter for the boto3 client initialization.
* `prefix`: Used to filter objects with keys starting with the specified prefix.
* `delimiter`: Character used to group keys of listed objects.
* `page_size`: Number of objects to return in each request to the AWS API.
* `max_items`: Maximum number of objects to be returned by the task.
* `jmespath_query`: Query used to filter objects based on object attributes. Refer to the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath) for more information on how to construct queries.

**Returns:**

* A list of dictionaries containing information about the objects retrieved.
Refer to the boto3 docs for an example response. ## Classes ### `S3Bucket` Block used to store data using AWS S3 or S3-compatible object storage like MinIO. **Attributes:** * `bucket_name`: Name of your bucket. * `credentials`: A block containing your credentials to AWS or MinIO. * `bucket_folder`: A default path to a folder within the S3 bucket to use for reading and writing objects. **Methods:** #### `adownload_folder_to_path` ```python theme={null} adownload_folder_to_path(self, from_folder: str, to_folder: Optional[Union[str, Path]] = None, **download_kwargs: Dict[str, Any]) -> Path ``` Asynchronously downloads objects *within* a folder (excluding the folder itself) from the S3 bucket to a folder. **Args:** * `from_folder`: The path to the folder to download from. * `to_folder`: The path to download the folder to. * `**download_kwargs`: Additional keyword arguments to pass to `Client.download_file`. **Returns:** * The absolute path that the folder was downloaded to. **Examples:** Download my\_folder to a local folder named my\_folder. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.adownload_folder_to_path("my_folder", "my_folder") ``` #### `adownload_object_to_file_object` ```python theme={null} adownload_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Dict[str, Any]) -> BinaryIO ``` Asynchronously downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter. **Args:** * `from_path`: The path to the object to download from; this gets prefixed with the bucket\_folder. * `to_file_object`: The file-like object to download the object to. * `**download_kwargs`: Additional keyword arguments to pass to `Client.download_fileobj`. **Returns:** * The file-like object that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to a BytesIO object. 
```python theme={null} from io import BytesIO from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with BytesIO() as buf: await s3_bucket.adownload_object_to_file_object("my_folder/notes.txt", buf) ``` Download my\_folder/notes.txt object to a BufferedWriter. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with open("notes.txt", "wb") as f: await s3_bucket.adownload_object_to_file_object("my_folder/notes.txt", f) ``` #### `adownload_object_to_path` ```python theme={null} adownload_object_to_path(self, from_path: str, to_path: Optional[Union[str, Path]], **download_kwargs: Dict[str, Any]) -> Path ``` Asynchronously downloads an object from the S3 bucket to a path. **Args:** * `from_path`: The path to the object to download; this gets prefixed with the bucket\_folder. * `to_path`: The path to download the object to. If not provided, the object's name will be used. * `**download_kwargs`: Additional keyword arguments to pass to `Client.download_file`. **Returns:** * The absolute path that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.adownload_object_to_path("my_folder/notes.txt", "notes.txt") ``` #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Asynchronously copies a folder from the configured S3 bucket to a local directory. Defaults to copying the entire contents of the block's basepath to the current working directory. **Args:** * `from_path`: Path in S3 bucket to download from. Defaults to the block's configured basepath. * `local_path`: Local path to download S3 contents to. Defaults to the current working directory. 
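Several of the download methods above note that `from_path` is prefixed with the block's `bucket_folder`. That joining behavior can be pictured with a small sketch; the helper name below is hypothetical and illustrative only, not the library's actual implementation:

```python
def join_bucket_path(bucket_folder: str, path: str) -> str:
    # Hypothetical helper: prepend the configured bucket_folder to an
    # object path, normalizing stray slashes along the way.
    if not bucket_folder:
        return path.lstrip("/")
    return f"{bucket_folder.strip('/')}/{path.lstrip('/')}"

# A block configured with bucket_folder="my_folder" would resolve
# "notes.txt" to the full object key "my_folder/notes.txt":
print(join_bucket_path("my_folder", "notes.txt"))  # my_folder/notes.txt
```

This is why examples such as `adownload_object_to_path("my_folder/notes.txt", ...)` pass paths relative to the bucket rather than full S3 URIs.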
#### `alist_objects` ```python theme={null} alist_objects(self, folder: str = '', delimiter: str = '', page_size: Optional[int] = None, max_items: Optional[int] = None, jmespath_query: Optional[str] = None) -> List[Dict[str, Any]] ``` Asynchronously lists objects in the S3 bucket. **Args:** * `folder`: Folder to list objects from. * `delimiter`: Character used to group keys of listed objects. * `page_size`: Number of objects to return in each request to the AWS API. * `max_items`: Maximum number of objects to be returned by the task. * `jmespath_query`: Query used to filter objects based on object attributes. Refer to the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath) for more information on how to construct queries. **Returns:** * List of objects and their metadata in the bucket. **Examples:** List objects under the `base_folder`. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.alist_objects("base_folder") ``` #### `amove_object` ```python theme={null} amove_object(self, from_path: Union[str, Path], to_path: Union[str, Path], to_bucket: Optional[Union['S3Bucket', str]] = None) -> str ``` Asynchronously uses S3's internal CopyObject and DeleteObject to move objects within or between buckets. To move objects between buckets, `self`'s credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted. **Args:** * `from_path`: The path of the object to move. * `to_path`: The path to move the object to. * `to_bucket`: The bucket to move to. Defaults to the current bucket. **Returns:** * The path that the object was moved to. Excludes the bucket name. 
**Examples:** Move notes.txt from my\_folder/notes.txt to my\_folder/notes\_copy.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.amove_object("my_folder/notes.txt", "my_folder/notes_copy.txt") ``` Move notes.txt from my\_folder/notes.txt to my\_folder/notes\_copy.txt in another bucket. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.amove_object( "my_folder/notes.txt", "my_folder/notes_copy.txt", to_bucket="other-bucket" ) ``` #### `aput_directory` ```python theme={null} aput_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Asynchronously uploads a directory from a given local path to the configured S3 bucket in a given folder. Defaults to uploading the entire contents of the current working directory to the block's basepath. **Args:** * `local_path`: Path to local directory to upload from. * `to_path`: Path in S3 bucket to upload to. Defaults to block's configured basepath. * `ignore_file`: Path to file containing gitignore style expressions for filepaths to ignore. #### `aread_path` ```python theme={null} aread_path(self, path: str) -> bytes ``` Asynchronously reads the contents of a specified path from the S3 bucket. Provide the entire path to the key in S3. **Args:** * `path`: Entire path to (and including) the key. #### `astream_from` ```python theme={null} astream_from(self, bucket: 'S3Bucket', from_path: str, to_path: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Asynchronously streams an object from another bucket to this bucket. Requires the object to be downloaded and uploaded in chunks. If `self`'s credentials allow for writes to the other bucket, try using `S3Bucket.copy_object`. Added in version 0.5.3. **Args:** * `bucket`: The bucket to stream from. * `from_path`: The path of the object to stream. 
* `to_path`: The path to stream the object to. Defaults to the object's name. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_fileobj`. **Returns:** * The path that the object was uploaded to. **Examples:** Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket your_s3_bucket = S3Bucket.load("your-bucket") my_s3_bucket = S3Bucket.load("my-bucket") await my_s3_bucket.astream_from( your_s3_bucket, "notes.txt", to_path="landed/notes.txt" ) ``` #### `aupload_from_file_object` ```python theme={null} aupload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]) -> str ``` Asynchronously uploads an object to the S3 bucket from a file-like object, which can be a BytesIO object or a BufferedReader. **Args:** * `from_file_object`: The file-like object to upload from. * `to_path`: The path to upload the object to. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_fileobj`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload a BytesIO object to my\_folder/notes.txt. ```python theme={null} from io import BytesIO from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with BytesIO(b"my_notes") as buf: await s3_bucket.aupload_from_file_object(buf, "my_folder/notes.txt") ``` Upload a BufferedReader object to my\_folder/notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with open("notes.txt", "rb") as f: await s3_bucket.aupload_from_file_object( f, "my_folder/notes.txt" ) ``` #### `aupload_from_folder` ```python theme={null} aupload_from_folder(self, from_folder: Union[str, Path], to_folder: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> Union[str, None] ``` Asynchronously uploads files *within* a folder (excluding the folder itself) to the object storage service folder. Added in prefect-aws==0.5.3. 
**Args:** * `from_folder`: The path to the folder to upload from. * `to_folder`: The path to upload the folder to. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_fileobj`. **Returns:** * The path that the folder was uploaded to. **Examples:** Upload contents from my\_folder to new\_folder. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.aupload_from_folder("my_folder", "new_folder") ``` #### `aupload_from_path` ```python theme={null} aupload_from_path(self, from_path: Union[str, Path], to_path: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Asynchronously uploads an object from a path to the S3 bucket. Added in version 0.5.3. **Args:** * `from_path`: The path to the file to upload from. * `to_path`: The path to upload the file to. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_file`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload notes.txt to my\_folder/notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") await s3_bucket.aupload_from_path("notes.txt", "my_folder/notes.txt") ``` #### `awrite_path` ```python theme={null} awrite_path(self, path: str, content: bytes) -> str ``` Asynchronously writes to an S3 bucket. **Args:** * `path`: The key name. Each object in your bucket has a unique key (or key name). * `content`: What you are uploading to S3. 
Example: Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket: ```python theme={null} from prefect_aws import MinIOCredentials from prefect_aws.s3 import S3Bucket minio_creds = MinIOCredentials( minio_root_user="minioadmin", minio_root_password="minioadmin", ) s3_bucket_block = S3Bucket( bucket_name="bucket", minio_credentials=minio_creds, bucket_folder="dogs/smalldogs", endpoint_url="http://localhost:9000", ) data = b"woof!" s3_havanese_path = await s3_bucket_block.awrite_path(path="havanese", content=data) ``` #### `basepath` ```python theme={null} basepath(self) -> str ``` The base path of the S3 bucket. **Returns:** * The base path of the S3 bucket. #### `basepath` ```python theme={null} basepath(self, value: str) -> None ``` #### `copy_object` ```python theme={null} copy_object(self, from_path: Union[str, Path], to_path: Union[str, Path], to_bucket: Optional[Union['S3Bucket', str]] = None, **copy_kwargs) -> str ``` Uses S3's internal [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) to copy objects within or between buckets. To copy objects between buckets, `self`'s credentials must have permission to read the source object and write to the target object. If the credentials do not have those permissions, try using `S3Bucket.stream_from`. **Args:** * `from_path`: The path of the object to copy. * `to_path`: The path to copy the object to. * `to_bucket`: The bucket to copy to. Defaults to the current bucket. * `**copy_kwargs`: Additional keyword arguments to pass to `S3Client.copy_object`. **Returns:** * The path that the object was copied to. Excludes the bucket name. **Examples:** Copy notes.txt from my\_folder/notes.txt to my\_folder/notes\_copy.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.copy_object("my_folder/notes.txt", "my_folder/notes_copy.txt") ``` Copy notes.txt from my\_folder/notes.txt to my\_folder/notes\_copy.txt in another bucket. 
```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.copy_object( "my_folder/notes.txt", "my_folder/notes_copy.txt", to_bucket="other-bucket" ) ``` #### `download_folder_to_path` ```python theme={null} download_folder_to_path(self, from_folder: str, to_folder: Optional[Union[str, Path]] = None, **download_kwargs: Dict[str, Any]) -> Path ``` Downloads objects *within* a folder (excluding the folder itself) from the S3 bucket to a folder. Changed in version 0.6.0. **Args:** * `from_folder`: The path to the folder to download from. * `to_folder`: The path to download the folder to. * `**download_kwargs`: Additional keyword arguments to pass to `Client.download_file`. **Returns:** * The absolute path that the folder was downloaded to. **Examples:** Download my\_folder to a local folder named my\_folder. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.download_folder_to_path("my_folder", "my_folder") ``` #### `download_object_to_file_object` ```python theme={null} download_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Dict[str, Any]) -> BinaryIO ``` Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter. **Args:** * `from_path`: The path to the object to download from; this gets prefixed with the bucket\_folder. * `to_file_object`: The file-like object to download the object to. * `**download_kwargs`: Additional keyword arguments to pass to `Client.download_fileobj`. **Returns:** * The file-like object that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to a BytesIO object. 
```python theme={null} from io import BytesIO from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with BytesIO() as buf: s3_bucket.download_object_to_file_object("my_folder/notes.txt", buf) ``` Download my\_folder/notes.txt object to a BufferedWriter. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with open("notes.txt", "wb") as f: s3_bucket.download_object_to_file_object("my_folder/notes.txt", f) ``` #### `download_object_to_path` ```python theme={null} download_object_to_path(self, from_path: str, to_path: Optional[Union[str, Path]], **download_kwargs: Dict[str, Any]) -> Path ``` Downloads an object from the S3 bucket to a path. **Args:** * `from_path`: The path to the object to download; this gets prefixed with the bucket\_folder. * `to_path`: The path to download the object to. If not provided, the object's name will be used. * `**download_kwargs`: Additional keyword arguments to pass to `Client.download_file`. **Returns:** * The absolute path that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.download_object_to_path("my_folder/notes.txt", "notes.txt") ``` #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Copies a folder from the configured S3 bucket to a local directory. Defaults to copying the entire contents of the block's basepath to the current working directory. **Args:** * `from_path`: Path in S3 bucket to download from. Defaults to the block's configured basepath. * `local_path`: Local path to download S3 contents to. Defaults to the current working directory. 
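The `page_size` and `max_items` parameters accepted by the listing methods in this module control boto3's paginator. Their interaction can be sketched in plain Python; this is a simplified stand-in for illustration, not the actual boto3 pagination machinery:

```python
def collect_pages(keys, page_size, max_items=None):
    # Simplified sketch: gather results page by page, stopping once
    # max_items results have been collected (as boto3's MaxItems does).
    results = []
    for start in range(0, len(keys), page_size):
        results.extend(keys[start:start + page_size])
        if max_items is not None and len(results) >= max_items:
            return results[:max_items]
    return results

keys = [f"my_folder/file_{i}.txt" for i in range(10)]
print(collect_pages(keys, page_size=4, max_items=6))  # first 6 keys
```

In practice, a smaller `page_size` means more (smaller) requests to the AWS API, while `max_items` caps the total number of results regardless of how many pages are fetched.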
#### `list_objects` ```python theme={null} list_objects(self, folder: str = '', delimiter: str = '', page_size: Optional[int] = None, max_items: Optional[int] = None, jmespath_query: Optional[str] = None) -> List[Dict[str, Any]] ``` Lists objects in the S3 bucket. **Args:** * `folder`: Folder to list objects from. * `delimiter`: Character used to group keys of listed objects. * `page_size`: Number of objects to return in each request to the AWS API. * `max_items`: Maximum number of objects to be returned by the task. * `jmespath_query`: Query used to filter objects based on object attributes. Refer to the [boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/paginators.html#filtering-results-with-jmespath) for more information on how to construct queries. **Returns:** * List of objects and their metadata in the bucket. **Examples:** List objects under the `base_folder`. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.list_objects("base_folder") ``` #### `move_object` ```python theme={null} move_object(self, from_path: Union[str, Path], to_path: Union[str, Path], to_bucket: Optional[Union['S3Bucket', str]] = None) -> str ``` Uses S3's internal CopyObject and DeleteObject to move objects within or between buckets. To move objects between buckets, `self`'s credentials must have permission to read and delete the source object and write to the target object. If the credentials do not have those permissions, this method will raise an error. If the credentials have permission to read the source object but not delete it, the object will be copied but not deleted. **Args:** * `from_path`: The path of the object to move. * `to_path`: The path to move the object to. * `to_bucket`: The bucket to move to. Defaults to the current bucket. **Returns:** * The path that the object was moved to. Excludes the bucket name. **Examples:** Move notes.txt from my\_folder/notes.txt to my\_folder/notes\_copy.txt. 
```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.move_object("my_folder/notes.txt", "my_folder/notes_copy.txt") ``` Move notes.txt from my\_folder/notes.txt to my\_folder/notes\_copy.txt in another bucket. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.move_object( "my_folder/notes.txt", "my_folder/notes_copy.txt", to_bucket="other-bucket" ) ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Uploads a directory from a given local path to the configured S3 bucket in a given folder. Defaults to uploading the entire contents of the current working directory to the block's basepath. **Args:** * `local_path`: Path to local directory to upload from. * `to_path`: Path in S3 bucket to upload to. Defaults to block's configured basepath. * `ignore_file`: Path to file containing gitignore style expressions for filepaths to ignore. #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` Read specified path from S3 and return contents. Provide the entire path to the key in S3. **Args:** * `path`: Entire path to (and including) the key. #### `stream_from` ```python theme={null} stream_from(self, bucket: 'S3Bucket', from_path: str, to_path: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Streams an object from another bucket to this bucket. Requires the object to be downloaded and uploaded in chunks. If `self`'s credentials allow for writes to the other bucket, try using `S3Bucket.copy_object`. **Args:** * `bucket`: The bucket to stream from. * `from_path`: The path of the object to stream. * `to_path`: The path to stream the object to. Defaults to the object's name. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_fileobj`. 
**Returns:** * The path that the object was uploaded to. **Examples:** Stream notes.txt from your-bucket/notes.txt to my-bucket/landed/notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket your_s3_bucket = S3Bucket.load("your-bucket") my_s3_bucket = S3Bucket.load("my-bucket") my_s3_bucket.stream_from( your_s3_bucket, "notes.txt", to_path="landed/notes.txt" ) ``` #### `upload_from_file_object` ```python theme={null} upload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads an object to the S3 bucket from a file-like object, which can be a BytesIO object or a BufferedReader. **Args:** * `from_file_object`: The file-like object to upload from. * `to_path`: The path to upload the object to. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_fileobj`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload a BytesIO object to my\_folder/notes.txt. ```python theme={null} from io import BytesIO from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with BytesIO(b"my_notes") as buf: s3_bucket.upload_from_file_object(buf, "my_folder/notes.txt") ``` Upload a BufferedReader object to my\_folder/notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") with open("notes.txt", "rb") as f: s3_bucket.upload_from_file_object( f, "my_folder/notes.txt" ) ``` #### `upload_from_folder` ```python theme={null} upload_from_folder(self, from_folder: Union[str, Path], to_folder: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> Union[str, None] ``` Uploads files *within* a folder (excluding the folder itself) to the object storage service folder. **Args:** * `from_folder`: The path to the folder to upload from. * `to_folder`: The path to upload the folder to. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_fileobj`. 
**Returns:** * The path that the folder was uploaded to. **Examples:** Upload contents from my\_folder to new\_folder. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.upload_from_folder("my_folder", "new_folder") ``` #### `upload_from_path` ```python theme={null} upload_from_path(self, from_path: Union[str, Path], to_path: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads an object from a path to the S3 bucket. **Args:** * `from_path`: The path to the file to upload from. * `to_path`: The path to upload the file to. * `**upload_kwargs`: Additional keyword arguments to pass to `Client.upload_file`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload notes.txt to my\_folder/notes.txt. ```python theme={null} from prefect_aws.s3 import S3Bucket s3_bucket = S3Bucket.load("my-bucket") s3_bucket.upload_from_path("notes.txt", "my_folder/notes.txt") ``` #### `validate_credentials` ```python theme={null} validate_credentials(cls, value, field) ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> str ``` Writes to an S3 bucket. **Args:** * `path`: The key name. Each object in your bucket has a unique key (or key name). * `content`: What you are uploading to S3. 
Example: Write data to the path `dogs/small_dogs/havanese` in an S3 Bucket: ```python theme={null} from prefect_aws import MinIOCredentials from prefect_aws.s3 import S3Bucket minio_creds = MinIOCredentials( minio_root_user="minioadmin", minio_root_password="minioadmin", ) s3_bucket_block = S3Bucket( bucket_name="bucket", minio_credentials=minio_creds, bucket_folder="dogs/smalldogs", endpoint_url="http://localhost:9000", ) data = b"woof!" s3_havanese_path = s3_bucket_block.write_path(path="havanese", content=data) ``` # secrets_manager Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-secrets_manager # `prefect_aws.secrets_manager` Tasks for interacting with AWS Secrets Manager ## Functions ### `read_secret` ```python theme={null} read_secret(secret_name: str, aws_credentials: AwsCredentials, version_id: Optional[str] = None, version_stage: Optional[str] = None) -> Union[str, bytes] ``` Reads the value of a given secret from AWS Secrets Manager. **Args:** * `secret_name`: Name of stored secret. * `aws_credentials`: Credentials to use for authentication with AWS. * `version_id`: Specifies version of secret to read. Defaults to the most recent version if not given. * `version_stage`: Specifies the version stage of the secret to read. Defaults to AWSCURRENT if not given. **Returns:** * The secret value as a `str` or `bytes` depending on the format in which the secret was stored. ### `update_secret` ```python theme={null} update_secret(secret_name: str, secret_value: Union[str, bytes], aws_credentials: AwsCredentials, description: Optional[str] = None) -> Dict[str, str] ``` Updates the value of a given secret in AWS Secrets Manager. **Args:** * `secret_name`: Name of secret to update. * `secret_value`: Desired value of the secret. Can be either `str` or `bytes`. * `aws_credentials`: Credentials to use for authentication with AWS. * `description`: Desired description of the secret. 
**Returns:** * A dict containing the secret ARN (Amazon Resource Name), name, and current version ID. ```python theme={null} { "ARN": str, "Name": str, "VersionId": str } ``` ### `create_secret` ```python theme={null} create_secret(secret_name: str, secret_value: Union[str, bytes], aws_credentials: AwsCredentials, description: Optional[str] = None, tags: Optional[List[Dict[str, str]]] = None) -> Dict[str, str] ``` Creates a secret in AWS Secrets Manager. **Args:** * `secret_name`: The name of the secret to create. * `secret_value`: The value to store in the created secret. * `aws_credentials`: Credentials to use for authentication with AWS. * `description`: A description for the created secret. * `tags`: A list of tags to attach to the secret. Each tag should be specified as a dictionary in the following format: ```python theme={null} { "Key": str, "Value": str } ``` **Returns:** * A dict containing the secret ARN (Amazon Resource Name), name, and current version ID. ```python theme={null} { "ARN": str, "Name": str, "VersionId": str } ``` Example: Create a secret: ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.secrets_manager import create_secret @flow def example_create_secret(): aws_credentials = AwsCredentials( aws_access_key_id="access_key_id", aws_secret_access_key="secret_access_key" ) create_secret( secret_name="life_the_universe_and_everything", secret_value="42", aws_credentials=aws_credentials ) example_create_secret() ``` ### `delete_secret` ```python theme={null} delete_secret(secret_name: str, aws_credentials: AwsCredentials, recovery_window_in_days: int = 30, force_delete_without_recovery: bool = False) -> Dict[str, str] ``` Deletes a secret from AWS Secrets Manager. Secrets can be deleted immediately by setting `force_delete_without_recovery` to `True`. 
Otherwise, secrets will be marked for deletion and remain recoverable for the number of days specified in `recovery_window_in_days`. **Args:** * `secret_name`: Name of the secret to be deleted. * `aws_credentials`: Credentials to use for authentication with AWS. * `recovery_window_in_days`: Number of days a secret should be recoverable for before permanent deletion. Minimum window is 7 days and maximum window is 30 days. If `force_delete_without_recovery` is set to `True`, this value will be ignored. * `force_delete_without_recovery`: If `True`, the secret will be immediately deleted and will not be recoverable. **Returns:** * A dict containing the secret ARN (Amazon Resource Name), name, and deletion date of the secret. DeletionDate is the date and time of the delete request plus the number of days in `recovery_window_in_days`. ```python theme={null} { "ARN": str, "Name": str, "DeletionDate": datetime.datetime } ``` **Examples:** Delete a secret immediately: ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.secrets_manager import delete_secret @flow def example_delete_secret_immediately(): aws_credentials = AwsCredentials( aws_access_key_id="access_key_id", aws_secret_access_key="secret_access_key" ) delete_secret( secret_name="life_the_universe_and_everything", aws_credentials=aws_credentials, force_delete_without_recovery=True ) example_delete_secret_immediately() ``` Delete a secret with a 14 day recovery window: ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.secrets_manager import delete_secret @flow def example_delete_secret_with_recovery_window(): aws_credentials = AwsCredentials( aws_access_key_id="access_key_id", aws_secret_access_key="secret_access_key" ) delete_secret( secret_name="life_the_universe_and_everything", aws_credentials=aws_credentials, recovery_window_in_days=14 ) example_delete_secret_with_recovery_window() ``` ## Classes ### 
`AwsSecret` Manages a secret in AWS's Secrets Manager. **Attributes:** * `aws_credentials`: The credentials to use for authentication with AWS. * `secret_name`: The name of the secret. **Methods:** #### `adelete_secret` ```python theme={null} adelete_secret(self, recovery_window_in_days: int = 30, force_delete_without_recovery: bool = False, **delete_kwargs: Dict[str, Any]) -> str ``` Asynchronously deletes the secret from the secret storage service. **Args:** * `recovery_window_in_days`: The number of days to wait before permanently deleting the secret. Must be between 7 and 30 days. * `force_delete_without_recovery`: If True, the secret will be deleted immediately without a recovery window. * `**delete_kwargs`: Additional keyword arguments to pass to the delete\_secret method of the boto3 client. **Returns:** * The path that the secret was deleted from. **Examples:** Delete the secret with a recovery window of 15 days. ```python theme={null} aws_secret = AwsSecret.load("MY_BLOCK") await aws_secret.adelete_secret(recovery_window_in_days=15) ``` #### `aread_secret` ```python theme={null} aread_secret(self, version_id: Optional[str] = None, version_stage: Optional[str] = None, **read_kwargs: Any) -> bytes ``` Asynchronously reads the secret from the secret storage service. **Args:** * `version_id`: The version of the secret to read. If not provided, the latest version will be read. * `version_stage`: The version stage of the secret to read. If not provided, the latest version will be read. * `read_kwargs`: Additional keyword arguments to pass to the `get_secret_value` method of the boto3 client. **Returns:** * The secret data. **Examples:** Read a secret. 
```python theme={null} aws_secret = AwsSecret.load("MY_BLOCK") await aws_secret.aread_secret() ``` #### `awrite_secret` ```python theme={null} awrite_secret(self, secret_data: bytes, **put_or_create_secret_kwargs: Dict[str, Any]) -> str ``` Asynchronously writes the secret to the secret storage service as a SecretBinary; if it doesn't exist, it will be created. **Args:** * `secret_data`: The secret data to write. * `**put_or_create_secret_kwargs`: Additional keyword arguments to pass to the put\_secret\_value or create\_secret method of the boto3 client. **Returns:** * The path that the secret was written to. **Examples:** Write some secret data. ```python theme={null} aws_secret = AwsSecret.load("MY_BLOCK") await aws_secret.awrite_secret(b"my_secret_data") ``` #### `delete_secret` ```python theme={null} delete_secret(self, recovery_window_in_days: int = 30, force_delete_without_recovery: bool = False, **delete_kwargs: Dict[str, Any]) -> str ``` Deletes the secret from the secret storage service. **Args:** * `recovery_window_in_days`: The number of days to wait before permanently deleting the secret. Must be between 7 and 30 days. * `force_delete_without_recovery`: If True, the secret will be deleted immediately without a recovery window. * `**delete_kwargs`: Additional keyword arguments to pass to the delete\_secret method of the boto3 client. **Returns:** * The path that the secret was deleted from. **Examples:** Delete the secret with a recovery window of 15 days. ```python theme={null} aws_secret = AwsSecret.load("MY_BLOCK") aws_secret.delete_secret(recovery_window_in_days=15) ``` #### `read_secret` ```python theme={null} read_secret(self, version_id: Optional[str] = None, version_stage: Optional[str] = None, **read_kwargs: Any) -> bytes ``` Reads the secret from the secret storage service. **Args:** * `version_id`: The version of the secret to read. If not provided, the latest version will be read. 
* `version_stage`: The version stage of the secret to read. If not provided, the latest version will be read. * `read_kwargs`: Additional keyword arguments to pass to the `get_secret_value` method of the boto3 client. **Returns:** * The secret data. **Examples:** Reads a secret. ```python theme={null} aws_secret = AwsSecret.load("MY_BLOCK") aws_secret.read_secret() ``` #### `write_secret` ```python theme={null} write_secret(self, secret_data: bytes, **put_or_create_secret_kwargs: Dict[str, Any]) -> str ``` Writes the secret to the secret storage service as a SecretBinary; if it doesn't exist, it will be created. **Args:** * `secret_data`: The secret data to write. * `**put_or_create_secret_kwargs`: Additional keyword arguments to pass to the put\_secret\_value or create\_secret method of the boto3 client. **Returns:** * The path that the secret was written to. **Examples:** Write some secret data. ```python theme={null} aws_secret = AwsSecret.load("MY_BLOCK") aws_secret.write_secret(b"my_secret_data") ``` # settings Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-settings # `prefect_aws.settings` ## Classes ### `EcsObserverSqsSettings` ### `EcsObserverSettings` ### `EcsWorkerSettings` Settings for controlling ECS worker behavior. ### `EcsSettings` ### `RdsIAMSettings` Settings for controlling RDS IAM authentication. ### `RdsSettings` Settings for AWS RDS integration. ### `AwsSettings` # __init__ Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-templates-__init__ # `prefect_aws.templates` Template utilities for prefect-aws infrastructure deployment. # ecs Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-templates-ecs # `prefect_aws.templates.ecs` ECS infrastructure templates generated by `cdk synth`. See @infra for more info.
# utilities Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-utilities # `prefect_aws.utilities` Utilities for working with AWS services. ## Functions ### `hash_collection` ```python theme={null} hash_collection(collection) -> int ``` Use visit\_collection to transform and hash a collection. **Args:** * `collection`: The collection to hash. **Returns:** * The hash of the transformed collection. ### `ensure_path_exists` ```python theme={null} ensure_path_exists(doc: Union[Dict, List], path: List[str]) ``` Ensures the path exists in the document, creating empty dictionaries or lists as needed. **Args:** * `doc`: The current level of the document or sub-document. * `path`: The remaining path parts to ensure exist. ### `assemble_document_for_patches` ```python theme={null} assemble_document_for_patches(patches) ``` Assembles an initial document that can successfully accept the given JSON Patch operations. **Args:** * `patches`: A list of JSON Patch operations. **Returns:** * An initial document structured to accept the patches. Example: ```python theme={null} patches = [ {"op": "replace", "path": "/name", "value": "Jane"}, {"op": "add", "path": "/contact/address", "value": "123 Main St"}, {"op": "remove", "path": "/age"} ] initial_document = assemble_document_for_patches(patches) #output { "name": {}, "contact": {}, "age": {} } ``` # __init__ Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-workers-__init__ # `prefect_aws.workers` *This module is empty or contains only private/internal implementations.* # ecs_worker Source: https://docs.prefect.io/integrations/prefect-aws/api-ref/prefect_aws-workers-ecs_worker # `prefect_aws.workers.ecs_worker` Prefect worker for executing flow runs as ECS tasks. 
Get started by creating a work pool: ``` $ prefect work-pool create --type ecs my-ecs-pool ``` Then, you can start a worker for the pool: ``` $ prefect worker start --pool my-ecs-pool ``` It's common to deploy the worker as an ECS task as well. However, you can run the worker locally to get started. The worker may work without any additional configuration, but this depends on your specific AWS setup; we recommend opening the work pool editor in the UI to see the available options. By default, the worker will register a task definition for each flow run and run a task in your default ECS cluster using AWS Fargate. Fargate requires tasks to configure subnets, which we will infer from your default VPC. If you do not have a default VPC, you must provide a VPC ID or manually set up the network configuration for your tasks. Note that the worker caches task definitions for each deployment to avoid excessive registration. The worker will check that the cached task definition is compatible with your configuration before using it. The launch type option can be used to run your tasks in different modes. For example, `FARGATE_SPOT` can be used to use spot instances for your Fargate tasks, or `EC2` can be used to run your tasks on a cluster backed by EC2 instances. Generally, it is very useful to enable CloudWatch logging for your ECS tasks; this can help you debug task failures. To enable CloudWatch logging, you must provide an execution role ARN with permissions to create and write to log streams. See the `configure_cloudwatch_logs` field documentation for details. The worker can be configured to use an existing task definition by setting the task definition ARN variable or by providing a "taskDefinition" in the task run request. When a task definition is provided, the worker will never create a new task definition, which may result in variables that are templated into the task definition payload being ignored.
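The caching described above can be sketched as keying registrations on a stable hash of the task definition contents. This is an illustrative simplification, not the worker's actual implementation (the real worker also checks field-level compatibility before reuse); the ARN below is a made-up placeholder:

```python theme={null}
import hashlib
import json

# Illustrative cache: maps a content hash of a task definition to the ARN
# returned when it was registered.
_task_definition_cache: dict = {}


def cache_key(task_definition: dict) -> str:
    # Serialize with sorted keys so logically identical definitions
    # produce the same hash regardless of key order.
    canonical = json.dumps(task_definition, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


def lookup_or_register(task_definition: dict) -> str:
    key = cache_key(task_definition)
    if key not in _task_definition_cache:
        # A real worker would call ecs_client.register_task_definition(...)
        # here and store the returned ARN; this is a placeholder value.
        revision = len(_task_definition_cache) + 1
        _task_definition_cache[key] = (
            f"arn:aws:ecs:us-east-1:123456789012:task-definition/prefect:{revision}"
        )
    return _task_definition_cache[key]
```

Registering a logically identical definition a second time returns the cached ARN instead of creating a new revision.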
## Functions ### `parse_identifier` ```python theme={null} parse_identifier(identifier: str) -> ECSIdentifier ``` Splits identifier into its cluster and task components, e.g. input "cluster\_name::task\_arn" outputs ("cluster\_name", "task\_arn"). ### `mask_sensitive_env_values` ```python theme={null} mask_sensitive_env_values(task_run_request: dict, values: List[str], keep_length = 3, replace_with = '***') ``` ### `mask_api_key` ```python theme={null} mask_api_key(task_run_request) ``` ## Classes ### `ECSIdentifier` The identifier for a running ECS task. ### `CapacityProvider` The capacity provider strategy to use when running the task. ### `ECSJobConfiguration` Job configuration for an ECS worker. **Methods:** #### `at_least_one_container_is_essential` ```python theme={null} at_least_one_container_is_essential(self) -> Self ``` Ensures that at least one container will be marked as essential in the task definition. #### `cloudwatch_logs_options_requires_configure_cloudwatch_logs` ```python theme={null} cloudwatch_logs_options_requires_configure_cloudwatch_logs(self) -> Self ``` Enforces that `configure_cloudwatch_logs` is enabled when CloudWatch logs options are provided. #### `configure_cloudwatch_logs_requires_execution_role_arn` ```python theme={null} configure_cloudwatch_logs_requires_execution_role_arn(self) -> Self ``` Enforces that an execution role ARN is provided (or could be provided by a runtime task definition) when configuring logging. #### `container_name_default_from_task_definition` ```python theme={null} container_name_default_from_task_definition(self) -> Self ``` Infers the container name from the task definition if not provided. #### `json_template` ```python theme={null} json_template(cls) -> dict[str, Any] ``` Returns a dict with job configuration as keys and the corresponding templates as values. Defaults to using the job configuration parameter name as the template variable name. e.g.
```python theme={null} { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # `template2` specifically provided as the template } ``` #### `network_configuration_requires_vpc_id` ```python theme={null} network_configuration_requires_vpc_id(self) -> Self ``` Enforces a `vpc_id` is provided when custom network configuration mode is enabled for network settings. #### `prepare_for_flow_run` ```python theme={null} prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None, worker_id: 'UUID | None' = None) -> None ``` #### `task_run_request_requires_arn_if_no_task_definition_given` ```python theme={null} task_run_request_requires_arn_if_no_task_definition_given(self) -> Self ``` If no task definition is provided, a task definition ARN must be present on the task run request. ### `ECSVariables` Variables for templating an ECS job. ### `ECSWorkerResult` The result of an ECS job. ### `ECSWorker` A Prefect worker to run flow runs as ECS tasks. **Methods:** #### `kill_infrastructure` ```python theme={null} kill_infrastructure(self, infrastructure_pid: str, configuration: ECSJobConfiguration, grace_seconds: int = 30) -> None ``` Stop an ECS task. **Args:** * `infrastructure_pid`: The infrastructure identifier in format "cluster::task\_arn". * `configuration`: The job configuration used to connect to AWS. * `grace_seconds`: Not used for ECS (ECS handles graceful shutdown internally). **Raises:** * `InfrastructureNotFound`: If the task doesn't exist. #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: ECSJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None) -> ECSWorkerResult ``` Runs a given flow run on the current worker.
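The identifier and masking helpers documented above can be illustrated with simplified re-implementations. These are sketches for explanation, not the library code; in particular, the exact request structure traversed by the real `mask_sensitive_env_values` may differ from the ECS RunTask shape assumed here:

```python theme={null}
def parse_identifier(identifier: str) -> tuple:
    # "cluster::task_arn" -> ("cluster", "task_arn"). Split on the first
    # "::" only, since the task ARN itself contains single colons.
    cluster, task_arn = identifier.split("::", maxsplit=1)
    return cluster, task_arn


def mask_sensitive_env_values(task_run_request: dict, values, keep_length=3, replace_with="***"):
    # Walk the container overrides of an ECS RunTask-style request and
    # truncate any environment variable named in `values`, keeping a
    # short prefix so the value remains identifiable in logs.
    overrides = task_run_request.get("overrides", {})
    for container in overrides.get("containerOverrides", []):
        for env in container.get("environment", []):
            if env["name"] in values:
                env["value"] = env["value"][:keep_length] + replace_with
    return task_run_request
```

For example, masking `PREFECT_API_KEY` in a request leaves only the first three characters of the key visible.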
# Get to know the ECS worker Source: https://docs.prefect.io/integrations/prefect-aws/ecs-worker/index Deploy production-ready Prefect workers on AWS Elastic Container Service (ECS) for scalable, containerized flow execution. ECS workers provide robust infrastructure management with automatic scaling, high availability, and seamless AWS integration. ## Why use ECS for flow run execution? ECS (Elastic Container Service) is an excellent choice for executing Prefect flow runs in production environments: * **Production-ready scalability**: ECS automatically scales your infrastructure based on demand, efficiently managing container distribution across multiple instances * **Flexible compute options**: Choose between AWS Fargate for serverless execution or Amazon EC2 for faster job start times and additional control * **Native AWS integration**: Seamlessly connect with AWS services like IAM, CloudWatch, Secrets Manager, and VPC networking * **Containerized reliability**: Docker container support ensures reproducible deployments and consistent runtime environments * **Cost optimization**: Pay only for the compute resources you use with automatic scaling and spot instance support ## Architecture Overview ECS workers operate within your AWS infrastructure, providing secure and scalable flow execution. Prefect enables remote flow execution via workers and work pools - to learn more about these concepts see the [deployment docs](/v3/deploy/infrastructure-concepts/work-pools/). 
```mermaid theme={null} %%{ init: { 'theme': 'neutral', 'themeVariables': { 'margin': '10px' } } }%% flowchart TB subgraph ecs_cluster[ECS Cluster] subgraph ecs_service[ECS Service] td_worker[Worker Task Definition] --> |defines| prefect_worker[Prefect Worker] end prefect_worker -->|kicks off| ecs_task fr_task_definition[Flow Run Task Definition] subgraph ecs_task[ECS Task Execution] flow_run((Flow Run)) end fr_task_definition -->|defines| ecs_task end subgraph prefect_cloud[Prefect Cloud] work_pool[ECS Work Pool] end subgraph github[ECR] flow_code["Flow Code"] end flow_code --> |pulls| ecs_task prefect_worker -->|polls| work_pool work_pool -->|configures| fr_task_definition ``` ### Key Components * **ECS Worker**: Long-running service that polls work pools and manages flow run execution. Runs as an ECS Service for auto-recovery in case of failure * **Task Definitions**: Blueprint for ECS tasks that describes which Docker containers to run and their configuration * **ECS Cluster**: Provides the underlying compute capacity with auto-scaling capabilities * **Work Pools**: Typed according to infrastructure - flow runs in `ecs` work pools are executed as ECS tasks * **Flow Run Tasks**: Ephemeral ECS tasks that execute individual Prefect flows until completion ### How It Works 1. **Continuous Polling**: The ECS worker continuously polls your Prefect server or Prefect Cloud for scheduled flow runs 2. **Task Creation**: When work is available, the worker creates ECS task definitions based on work pool configuration 3. **Flow Execution**: Flow runs are launched as ECS tasks with appropriate resource allocation and configuration 4. **Auto-scaling**: ECS automatically manages container distribution and scaling based on demand 5. **Cleanup**: After flow completion, containers are cleaned up while the worker continues polling **ECS tasks ≠ Prefect tasks** An ECS task is **not** the same as a [Prefect task](/v3/develop/write-tasks). 
ECS tasks are groupings of containers that run within an ECS Cluster, defined by task definitions. They're ideal for ephemeral processes like Prefect flow runs. ## Deployment options ### With the `prefect-aws` CLI The fastest way to deploy production-ready ECS workers is by using the `prefect-aws` CLI: ```bash theme={null} prefect-aws ecs-worker deploy-service \ --work-pool-name my-ecs-pool \ --stack-name prefect-ecs-worker \ --existing-cluster-identifier my-ecs-cluster \ --existing-vpc-id vpc-12345678 \ --existing-subnet-ids subnet-12345,subnet-67890 \ --prefect-api-url https://api.prefect.cloud/api/accounts/.../workspaces/... \ --prefect-api-key your-api-key ``` This command creates a CloudFormation stack that provisions all the infrastructure required for a production-ready ECS worker service. **Pinning worker image versions** By default, the CLI uses `prefecthq/prefect-aws:latest` which includes both `prefect` and `prefect-aws` pre-installed. For more control over versions in production or to use a custom image, use the `--docker-image` flag: ```bash theme={null} prefect-aws ecs-worker deploy-service \ --docker-image prefecthq/prefect-aws:0.7.5-python3.12-prefect3.6.20 \ --work-pool-name my-ecs-pool \ ... ``` This ensures your worker uses specific versions of both `prefect-aws` and `prefect`, preventing unexpected behavior from automatic updates. 
**Key benefits:** * **One-command deployment**: Provisions complete infrastructure with a single command * **CloudFormation managed**: Infrastructure as code with rollback capabilities * **Auto-scaling configured**: Built-in scaling policies for production workloads * **Monitoring included**: CloudWatch logs and alarms pre-configured * **Production defaults**: Secure, optimized settings out of the box **Additional CLI commands:** * `prefect-aws ecs-worker list` - View all deployed stacks * `prefect-aws ecs-worker status <stack-name>` - Check deployment status * `prefect-aws ecs-worker delete <stack-name>` - Clean up infrastructure * `prefect-aws ecs-worker export-template` - Export CloudFormation templates for customization For detailed CLI options, run `prefect-aws ecs-worker deploy-service --help`. ### Manual deployment For users who want full control over their ECS infrastructure setup: **[Deploy manually →](/integrations/prefect-aws/ecs-worker/manual-deployment)** Step-by-step guide for creating ECS clusters, task definitions, and configuring workers from scratch. ## Prerequisites Before deploying ECS workers, ensure you have: * **AWS Account**: Active AWS account with appropriate permissions * **IAM Permissions**: Rights to create ECS clusters, task definitions, and IAM roles * **Docker Knowledge**: Basic understanding of containerization concepts * **Prefect Setup**: Active Prefect server or Prefect Cloud workspace ## Getting started 1. **Choose your deployment method**: Manual setup provides maximum flexibility, while infrastructure as code offers reproducible deployments 2. **Configure AWS credentials**: Set up IAM roles and permissions for secure AWS service access 3. **Create work pools**: Define work pool configurations that match your ECS infrastructure 4. **Deploy workers**: Launch ECS workers that will poll for and execute flow runs 5.
**Monitor and scale**: Use CloudWatch and ECS metrics to optimize performance ## Next steps * **[Manual Deployment Guide](/integrations/prefect-aws/ecs-worker/manual-deployment)** - Complete walkthrough for setting up ECS workers step-by-step * **[Work Pool Configuration](/v3/deploy/infrastructure-concepts/work-pools/)** - Learn about Prefect work pools and worker concepts * **[AWS ECS Documentation](https://docs.aws.amazon.com/ecs/)** - Official AWS documentation for ECS services * **[Prefect Cloud Push Work Pools](/v3/how-to-guides/deployment_infra/serverless)** - Serverless alternative to self-managed workers # How to manually deploy an ECS worker to an ECS cluster Source: https://docs.prefect.io/integrations/prefect-aws/ecs-worker/manual-deployment Step-by-step guide for manually setting up ECS infrastructure to run Prefect workers with full control over cluster configuration, IAM roles, and task definitions. This guide is valid for users of self-hosted Prefect server or Prefect Cloud users with a tier that allows hybrid work pools. This guide walks you through manually setting up ECS infrastructure to run Prefect workers. For architecture concepts and overview, see the [ECS Worker overview](/integrations/prefect-aws/ecs-worker). ## Prerequisites You will need the following to successfully complete this guide: * A Prefect server. You will need either: * [Prefect Cloud](https://app.prefect.cloud) account on Starter tier or above * [Prefect self-managed instance](/v3/concepts/server) * An AWS account with permissions to create: * IAM roles * IAM policies * Secrets in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) or [Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) * ECS task definitions * ECS services * The AWS CLI installed on your local machine. 
You can [download it from the AWS website](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). * An existing [ECS Cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html) * An existing [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) - this guide assumes the use of the default VPC. You can create an ECS cluster using the AWS CLI or the AWS Management Console. To create an ECS cluster using the AWS CLI, run the following command: ```bash wrap theme={null} aws ecs create-cluster --cluster-name my-ecs-cluster ``` No further configuration is required for this guide, as we will use the Fargate launch type and the default VPC. For production deployments, it is recommended that you create your own VPC with appropriate security policies based on your organization's recommendations. If you want to create a new VPC for this guide, follow the [VPC creation guide](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html). ## Create the Prefect ECS work pool First, create an ECS [work pool](/v3/deploy/infrastructure-concepts/work-pools/) for your deployments to use. You can do this either from the CLI or the Prefect UI. If doing so from the CLI, be sure to [authenticate with Prefect Cloud](/v3/how-to-guides/cloud/connect-to-cloud) or run a local Prefect server instance. Run the following command to create a new ECS work pool named `my-ecs-pool`: ```bash theme={null} prefect work-pool create --type ecs my-ecs-pool ``` 1. Navigate to the **Work Pools** page in the Prefect UI. 2. Click the `+` button to the right of the **Work Pool** page header. 3. Select **AWS Elastic Container Service**. In Prefect Cloud, this will be under the **Hybrid** section. Because this guide uses Fargate as the capacity provider, this step requires no further action.
## Create a Secret for the Prefect API key If you are using a Prefect self-hosted server and have authentication disabled, you can skip this step. The Prefect worker needs to authenticate with your Prefect server to poll the work pool for flow runs. For authentication, you must provide a Bearer token (`PREFECT_API_KEY`) or Basic Auth string (`PREFECT_API_AUTH_STRING`) to the Prefect API. As a security best practice, we recommend you store your Prefect API key in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) or [Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html). You can find your Prefect API key several ways: If you are on a paid plan you can create a [service account](/v3/how-to-guides/cloud/manage-users/service-accounts) for the worker. If you are on a free plan, you can use a user's API key. To find your API key, use the Prefect CLI: ```bash wrap theme={null} # If not already authenticated, log in first prefect cloud login prefect config view --show-secrets ``` There is no concept of a `PREFECT_API_KEY` in a self-hosted Prefect server. Instead, you use the `PREFECT_API_AUTH_STRING` containing your basic auth credentials (if your server uses [basic authentication](/v3/advanced/security-settings#basic-authentication)). You can find this information on the Settings page for your Prefect server. Choose between AWS Secrets Manager or Systems Manager Parameter Store to store your Prefect API key. Both services allow you to securely store and manage sensitive information such as API keys, passwords, and other secrets. 
To create a Secret in AWS Secrets Manager, use the [`aws secretsmanager create-secret`](https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/create-secret.html) command: ```bash wrap theme={null} aws secretsmanager create-secret --name PrefectECSWorkerAPIKey --secret-string '<your-prefect-api-key>' ``` Make a note of the Amazon Resource Name (ARN) of the secret that is returned in the command output. You will need it later when configuring the ECS worker task definition. To create a SecureString parameter in AWS Systems Manager Parameter Store, use the [`aws ssm put-parameter`](https://docs.aws.amazon.com/cli/latest/reference/ssm/put-parameter.html) command: ```bash wrap theme={null} aws ssm put-parameter --name "/prefect/my-ecs-pool/api/key" --value "<your-prefect-api-key>" --type "SecureString" ``` You may customize the parameter hierarchy and name to suit your needs. In this example, we've used `/prefect/my-ecs-pool/api/key`, but any parameter name works. Your ECS task execution role will need to be able to read this value. Make a note of the name you specified for the parameter, as you will need it later when configuring the ECS worker.
This is called a [service-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create-service-linked-role.html). The trust policy is a JSON document that specifies which AWS service can assume the role. Save this policy to a file, such as `trust-policy.json`: ```json theme={null} { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ecs-tasks.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ``` Alternately, you can download this file using the following command: ```bash curl wrap theme={null} curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/trust-policy.json ``` ```bash wget wrap theme={null} wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/trust-policy.json ``` ### Create the IAM roles Now, we will create the IAM roles that will be used by the ECS worker. #### Create the ECS task execution role The ECS task execution role will be used to start the ECS worker task. We will assign it a minimal set of permissions to allow the worker to pull images from ECR and publish logs to CloudWatch. Create the role using the [`aws iam create-role`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-role.html) command: ```bash wrap theme={null} aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://trust-policy.json ``` Make a note of the ARN (Amazon Resource Name) of the role that is returned in the command output. You will need it later when creating the ECS task definition. The following is a minimal policy that grants the necessary permissions for ECS to obtain the current value of the secret and inject it into the ECS task. 
Save this policy to a file, such as `secret-policy.json`: ```json theme={null} { "Version": "2012-10-17", "Statement": [ { "Action": [ "secretsmanager:GetSecretValue" ], "Effect": "Allow", "Resource": "arn:aws:secretsmanager:<region>:<aws-account-id>:secret:PrefectECSWorkerAPIKey" } ] } ``` Alternately, you can download this file using the following command: ```bash curl wrap theme={null} curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/secrets-manager/secret-policy.json ``` ```bash wget wrap theme={null} wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/secrets-manager/secret-policy.json ``` The following is a minimal policy that grants the necessary permissions for ECS to obtain the current value of the parameter and inject it into the ECS task. Save this policy to a file, such as `secret-policy.json`: ```json theme={null} { "Version": "2012-10-17", "Statement": [ { "Action": [ "ssm:GetParameters" ], "Effect": "Allow", "Resource": "arn:aws:ssm:<region>:<aws-account-id>:parameter/prefect/my-ecs-pool/api/key" } ] } ``` Alternately, you can download this file using the following command: ```bash curl wrap theme={null} curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/ssm-parameter-store/secret-policy.json ``` ```bash wget wrap theme={null} wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/secrets/ssm-parameter-store/secret-policy.json ``` If your secret is encrypted with a customer-managed key (CMK) in AWS Key Management Service (KMS), you will also need to add the `kms:Decrypt` permission to the policy.
For example: ```json focus={11-17} theme={null} { "Version": "2012-10-17", "Statement": [ { "Action": [ "secretsmanager:GetSecretValue" ], "Effect": "Allow", "Resource": "arn:aws:secretsmanager:<region>:<aws-account-id>:secret:PrefectECSWorkerAPIKey" }, { "Action": [ "kms:Decrypt" ], "Effect": "Allow", "Resource": "arn:aws:kms:<region>:<aws-account-id>:key/<key-id>" } ] } ``` Create a new IAM policy named `ecsTaskExecutionPolicy` using the policy document you just created. ```bash wrap theme={null} aws iam create-policy --policy-name ecsTaskExecutionPolicy --policy-document file://secret-policy.json ``` The `AmazonECSTaskExecutionRolePolicy` managed policy grants the minimum permissions necessary for starting ECS tasks. [See here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) for other common execution role permissions. Attach this policy to your task execution role using the [`aws iam attach-role-policy`](https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html) command: ```bash wrap theme={null} aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy ``` Attach the custom policy you created in the previous step so that the ECS task can access the Prefect API key stored in AWS Secrets Manager or Systems Manager Parameter Store: ```bash wrap theme={null} aws iam put-role-policy --role-name ecsTaskExecutionRole --policy-name PrefectECSWorkerSecretPolicy --policy-document file://secret-policy.json ``` #### Create the worker ECS task role The worker ECS task role will be used by the Prefect worker to interact with the AWS API to run flows as ECS containers. This role will require the ability to describe, register, and deregister ECS task definitions, as well as the ability to start and stop ECS tasks. Use the following command to create the role. The same trust policy is also used for this role.
```bash wrap theme={null} aws iam create-role --role-name ecsTaskRole --assume-role-policy-document file://trust-policy.json ``` The following is a minimal policy that grants the necessary permissions for the Prefect ECS worker to run your flows as ECS tasks. Save this policy to a file, such as `worker-policy.json`: ```json theme={null} { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:DescribeSubnets", "ec2:DescribeVpcs", "ecs:DeregisterTaskDefinition", "ecs:DescribeTaskDefinition", "ecs:DescribeTasks", "ecs:RegisterTaskDefinition", "ecs:RunTask", "ecs:StopTask", "ecs:TagResource", "iam:PassRole", "logs:GetLogEvents", "logs:PutLogEvents" ], "Effect": "Allow", "Resource": "*" } ] } ``` Alternately, you can download this file using the following command: ```bash curl wrap theme={null} curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/worker-policy.json ``` ```bash wget wrap theme={null} wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/worker-policy.json ``` Create a new IAM policy named `ecsTaskPolicy` using the policy document you just created. ```bash wrap theme={null} aws iam create-policy --policy-name ecsTaskPolicy --policy-document file://worker-policy.json ``` Attach the custom `ecsTaskPolicy` to the `ecsTaskRole` so that the Prefect worker can dispatch flows to ECS: ```bash wrap theme={null} aws iam attach-role-policy --role-name ecsTaskRole --policy-arn arn:aws:iam::<aws-account-id>:policy/ecsTaskPolicy ``` Replace `<aws-account-id>` with your AWS account ID. #### Create an ECS task role for Prefect flows This step is optional, but recommended if your flows require access to other AWS services. Depending on the requirements of your flows, it is advised to create a [separate role for your ECS tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).
This role will contain the permissions required by the ECS tasks in which your flows will run. For example, if your workflow loads data into an S3 bucket, you would need a role with additional permissions to access S3. Use the following command to create the role: ```bash wrap theme={null} aws iam create-role --role-name PrefectECSRunnerTaskRole --assume-role-policy-document file://trust-policy.json ``` The following is an example policy that allows reading/writing to an S3 bucket named `prefect-demo-bucket`. Save this policy to a file, such as `runner-task-policy.json`: ```json theme={null} { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource": "arn:aws:s3:::prefect-demo-bucket" }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:GetObjectAcl", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::prefect-demo-bucket/*" } ] } ``` Create a new IAM policy named `PrefectECSRunnerTaskPolicy` using the policy document you just created: ```bash wrap theme={null} aws iam create-policy --policy-name PrefectECSRunnerTaskPolicy --policy-document file://runner-task-policy.json ``` Attach the new `PrefectECSRunnerTaskPolicy` IAM policy to the `PrefectECSRunnerTaskRole` IAM role: ```bash wrap theme={null} aws iam attach-role-policy --role-name PrefectECSRunnerTaskRole --policy-arn arn:aws:iam::<aws-account-id>:policy/PrefectECSRunnerTaskPolicy ``` Replace `<aws-account-id>` with your AWS account ID. Finally, add the ARN of the `PrefectECSRunnerTaskRole` to your ECS work pool. This can be configured two ways: 1. Globally for all flows in the work pool by setting the **Task Role ARN (Optional)** field in the work pool configuration. 2. On a per-deployment basis by specifying the `task_role_arn` job variable in the deployment configuration.
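As a sketch of the per-deployment option, the role can be passed through job variables. The flow, deployment, and role names below are example values, and the `deploy` call is shown as a comment because it requires a flow object and a reachable Prefect API:

```python theme={null}
# Per-deployment job variables override the corresponding work pool defaults.
# The role ARN below is an example value.
job_variables = {
    "task_role_arn": "arn:aws:iam::123456789012:role/PrefectECSRunnerTaskRole",
}

# With a flow defined elsewhere, the role would be applied at deploy time:
# my_flow.deploy(
#     name="my-ecs-deployment",
#     work_pool_name="my-ecs-pool",
#     job_variables=job_variables,
# )
```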
## Configure event monitoring infrastructure

To enable the ECS worker to monitor and update the status of flow runs, we need to set up SQS queues and EventBridge rules that capture ECS task state changes. This infrastructure allows the worker to:

* Track when ECS tasks (flow runs) start, stop, or fail
* Update flow run states in real time based on ECS task events
* Provide better observability and status reporting for your workflows

This step sets up the same event monitoring infrastructure that the `prefect-aws ecs-worker deploy-events` command creates automatically. The worker uses the environment variable `PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME` to discover and read from the events queue.

Create an SQS queue to receive ECS task state change events and a dead-letter queue for handling failed messages. First, create the dead-letter queue:

```bash theme={null}
aws sqs create-queue --queue-name my-ecs-pool-events-dlq --attributes MessageRetentionPeriod=1209600,VisibilityTimeout=60
```

Get the ARN of the dead-letter queue:

```bash theme={null}
aws sqs get-queue-attributes --queue-url $(aws sqs get-queue-url --queue-name my-ecs-pool-events-dlq --query 'QueueUrl' --output text) --attribute-names QueueArn --query 'Attributes.QueueArn' --output text
```

Now create the main queue with the dead-letter queue configured:

```bash theme={null}
aws sqs create-queue \
  --queue-name my-ecs-pool-events \
  --attributes '{
    "MessageRetentionPeriod": "604800",
    "VisibilityTimeout": "300",
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"<DLQ-ARN>\",\"maxReceiveCount\":3}"
  }'
```

Replace `<DLQ-ARN>` with the ARN of the dead-letter queue from the previous step, and `my-ecs-pool` with your work pool name. The queue name should follow the pattern `{work-pool-name}-events` for consistency with the automated deployment.
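Getting the escaping right for `RedrivePolicy` by hand is error-prone because it is a JSON document embedded as a string inside another JSON document. A small sketch that builds the same attributes map with Python's `json` module (the DLQ ARN below is a hypothetical stand-in for the ARN returned in the previous step):

```python
import json

# Hypothetical dead-letter queue ARN standing in for the real one
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:my-ecs-pool-events-dlq"

# RedrivePolicy is itself a JSON document embedded as a *string* inside
# the attributes map, so it must be serialized twice.
redrive_policy = {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": 3}
attributes = {
    "MessageRetentionPeriod": "604800",  # 7 days, in seconds
    "VisibilityTimeout": "300",
    "RedrivePolicy": json.dumps(redrive_policy),
}

# The serialized map is the value passed to `aws sqs create-queue --attributes`
print(json.dumps(attributes, indent=2))
```

Serializing with `json.dumps` twice produces exactly the nested quoting shown in the CLI command above.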
Allow EventBridge to send messages to your SQS queue by updating the queue policy:

```bash theme={null}
aws sqs set-queue-attributes \
  --queue-url $(aws sqs get-queue-url --queue-name my-ecs-pool-events --query 'QueueUrl' --output text) \
  --attributes '{"Policy":"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":[\"sqs:SendMessage\",\"sqs:GetQueueAttributes\",\"sqs:GetQueueUrl\"],\"Resource\":\"<QUEUE-ARN>\"}]}"}'
```

Replace `<QUEUE-ARN>` with the ARN of the queue created in the previous step.

Create an EventBridge rule to capture ECS task state changes and send them to the SQS queue:

```bash wrap theme={null}
aws events put-rule \
  --name my-ecs-pool-task-state-changes \
  --event-pattern '{
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
      "clusterArn": ["arn:aws:ecs:<REGION>:<ACCOUNT-ID>:cluster/<CLUSTER-NAME>"]
    }
  }' \
  --description "Capture ECS task state changes for Prefect worker" \
  --state ENABLED
```

Replace:

* `<REGION>` with your AWS region
* `<ACCOUNT-ID>` with your AWS account ID
* `<CLUSTER-NAME>` with your ECS cluster name
* `my-ecs-pool` with your work pool name

You can find your cluster ARN using:

```bash wrap theme={null}
aws ecs describe-clusters --clusters <CLUSTER-NAME> --query 'clusters[0].clusterArn' --output text
```

Get the queue ARN and add it as a target for the EventBridge rule:

```bash theme={null}
aws events put-targets \
  --rule my-ecs-pool-task-state-changes \
  --targets "Id=1,Arn=<QUEUE-ARN>"
```

Replace `<QUEUE-ARN>` with the ARN of the queue created in step 1.

Add SQS permissions to the worker task role created earlier. Create a file named `sqs-policy.json`:

```json wrap theme={null}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:ChangeMessageVisibility"
      ],
      "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT-ID>:my-ecs-pool-events"
    }
  ]
}
```

Replace `<REGION>`, `<ACCOUNT-ID>`, and `my-ecs-pool-events` with your values.
Apply the policy to the worker task role:

```bash wrap theme={null}
aws iam put-role-policy \
  --role-name ecsTaskRole \
  --policy-name EcsWorkerSqsPolicy \
  --policy-document file://sqs-policy.json
```

## Creating the ECS worker service

Now that all the AWS IAM roles and event monitoring infrastructure have been created, we can deploy the Prefect worker to the ECS cluster. The task definition below will be used to run the Prefect worker in an ECS task. Ensure you replace the placeholders for:

* `<EXECUTION-ROLE-ARN>` with the ARN of the `ecsTaskExecutionRole` you created in Step 2. You can find it using the following command:

  ```bash wrap theme={null}
  aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.Arn' --output text
  ```

* `<TASK-ROLE-ARN>` with the ARN of the `ecsTaskRole` you created in Step 2. You can find it using the following command:

  ```bash wrap theme={null}
  aws iam get-role --role-name ecsTaskRole --query 'Role.Arn' --output text
  ```

* `<PREFECT-API-URL>` with the URL of your Prefect server. You can find your Prefect API URL several ways. If you have the Prefect CLI installed, run the following command to view your current Prefect profile's API URL:

  ```bash theme={null}
  prefect config view
  ```

  To manually construct the Prefect Cloud API URL, use the following format:

  ```text wrap theme={null}
  https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>
  ```

* `<PREFECT-API-KEY-ARN>` with the ARN of the resource from Secrets Manager or Systems Manager Parameter Store.

* `my-ecs-pool-events` in the `PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME` environment variable with your actual queue name from the event monitoring setup.
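If you'd rather assemble the Prefect Cloud URL in code, the format above is a simple template; a minimal helper (the UUIDs here are placeholders for your real account and workspace IDs):

```python
def prefect_cloud_api_url(account_id: str, workspace_id: str) -> str:
    """Assemble the Prefect Cloud API URL from account and workspace IDs."""
    return f"https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}"

# Placeholder UUIDs for illustration
url = prefect_cloud_api_url(
    "11111111-2222-3333-4444-555555555555",
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
)
print(url)
```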
Your secret ARN depends on the service you are using. For Secrets Manager, you can find the ARN of your secret using the following command:

```bash wrap theme={null}
aws secretsmanager describe-secret --secret-id PrefectECSWorkerAPIKey --query 'ARN' --output text
```

For Parameter Store, you can find the ARN of your parameter using the following command:

```bash wrap theme={null}
aws ssm get-parameter --name "/prefect/my-ecs-pool/api/key" --query 'Parameter.ARN' --output text
```

As `PREFECT_API_KEY` is not used with a self-hosted Prefect server, you will need to replace the `PREFECT_API_KEY` environment variable in the task definition secrets with `PREFECT_API_AUTH_STRING`:

```json focus={28-35} theme={null}
{
  "family": "prefect-worker-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "<EXECUTION-ROLE-ARN>",
  "taskRoleArn": "<TASK-ROLE-ARN>",
  "containerDefinitions": [
    {
      "name": "prefect-worker",
      "image": "prefecthq/prefect-aws:latest",
      "cpu": 512,
      "memory": 1024,
      "essential": true,
      "command": [
        "/bin/sh",
        "-c",
        "prefect worker start --pool my-ecs-pool --type ecs"
      ],
      "environment": [
        {
          "name": "PREFECT_API_URL",
          "value": "<PREFECT-API-URL>"
        },
        {
          "name": "PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME",
          "value": "my-ecs-pool-events"
        }
      ],
      "secrets": [
        {
          "name": "PREFECT_API_KEY", // [!code --]
          "name": "PREFECT_API_AUTH_STRING", // [!code ++]
          "valueFrom": "<PREFECT-API-KEY-ARN>"
        }
      ]
    }
  ]
}
```

Save the following JSON to a file named `task-definition.json`:

```json wrap theme={null}
{
  "family": "prefect-worker-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "<EXECUTION-ROLE-ARN>",
  "taskRoleArn": "<TASK-ROLE-ARN>",
  "containerDefinitions": [
    {
      "name": "prefect-worker",
      "image": "prefecthq/prefect-aws:latest",
      "cpu": 512,
      "memory": 1024,
      "essential": true,
      "command": [
        "/bin/sh",
        "-c",
        "prefect worker start --pool my-ecs-pool --type ecs"
      ],
      "environment": [
        {
          "name": "PREFECT_API_URL",
          "value": "<PREFECT-API-URL>"
        },
        {
          "name": "PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME",
          "value": "my-ecs-pool-events"
        }
      ],
      "secrets": [
        {
          "name": "PREFECT_API_KEY",
          "valueFrom": "<PREFECT-API-KEY-ARN>"
        }
      ]
    }
  ]
}
```

This example uses `prefecthq/prefect-aws:latest`, which includes both `prefect` and `prefect-aws` pre-installed. For production deployments, consider pinning to a specific version tag (e.g., `prefecthq/prefect-aws:0.7.5-python3.12-prefect3.6.20`).

Alternatively, you can download this file using the following command:

```bash curl wrap theme={null}
curl -O https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/task-definition.json
```

```bash wget wrap theme={null}
wget https://raw.githubusercontent.com/PrefectHQ/prefect/refs/heads/main/docs/integrations/prefect-aws/ecs/iam/task-definition.json
```

Notice that the CPU and memory allocations are relatively small. The worker's main responsibility is to submit work through API calls to AWS, *not* to execute your Prefect flow code.

To avoid hardcoding your API key into the task definition JSON, see [how to add sensitive data using AWS Secrets Manager to the container definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-tutorial.html#specifying-sensitive-data-tutorial-create-taskdef).

Before creating a service, you first need to register a task definition. You can do that using the [`register-task-definition` command](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html):

```bash wrap theme={null}
aws ecs register-task-definition --cli-input-json file://task-definition.json
```

Replace `task-definition.json` with the name of your task definition file.

Finally, create a service that will manage your Prefect worker. Ensure you replace the placeholders for:

* `<CLUSTER-NAME>` with the name of your ECS cluster.
* `<TASK-DEFINITION-ARN>` with the ARN of the task definition you just registered.
* `<SUBNET-IDS>` with a comma-separated list of your VPC subnet IDs.
* `<SECURITY-GROUP-IDS>` with a comma-separated list of your VPC security group IDs.
If you are using the default VPC, you will need to gather some information about it to use in the next steps. We will use the default VPC for this guide.

To find the default VPC ID, run the following command:

```bash wrap theme={null}
aws ec2 describe-vpcs --filters "Name=isDefault,Values=true" --query "Vpcs[0].VpcId" --output text
```

This outputs the VPC ID (e.g. `vpc-abcdef01`) of the default VPC, which you can use in the next steps in this section.

To find the subnets associated with the default VPC:

```bash wrap theme={null}
aws ec2 describe-subnets --filters "Name=vpc-id,Values=<VPC-ID>" --query "Subnets[*].SubnetId" --output text
```

This outputs a list of available subnets (e.g. `subnet-12345678 subnet-23456789`).

Finally, we need the security group ID for the default VPC:

```bash wrap theme={null}
aws ec2 describe-security-groups --filters "Name=vpc-id,Values=<VPC-ID>" "Name=group-name,Values=default" --query "SecurityGroups[*].GroupId" --output text
```

This outputs the security group ID (e.g. `sg-12345678`) of the default security group. Copy the subnet IDs and security group ID for use in Step 3.

Use the [`aws ecs create-service`](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html) command to create an ECS service running on Fargate for the Prefect worker:

```bash wrap theme={null}
aws ecs create-service --service-name prefect-worker-service --cluster <CLUSTER-NAME> --task-definition <TASK-DEFINITION-ARN> --launch-type FARGATE --desired-count 1 --network-configuration "awsvpcConfiguration={subnets=[<SUBNET-IDS>],securityGroups=[<SECURITY-GROUP-IDS>],assignPublicIp='ENABLED'}"
```

The work pool page in the Prefect UI allows you to check the health of your workers - make sure your new worker is live! It may take a few minutes for the worker to come online after creating the service. Refer to the [troubleshooting](#troubleshooting) section for further assistance if the worker isn't online.
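The `--network-configuration` argument uses the AWS CLI's shorthand syntax, which is easy to mistype when lists are involved. A small helper that assembles the string from Python lists (the helper itself is illustrative, not part of any SDK):

```python
def awsvpc_configuration(subnets, security_groups, assign_public_ip="ENABLED"):
    """Build the awsvpcConfiguration shorthand string for `aws ecs create-service`."""
    return (
        "awsvpcConfiguration={"
        f"subnets=[{','.join(subnets)}],"
        f"securityGroups=[{','.join(security_groups)}],"
        f"assignPublicIp='{assign_public_ip}'"
        "}"
    )

# Example IDs from the default-VPC lookups above
config = awsvpc_configuration(["subnet-12345678", "subnet-23456789"], ["sg-12345678"])
print(config)
```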
## Configure work pool defaults Now that your infrastructure is deployed, you should update your ECS work pool configuration with the resource identifiers so they don't need to be specified on every deployment. Navigate to your work pool in the Prefect UI and update the following fields in the **Infrastructure** tab: * **Cluster ARN**: Set to your ECS cluster ARN (e.g., `arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster`) * **VPC ID**: Set to your VPC ID (e.g., `vpc-12345678`) * **Subnets**: Add your subnet IDs (e.g., `subnet-12345678,subnet-87654321`) * **Execution Role ARN**: Set to the task execution role ARN (e.g., `arn:aws:iam::123456789012:role/ecsTaskExecutionRole`) These settings will be used as defaults for all deployments using this work pool, but can be overridden per deployment if needed. You can also update the work pool configuration programmatically using the Prefect API: ```python theme={null} from prefect.client.schemas.objects import WorkPoolUpdate from prefect import get_client async def update_work_pool(): async with get_client() as client: work_pool = await client.read_work_pool("my-ecs-pool") # Update base job template variables base_template = work_pool.base_job_template variables = base_template.get("variables", {}) properties = variables.get("properties", {}) # Update infrastructure defaults properties["cluster"] = { "default": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster" } properties["vpc_id"] = { "default": "vpc-12345678" } properties["execution_role_arn"] = { "default": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole" } # Update network configuration network_config = properties.setdefault("network_configuration", {}) network_props = network_config.setdefault("properties", {}) awsvpc_config = network_props.setdefault("awsvpcConfiguration", {}) awsvpc_props = awsvpc_config.setdefault("properties", {}) awsvpc_props["subnets"] = { "default": ["subnet-12345678", "subnet-87654321"] } # Update work pool 
variables["properties"] = properties base_template["variables"] = variables await client.update_work_pool( "my-ecs-pool", WorkPoolUpdate(base_job_template=base_template) ) # Run the update import asyncio asyncio.run(update_work_pool()) ``` Replace the ARNs and IDs with your actual resource identifiers. ## Customize the base job template The ECS work pool's base job template defines both the available job variables and how they map to the ECS task definition. You can customize this template to expose additional configuration options that can be overridden per-deployment. ### Add custom variables to the schema To add a new variable that can be set per-deployment, you need to: 1. Add the variable to the `variables` section of the base job template 2. Reference it in the `job_configuration` section using `{{ variable_name }}` syntax For example, to allow per-deployment customization of container secrets (for injecting values from AWS Secrets Manager or Parameter Store): ```python theme={null} from prefect.client.schemas.actions import WorkPoolUpdate from prefect import get_client import asyncio async def add_task_definition_variable(): async with get_client() as client: work_pool = await client.read_work_pool("my-ecs-pool") template = work_pool.base_job_template # Add task_definition to the variables schema template["variables"]["properties"]["task_definition"] = { "type": "object", "title": "Task Definition", "description": "Custom ECS task definition overrides", "default": {} } # Reference the variable in job_configuration template["job_configuration"]["task_definition"] = "{{ task_definition }}" await client.update_work_pool( work_pool_name="my-ecs-pool", work_pool=WorkPoolUpdate(base_job_template=template) ) asyncio.run(add_task_definition_variable()) ``` Once the variable is added to the schema, you can set it per-deployment in your `prefect.yaml`: ```yaml theme={null} deployments: - name: my-deployment entrypoint: my_flow.py:my_flow work_pool: name: my-ecs-pool 
job_variables: task_definition: containerDefinitions: - name: prefect secrets: - name: MY_SECRET_VAR valueFrom: arn:aws:secretsmanager:us-east-1:123456789:secret:my-secret - name: DATABASE_PASSWORD valueFrom: arn:aws:ssm:us-east-1:123456789:parameter/db/password cpu: "1024" memory: "2048" family: my-task-family ``` Variables defined in the `variables` section must be explicitly referenced in `job_configuration` using `{{ variable_name }}` syntax to take effect. If you add a variable but don't reference it in the job configuration, it will have no effect. You can also add more granular variables. For example, instead of exposing the entire `task_definition`, you could add just a `container_secrets` variable and reference it as `{{ container_secrets }}` within the `containerDefinitions` section of your job configuration. ## Deploy a flow run to your ECS work pool This guide uses the [AWS Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/) to store a Docker image containing your flow code. To do this, we will write a flow, then deploy it using build and push steps that copy flow code into a Docker image and push that image to an ECR repository. ```python my_flow.py lines icon="python" theme={null} from prefect import flow from prefect.logging import get_run_logger @flow def my_flow(): logger = get_run_logger() logger.info("Hello from ECS!!") if __name__ == "__main__": my_flow() ``` Use the [`aws ecr create-repository`](https://docs.aws.amazon.com/cli/latest/reference/ecr/create-repository.html) command to create an ECR repository. The name you choose for your repository will be reused in the next step when defining your Prefect deployment. 
```bash wrap theme={null}
aws ecr create-repository --repository-name <REPOSITORY-NAME>
```

To have Prefect build your image when deploying your flow, create a `prefect.yaml` file with the following specification:

```yaml prefect.yaml lines theme={null}
name: ecs-worker-guide

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: /opt/prefect/ecs-worker-guide

# build section allows you to manage and build docker images
build:
  - prefect_docker.deployments.steps.build_docker_image:
      id: build_image
      requires: prefect-docker>=0.3.1
      image_name: <IMAGE-REGISTRY>/<REPOSITORY-NAME>
      tag: latest
      dockerfile: auto

# push section allows you to manage if and how this project is uploaded to remote locations
push:
  - prefect_docker.deployments.steps.push_docker_image:
      requires: prefect-docker>=0.3.1
      image_name: '{{ build_image.image_name }}'
      tag: '{{ build_image.tag }}'

# the deployments section allows you to provide configuration for deploying flows
deployments:
  - name: my_ecs_deployment
    version:
    tags: []
    description:
    entrypoint: my_flow.py:my_flow
    parameters: {}
    work_pool:
      name: my-ecs-pool
      work_queue_name:
      job_variables:
        image: '{{ build_image.image }}'
    schedules: []
```

[Deploy](https://docs.prefect.io/deploy/serve-flows/#create-a-deployment) the flow to Prefect Cloud or your self-managed server instance:

```bash theme={null}
prefect deploy my_flow.py:my_flow -n my_ecs_deployment
```

Find the deployment in the UI and click the **Quick Run** button!

## Troubleshooting

If your worker does not appear in the Prefect UI, check the following:

* Ensure that the ECS service is running and that the task definition is registered correctly.
* Check the ECS service logs in CloudWatch to see if there are any errors.
* Verify that the IAM roles have the correct permissions.
* Ensure that the `PREFECT_API_URL` and `PREFECT_API_KEY` environment variables are set correctly in the task definition.
* For self-hosted Prefect servers, ensure that you replaced `PREFECT_API_KEY` from the example with `PREFECT_API_AUTH_STRING` in the task definition.
* Ensure your Prefect ECS worker has network connectivity to the Prefect API. If you are using a private VPC, ensure that there is a NAT gateway or internet gateway configured to allow outbound traffic to the Prefect API. ### Event monitoring issues If flow runs are not updating their status properly, check the event monitoring setup: * Verify the SQS queue was created and is receiving messages from EventBridge * Check that the EventBridge rule is active and properly configured for your ECS cluster * Ensure the worker task role has the necessary SQS permissions (`sqs:ReceiveMessage`, `sqs:DeleteMessage`, etc.) * Verify the `PREFECT_INTEGRATIONS_AWS_ECS_OBSERVER_SQS_QUEUE_NAME` environment variable is set correctly in the worker task definition * Check CloudWatch logs for any SQS-related errors in the worker logs ## Next steps Now that you are confident your ECS worker is healthy, you can experiment with different work pool configurations. * Do your flow runs require higher `CPU`? * Would an EC2 `Launch Type` speed up your flow run execution? These infrastructure configuration values can be set on your ECS work pool or they can be overridden on the deployment level through [job\_variables](/v3/deploy/infrastructure-concepts/customize/) if desired. # prefect-aws Source: https://docs.prefect.io/integrations/prefect-aws/index Build production-ready data workflows that seamlessly integrate with AWS services. `prefect-aws` provides battle-tested blocks, tasks, and infrastructure integrations for AWS, including ECS orchestration, S3 storage, Secrets Manager, Lambda functions, Batch computing, and Glue ETL operations. ## Why use prefect-aws? 
`prefect-aws` offers significant advantages over direct boto3 integration:

* **Production-ready integrations**: Pre-built, tested components that handle common AWS patterns and edge cases
* **Unified credential management**: Secure, centralized authentication that works consistently across all AWS services
* **Built-in observability**: Automatic logging, monitoring, and state tracking for all AWS operations
* **Infrastructure as code**: Deploy and scale workflows on AWS ECS with minimal configuration

## Getting started

### Prerequisites

* An [AWS account](https://aws.amazon.com/account/) and the necessary permissions to access desired services.

### Install prefect-aws

The following command installs a version of `prefect-aws` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it installs the newest version of `prefect` as well.

```bash theme={null}
pip install "prefect[aws]"
```

Upgrade to the latest versions of `prefect` and `prefect-aws`:

```bash theme={null}
pip install -U "prefect[aws]"
```

## Blocks setup

### Credentials

Most AWS services require an authenticated session. Prefect makes it simple to provide credentials via AWS Credentials blocks. Steps:

1. Refer to the [AWS Configuration documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-creds) to retrieve your access key ID and secret access key.
2. Copy the access key ID and secret access key.
3. Create an `AwsCredentials` block in the Prefect UI or use a Python script like the one below.

```python theme={null}
from prefect_aws import AwsCredentials

AwsCredentials(
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
    aws_session_token=None,  # replace this with token if necessary
    region_name="us-east-2"
).save("BLOCK-NAME-PLACEHOLDER")
```

Prefect uses the Boto3 library under the hood.
To find credentials for authentication, any data not provided to the block is sourced at runtime in the order shown in the [Boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials). Prefect creates the session object using the values in the block; any missing values follow the sequence in the Boto3 docs.

#### IAM Role Assumption

`AwsCredentials` supports assuming IAM roles for cross-account access or enhanced security. When `assume_role_arn` is provided, `get_boto3_session()` automatically assumes the role and returns a session with temporary credentials.

```python theme={null}
from prefect_aws import AwsCredentials

AwsCredentials(
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
    region_name="us-east-2",
    assume_role_arn="arn:aws:iam::123456789012:role/MyRole",
    assume_role_kwargs={
        "RoleSessionName": "my-session",
        "DurationSeconds": 3600,
        "ExternalId": "unique-external-id"
    }
).save("BLOCK-NAME-PLACEHOLDER")
```

Available `assume_role_kwargs` parameters include:

* `RoleSessionName`: Session name for the assumed role
* `DurationSeconds`: Session duration (900-43200 seconds)
* `ExternalId`: Unique identifier for third-party access
* `Policy`: Inline session policy (JSON string)
* `PolicyArns`: List of managed policy ARNs
* `Tags`: Session tags for attribution
* `SerialNumber` and `TokenCode`: For MFA authentication

### S3

Create a block for reading and writing files to S3.

```python theme={null}
from prefect_aws import AwsCredentials
from prefect_aws.s3 import S3Bucket

aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

S3Bucket(
    bucket_name="BUCKET-NAME-PLACEHOLDER",
    credentials=aws_credentials
).save("S3-BLOCK-NAME-PLACEHOLDER")
```

### Lambda

Invoke AWS Lambdas, synchronously or asynchronously.
```python theme={null}
from prefect_aws.lambda_function import LambdaFunction
from prefect_aws.credentials import AwsCredentials

credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

LambdaFunction(
    function_name="test_lambda_function",
    aws_credentials=credentials,
).save("LAMBDA-BLOCK-NAME-PLACEHOLDER")
```

### Secrets Manager

Create a block to read, write, and delete AWS Secrets Manager secrets.

```python theme={null}
from prefect_aws import AwsCredentials
from prefect_aws.secrets_manager import AwsSecret

credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER")

AwsSecret(
    secret_name="test_secret_name",
    aws_credentials=credentials,
).save("AWS-SECRET-BLOCK-NAME-PLACEHOLDER")
```

## RDS IAM Authentication (Experimental)

`prefect-aws` includes a plugin to automatically handle authentication for AWS RDS PostgreSQL databases using IAM tokens.

### Prerequisites

Before using RDS IAM authentication, you need to configure AWS:

1. **Enable IAM authentication on your RDS instance**: In the AWS Console, modify your RDS instance and enable "IAM database authentication".

2. **Create an IAM policy** with the `rds-db:connect` permission:

   ```json theme={null}
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "rds-db:connect",
         "Resource": "arn:aws:rds-db:<REGION>:<ACCOUNT-ID>:dbuser:<DB-RESOURCE-ID>/<DB-USERNAME>"
       }
     ]
   }
   ```

3. **Create a database user** that uses IAM authentication:

   ```sql theme={null}
   CREATE USER iam_user WITH LOGIN;
   GRANT rds_iam TO iam_user;
   ```

### Enable the Plugin

1. Enable the experimental plugin system:

   ```bash theme={null}
   export PREFECT_EXPERIMENTS_PLUGINS_ENABLED=true
   ```

2. Enable RDS IAM authentication:

   ```bash theme={null}
   export PREFECT_INTEGRATIONS_AWS_RDS_IAM_ENABLED=true
   ```

3. (Optional) Set the AWS region:

   ```bash theme={null}
   export PREFECT_INTEGRATIONS_AWS_RDS_IAM_REGION_NAME=us-west-2
   ```

4. Configure your database connection URL:

   ```bash theme={null}
   export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://iam_user@your-rds-host:5432/prefect"
   ```

The plugin automatically generates and injects an IAM authentication token as the password when connecting to the database.

## Supported AWS services

`prefect-aws` provides comprehensive integrations for key AWS services:

| Service             | Integration Type           | Use Cases                                              |
| ------------------- | -------------------------- | ------------------------------------------------------ |
| **S3**              | `S3Bucket` block           | File storage, data lake operations, deployment storage |
| **Secrets Manager** | `AwsSecret` block          | Secure credential storage, API key management          |
| **Lambda**          | `LambdaFunction` block     | Serverless function execution, event-driven processing |
| **Glue**            | `GlueJobBlock` block       | ETL operations, data transformation pipelines          |
| **ECS**             | `ECSWorker` infrastructure | Container orchestration, scalable compute workloads    |
| **Batch**           | `batch_submit` task        | High-throughput computing, batch job processing        |

**Integration types:**

* **Blocks**: Reusable configuration objects that can be saved and shared across flows
* **Tasks**: Functions decorated with `@task` for direct use in flows
* **Workers**: Infrastructure components for running flows on AWS compute services

## Scale workflows with AWS infrastructure

### ECS (Elastic Container Service)

Deploy and scale your Prefect workflows on [AWS ECS](https://aws.amazon.com/ecs/) for production workloads. `prefect-aws` provides:

* **ECS worker**: Long-running worker for hybrid deployments with full control over execution environment
* **Auto-scaling**: Dynamic resource allocation based on workflow demands
* **Cost optimization**: Pay only for compute resources when workflows are running

See the [ECS worker deployment guide](/integrations/prefect-aws/ecs-worker) for a step-by-step walkthrough of deploying production-ready workers to your ECS cluster.
### Docker Images Pre-built Docker images with `prefect-aws` are available for simplified deployment: ```bash theme={null} docker pull prefecthq/prefect-aws:latest ``` #### Available Tags Image tags have the following format: * `prefecthq/prefect-aws:latest` - Latest stable release with Python 3.12 * `prefecthq/prefect-aws:latest-python3.11` - Latest stable with Python 3.11 * `prefecthq/prefect-aws:0.5.9-python3.12` - Specific prefect-aws version with Python 3.12 * `prefecthq/prefect-aws:0.5.9-python3.12-prefect3.4.9` - Full version specification #### Usage Examples **Running an ECS worker:** ```bash theme={null} docker run -d \ --name prefect-ecs-worker \ -e PREFECT_API_URL=https://api.prefect.cloud/api/accounts/your-account/workspaces/your-workspace \ -e PREFECT_API_KEY=your-api-key \ prefecthq/prefect-aws:latest \ prefect worker start --pool ecs-pool ``` **Local development:** ```bash theme={null} docker run -it --rm \ -v $(pwd):/opt/prefect \ prefecthq/prefect-aws:latest \ python your_flow.py ``` ## Examples ### Read and write files to AWS S3 Upload a file to an AWS S3 bucket and download the same file under a different filename. 
The following code assumes that the bucket already exists: ```python theme={null} from pathlib import Path from prefect import flow from prefect_aws import AwsCredentials, S3Bucket @flow def s3_flow(): # create a dummy file to upload file_path = Path("test-example.txt") file_path.write_text("Hello, Prefect!") aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER") s3_bucket = S3Bucket( bucket_name="BUCKET-NAME-PLACEHOLDER", credentials=aws_credentials ) s3_bucket_path = s3_bucket.upload_from_path(file_path) downloaded_file_path = s3_bucket.download_object_to_path( s3_bucket_path, "downloaded-test-example.txt" ) return downloaded_file_path.read_text() if __name__ == "__main__": s3_flow() ``` ### Access secrets with AWS Secrets Manager Write a secret to AWS Secrets Manager, read the secret data, delete the secret, and return the secret data. ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials, AwsSecret @flow def secrets_manager_flow(): aws_credentials = AwsCredentials.load("BLOCK-NAME-PLACEHOLDER") aws_secret = AwsSecret(secret_name="test-example", aws_credentials=aws_credentials) aws_secret.write_secret(secret_data=b"Hello, Prefect!") secret_data = aws_secret.read_secret() aws_secret.delete_secret() return secret_data if __name__ == "__main__": secrets_manager_flow() ``` ### Invoke lambdas ```python theme={null} from prefect_aws.lambda_function import LambdaFunction from prefect_aws.credentials import AwsCredentials credentials = AwsCredentials() lambda_function = LambdaFunction( function_name="test_lambda_function", aws_credentials=credentials, ) response = lambda_function.invoke( payload={"foo": "bar"}, invocation_type="RequestResponse", ) response["Payload"].read() ``` ### Submit AWS Glue jobs ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.glue_job import GlueJobBlock @flow def example_run_glue_job(): aws_credentials = AwsCredentials( 
aws_access_key_id="your_access_key_id", aws_secret_access_key="your_secret_access_key" ) glue_job_run = GlueJobBlock( job_name="your_glue_job_name", arguments={"--YOUR_EXTRA_ARGUMENT": "YOUR_EXTRA_ARGUMENT_VALUE"}, ).trigger() return glue_job_run.wait_for_completion() if __name__ == "__main__": example_run_glue_job() ``` ### Submit AWS Batch jobs ```python theme={null} from prefect import flow from prefect_aws import AwsCredentials from prefect_aws.batch import batch_submit @flow def example_batch_submit_flow(): aws_credentials = AwsCredentials( aws_access_key_id="access_key_id", aws_secret_access_key="secret_access_key" ) job_id = batch_submit( "job_name", "job_queue", "job_definition", aws_credentials ) return job_id if __name__ == "__main__": example_batch_submit_flow() ``` ## Resources ### Documentation * **[prefect-aws SDK Reference](/integrations/prefect-aws/api-ref/prefect_aws-credentials)** - Complete API documentation for all blocks and tasks * **[ECS Deployment Guide](/integrations/prefect-aws/ecs-worker)** - Step-by-step guide for deploying workflows on ECS * **[Prefect Secrets Management](/v3/develop/secrets)** - Using AWS credentials with third-party services ### AWS Resources * **[AWS Documentation](https://docs.aws.amazon.com/)** - Official AWS service documentation * **[Boto3 Documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)** - Python SDK reference for AWS services * **[AWS IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)** - Security recommendations for AWS access # Azure Container Instances Worker Guide Source: https://docs.prefect.io/integrations/prefect-azure/aci_worker ## Why use ACI for flow run execution? ACI (Azure Container Instances) is a fully managed compute platform that streamlines running your Prefect flows on scalable, on-demand infrastructure on Azure. 
## Prerequisites

Before starting this guide, make sure you have:

* An Azure account and user permissions for provisioning resource groups and container instances.
* The `azure` CLI installed on your local machine. You can follow Microsoft's [installation guide](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli).
* Docker installed on your local machine.

## Step 1. Create a resource group

Azure resource groups serve as containers for managing groupings of Azure resources. Replace `<resource-group-name>` with the name of your choosing, and `<location>` with a valid Azure location name, such as `eastus`.

```bash theme={null}
export RG_NAME=<resource-group-name> && \
az group create --name $RG_NAME --location <location>
```

Throughout the rest of the guide, we'll need to refer to the **scope** of the created resource group, which is a string describing where the resource group lives in the hierarchy of your Azure account. To save the scope of your resource group as an environment variable, run the following command:

```bash theme={null}
RG_SCOPE=$(az group show --name $RG_NAME --query id --output tsv)
```

You can check that the scope is correct before moving on by running `echo $RG_SCOPE` in your terminal. It should be formatted as follows:

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```

## Step 2. Prepare ACI permissions

In order for the worker to create, monitor, and delete the other container instances in which flows will run, we'll need to create a **custom role** and an **identity**, and then affiliate that role with the identity through a **role assignment**. When we start our worker, we'll assign that identity to the container instance it's running in.

### 1. Create a role

The custom `Container Instances Contributor` role has all the permissions your worker will need to run flows in other container instances.
Create it by running the following command:

```bash theme={null}
az role definition create --role-definition '{
  "Name": "Container Instances Contributor",
  "IsCustom": true,
  "Description": "Can create, delete, and monitor container instances.",
  "Actions": [
    "Microsoft.ManagedIdentity/userAssignedIdentities/assign/action",
    "Microsoft.Resources/deployments/*",
    "Microsoft.ContainerInstance/containerGroups/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    '"\"$RG_SCOPE\""'
  ]
}'
```

### 2. Create an identity

Create a user-managed identity with the following command, replacing `<identity-name>` with the name you'd like to use for the identity:

```bash theme={null}
export IDENTITY_NAME=<identity-name> && \
az identity create -g $RG_NAME -n $IDENTITY_NAME
```

We'll also need to save the principal ID and full object ID of the identity for the role assignment and container creation steps, respectively:

```bash theme={null}
IDENTITY_PRINCIPAL_ID=$(az identity list --query "[?name=='$IDENTITY_NAME'].principalId" --output tsv) && \
IDENTITY_ID=$(az identity list --query "[?name=='$IDENTITY_NAME'].id" --output tsv)
```

### 3. Assign roles to the identity

Now let's assign the `Container Instances Contributor` role we created earlier to the new identity:

```bash theme={null}
az role assignment create \
--assignee $IDENTITY_PRINCIPAL_ID \
--role "Container Instances Contributor" \
--scope $RG_SCOPE
```

Since we'll be using ACR to host a custom Docker image containing a Prefect flow later in the guide, let's also assign the built-in `AcrPull` role to the identity:

```bash theme={null}
az role assignment create \
--assignee $IDENTITY_PRINCIPAL_ID \
--role "AcrPull" \
--scope $RG_SCOPE
```

## Step 3.
Create the worker container instance

Before running this command, set your `PREFECT_API_URL` and `PREFECT_API_KEY` as environment variables:

```bash theme={null}
export PREFECT_API_URL=<prefect-api-url> PREFECT_API_KEY=<prefect-api-key>
```

Running the following command will create a container instance in your Azure resource group that will start a Prefect ACI worker. If there is not already a work pool in Prefect with the name you chose, a work pool will also be created.

Replace `<work-pool-name>` with the name of the ACI work pool you want to create in Prefect. Here we're using the work pool name as the name of the container instance in Azure as well, but you may name it something else if you prefer.

```bash theme={null}
az container create \
--name <work-pool-name> \
--resource-group $RG_NAME \
--assign-identity $IDENTITY_ID \
--image "prefecthq/prefect-azure:latest" \
--secure-environment-variables PREFECT_API_URL=$PREFECT_API_URL PREFECT_API_KEY=$PREFECT_API_KEY \
--command-line "/bin/bash -c 'prefect worker start --pool <work-pool-name> --type azure-container-instance'"
```

This example uses `prefecthq/prefect-azure:latest`, which includes both `prefect` and `prefect-azure` pre-installed. For production deployments, consider pinning to a specific version tag (e.g., `prefecthq/prefect-azure:0.4.9-python3.12-prefect3.6.19`).

This container instance uses default networking and security settings. For advanced configuration, refer to the `az container create` [CLI reference](https://learn.microsoft.com/en-us/cli/azure/container?view=azure-cli-latest#az-container-create).

## Step 4. Create an ACR registry

In order to build and push images containing flow code to Azure, we'll need a container registry. Create one with the following command, replacing `<registry-name>` with the registry name of your choosing:

```bash theme={null}
export REGISTRY_NAME=<registry-name> && \
az acr create --resource-group $RG_NAME \
--name $REGISTRY_NAME --sku Basic
```

## Step 5. Update your ACI work pool configuration

Once your work pool is created, navigate to the Edit page of your ACI work pool.
You will need to update the following fields:

### Identities

This will be your `IDENTITY_ID`. You can get it from your terminal by running `echo $IDENTITY_ID`. When adding it to your work pool, it should be formatted as a JSON array:

```
["/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"]
```

### ACRManagedIdentity

ACRManagedIdentity is required for your flow code containers to be pulled from ACR. It consists of the following:

* Identity: the same `IDENTITY_ID` as above, as a string

  ```
  /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
  ```

* Registry URL: your `<registry-name>`, followed by `.azurecr.io`

  ```
  <registry-name>.azurecr.io
  ```

### Subscription ID and resource group name

Both the subscription ID and resource group name can be found in the `RG_SCOPE` environment variable created earlier in the guide. View their values by running `echo $RG_SCOPE`:

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```

Then click Save.

## Step 6. Pick up a flow run with your new worker

This guide uses ACR to store a Docker image containing your flow code. Write a flow, then deploy it using `flow.deploy()`, which will copy flow code into a Docker image and push that image to an ACR registry.

### 1. Log in to ACR

Use the following commands to log in to ACR:

```bash theme={null}
TOKEN=$(az acr login --name $REGISTRY_NAME --expose-token --output tsv --query accessToken)
```

```bash theme={null}
docker login $REGISTRY_NAME.azurecr.io --username 00000000-0000-0000-0000-000000000000 --password-stdin <<< $TOKEN
```

### 2. Write and deploy a simple test flow

Create and run the following script to deploy your flow. Be sure to replace `<registry-name>` and `<work-pool-name>` with the appropriate values.
`my_flow.py`

```python theme={null}
from prefect import flow
from prefect.logging import get_run_logger
from prefect.docker import DockerImage


@flow
def my_flow():
    logger = get_run_logger()
    logger.info("Hello from ACI!")


if __name__ == "__main__":
    my_flow.deploy(
        name="aci-deployment",
        image=DockerImage(
            name="<registry-name>.azurecr.io/example:latest",
            platform="linux/amd64",
        ),
        work_pool_name="<work-pool-name>",
    )
```

### 3. Find the deployment in the UI and click the **Quick Run** button!

# blob_storage

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-blob_storage

# `prefect_azure.blob_storage`

Integrations for interacting with Azure Blob Storage

## Functions

### `blob_storage_download`

```python theme={null}
blob_storage_download(container: str, blob: str, blob_storage_credentials: 'AzureBlobStorageCredentials') -> bytes
```

Downloads a blob with a given key from a given Blob Storage container.

**Args:**

* `blob`: Name of the blob within this container to retrieve.
* `container`: Name of the Blob Storage container to retrieve from.
* `blob_storage_credentials`: Credentials to use for authentication with Azure.

**Returns:**

* A `bytes` representation of the downloaded blob.
Example: Download a file from a Blob Storage container

```python theme={null}
from prefect import flow

from prefect_azure import AzureBlobStorageCredentials
from prefect_azure.blob_storage import blob_storage_download


@flow
def example_blob_storage_download_flow():
    connection_string = "connection_string"
    blob_storage_credentials = AzureBlobStorageCredentials(
        connection_string=connection_string,
    )
    data = blob_storage_download(
        container="prefect",
        blob="prefect.txt",
        blob_storage_credentials=blob_storage_credentials,
    )
    return data


example_blob_storage_download_flow()
```

### `blob_storage_upload`

```python theme={null}
blob_storage_upload(data: bytes, container: str, blob_storage_credentials: 'AzureBlobStorageCredentials', blob: Optional[str] = None, overwrite: bool = False) -> str
```

Uploads data to a Blob Storage container.

**Args:**

* `data`: Bytes representation of data to upload to Blob Storage.
* `container`: Name of the Blob Storage container to upload to.
* `blob_storage_credentials`: Credentials to use for authentication with Azure.
* `blob`: Name of the blob within this container to upload to.
* `overwrite`: If `True`, an existing blob with the same name will be overwritten. Defaults to `False`, and an error will be thrown if the blob already exists.
**Returns:**

* The blob name of the uploaded object.

Example: Read and upload a file to a Blob Storage container

```python theme={null}
from prefect import flow

from prefect_azure import AzureBlobStorageCredentials
from prefect_azure.blob_storage import blob_storage_upload


@flow
def example_blob_storage_upload_flow():
    connection_string = "connection_string"
    blob_storage_credentials = AzureBlobStorageCredentials(
        connection_string=connection_string,
    )
    with open("data.csv", "rb") as f:
        blob = blob_storage_upload(
            data=f.read(),
            container="container",
            blob="data.csv",
            blob_storage_credentials=blob_storage_credentials,
            overwrite=False,
        )
    return blob


example_blob_storage_upload_flow()
```

### `blob_storage_list`

```python theme={null}
blob_storage_list(container: str, blob_storage_credentials: 'AzureBlobStorageCredentials', name_starts_with: Optional[str] = None, include: Union[str, List[str], None] = None, **kwargs) -> List['BlobProperties']
```

List objects from a given Blob Storage container.

**Args:**

* `container`: Name of the Blob Storage container to retrieve from.
* `blob_storage_credentials`: Credentials to use for authentication with Azure.
* `name_starts_with`: Filters the results to return only blobs whose names begin with the specified prefix.
* `include`: Specifies one or more additional datasets to include in the response. Options include: 'snapshots', 'metadata', 'uncommittedblobs', 'copy', 'deleted', 'deletedwithversions', 'tags', 'versions', 'immutabilitypolicy', 'legalhold'.
* `**kwargs`: Additional kwargs passed to `ContainerClient.list_blobs()`

**Returns:**

* A `list` of `dict`s containing metadata about the blobs.
Example:

```python theme={null}
from prefect import flow

from prefect_azure import AzureBlobStorageCredentials
from prefect_azure.blob_storage import blob_storage_list


@flow
def example_blob_storage_list_flow():
    connection_string = "connection_string"
    blob_storage_credentials = AzureBlobStorageCredentials(
        connection_string=connection_string,
    )
    data = blob_storage_list(
        container="container",
        blob_storage_credentials=blob_storage_credentials,
    )
    return data


example_blob_storage_list_flow()
```

## Classes

### `AzureBlobStorageContainer`

Represents a container in Azure Blob Storage. This class provides methods for downloading and uploading files and folders to and from the Azure Blob Storage container.

**Attributes:**

* `container_name`: The name of the Azure Blob Storage container.
* `credentials`: The credentials to use for authentication with Azure.
* `base_folder`: A base path to a folder within the container to use for reading and writing objects.

**Methods:**

#### `download_folder_to_path`

```python theme={null}
download_folder_to_path(self, from_folder: str, to_folder: Union[str, Path], **download_kwargs: Dict[str, Any]) -> Coroutine[Any, Any, Path]
```

Download a folder from the container to a local path.

**Args:**

* `from_folder`: The folder path in the container to download.
* `to_folder`: The local path to download the folder to.
* `**download_kwargs`: Additional keyword arguments passed into `BlobClient.download_blob`.

**Returns:**

* The local path where the folder was downloaded.

#### `download_object_to_file_object`

```python theme={null}
download_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Dict[str, Any]) -> Coroutine[Any, Any, BinaryIO]
```

Downloads an object from the container to a file object.

**Args:**

* `from_path`: The path of the object to download within the container.
* `to_file_object`: The file object to download the object to.
* `**download_kwargs`: Additional keyword arguments for the download operation.

**Returns:**

* The file object that the object was downloaded to.

#### `download_object_to_path`

```python theme={null}
download_object_to_path(self, from_path: str, to_path: Union[str, Path], **download_kwargs: Dict[str, Any]) -> Coroutine[Any, Any, Path]
```

Downloads an object from a container to a specified path.

**Args:**

* `from_path`: The path of the object in the container.
* `to_path`: The path where the object will be downloaded to.
* `**download_kwargs`: Additional keyword arguments for the download operation.

**Returns:**

* The path where the object was downloaded to.

#### `get_directory`

```python theme={null}
get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None
```

Downloads the contents of a directory from the blob storage to a local path. Used to enable flow code storage for deployments.

**Args:**

* `from_path`: The path of the directory in the blob storage.
* `local_path`: The local path where the directory will be downloaded.

#### `list_blobs`

```python theme={null}
list_blobs(self) -> List[str]
```

Lists blobs available within the specified Azure container. Used to introspect your containers.

**Returns:**

* A list of the blobs within your container.

#### `put_directory`

```python theme={null}
put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None
```

Uploads a directory to the blob storage. Used to enable flow code storage for deployments.

**Args:**

* `local_path`: The local path of the directory to upload. Defaults to current directory.
* `to_path`: The destination path in the blob storage. Defaults to root directory.
* `ignore_file`: The path to a file containing patterns to ignore during upload.

#### `read_path`

```python theme={null}
read_path(self, path: str) -> bytes
```

Reads the contents of a file at the specified path and returns it as bytes.
Used to enable results storage.

**Args:**

* `path`: The path of the file to read.

**Returns:**

* The contents of the file as bytes.

#### `upload_from_file_object`

```python theme={null}
upload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Dict[str, Any]) -> Coroutine[Any, Any, str]
```

Uploads an object from a file object to the specified path in the blob storage container.

**Args:**

* `from_file_object`: The file object to upload.
* `to_path`: The path in the blob storage container to upload the object to.
* `**upload_kwargs`: Additional keyword arguments to pass to the `upload_blob` method.

**Returns:**

* The path where the object was uploaded to.

#### `upload_from_folder`

```python theme={null}
upload_from_folder(self, from_folder: Union[str, Path], to_folder: str, **upload_kwargs: Dict[str, Any]) -> Coroutine[Any, Any, str]
```

Uploads files from a local folder to a specified folder in the Azure Blob Storage container.

**Args:**

* `from_folder`: The path to the local folder containing the files to upload.
* `to_folder`: The destination folder in the Azure Blob Storage container.
* `**upload_kwargs`: Additional keyword arguments to pass to the `upload_blob` method.

**Returns:**

* The full path of the destination folder in the container.

#### `upload_from_path`

```python theme={null}
upload_from_path(self, from_path: Union[str, Path], to_path: str, **upload_kwargs: Dict[str, Any]) -> Coroutine[Any, Any, str]
```

Uploads an object from a local path to the specified destination path in the blob storage container.

**Args:**

* `from_path`: The local path of the object to upload.
* `to_path`: The destination path in the blob storage container.
* `**upload_kwargs`: Additional keyword arguments to pass to the `upload_blob` method.

**Returns:**

* The destination path in the blob storage container.
#### `write_path`

```python theme={null}
write_path(self, path: str, content: bytes) -> None
```

Writes the content to the specified path in the blob storage. Used to enable results storage.

**Args:**

* `path`: The path where the content will be written.
* `content`: The content to be written.

# container_instance

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-container_instance

# `prefect_azure.container_instance`

Integrations with the Azure Container Instances service. Note this module is experimental. The interfaces within may change without notice.

The `AzureContainerInstanceJob` infrastructure block in this module is ideally configured via the Prefect UI and run via a Prefect agent, but it can be called directly as demonstrated in the following examples.

Examples:

Run a command using an Azure Container Instances container.

```python theme={null}
AzureContainerInstanceJob(command=["echo", "hello world"]).run()
```

Run a command and stream the container's output to the local terminal.
```python theme={null}
AzureContainerInstanceJob(
    command=["echo", "hello world"],
    stream_output=True,
).run()
```

Run a command with a specific image

```python theme={null}
AzureContainerInstanceJob(command=["echo", "hello world"], image="alpine:latest")
```

Run a task with custom memory and CPU requirements

```python theme={null}
AzureContainerInstanceJob(command=["echo", "hello world"], memory=1.0, cpu=1.0)
```

Run a task with custom memory, CPU, and GPU requirements

```python theme={null}
AzureContainerInstanceJob(command=["echo", "hello world"], memory=1.0, cpu=1.0, gpu_count=1, gpu_sku="V100")
```

Run a task with custom environment variables

```python theme={null}
AzureContainerInstanceJob(
    command=["echo", "hello $PLANET"],
    env={"PLANET": "earth"}
)
```

Run a task that uses a private ACR registry with a managed identity

```python theme={null}
AzureContainerInstanceJob(
    command=["echo", "hello $PLANET"],
    image="my-registry.azurecr.io/my-image",
    image_registry=ACRManagedIdentity(
        registry_url="my-registry.azurecr.io",
        identity="/my/managed/identity/123abc"
    )
)
```

## Classes

### `ContainerGroupProvisioningState`

Terminal provisioning states for ACI container groups. Per the Azure docs, the states in this Enum are the only ones that can be relied on as dependencies.

### `ContainerRunState`

Terminal run states for ACI containers.

### `ACRManagedIdentity`

Use a Managed Identity to access Azure Container Registry. Requires the user-assigned managed identity be available to the ACI container group.

### `AzureContainerInstanceJobResult`

The result of an `AzureContainerInstanceJob` run.
# cosmos_db

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-cosmos_db

# `prefect_azure.cosmos_db`

Tasks for interacting with Azure Cosmos DB

## Functions

### `cosmos_db_query_items`

```python theme={null}
cosmos_db_query_items(query: str, container: Union[str, 'ContainerProxy', Dict[str, Any]], database: Union[str, 'DatabaseProxy', Dict[str, Any]], cosmos_db_credentials: AzureCosmosDbCredentials, parameters: Optional[List[Dict[str, object]]] = None, partition_key: Optional[Any] = None, **kwargs: Any) -> List[Union[str, dict]]
```

Return all results matching the given query. You can use any value for the container alias in the FROM clause, but often the container name is used. In the examples below, the container name is "products", and is aliased as "p" for easier referencing in the WHERE clause.

**Args:**

* `query`: The Azure Cosmos DB SQL query to execute.
* `container`: The ID (name) of the container, a `ContainerProxy` instance, or a dict representing the properties of the container to be retrieved.
* `database`: The ID (name), dict representing the properties, or `DatabaseProxy` instance of the database to read.
* `cosmos_db_credentials`: Credentials to use for authentication with Azure.
* `parameters`: Optional array of parameters to the query. Each parameter is a dict with 'name' and 'value' keys.
* `partition_key`: Partition key for the item to retrieve.
* `**kwargs`: Additional keyword arguments to pass.

**Returns:**

* A `list` of results.

### `cosmos_db_read_item`

```python theme={null}
cosmos_db_read_item(item: Union[str, Dict[str, Any]], partition_key: Any, container: Union[str, 'ContainerProxy', Dict[str, Any]], database: Union[str, 'DatabaseProxy', Dict[str, Any]], cosmos_db_credentials: AzureCosmosDbCredentials, **kwargs: Any) -> List[Union[str, dict]]
```

Get the item identified by `item`.

**Args:**

* `item`: The ID (name) or dict representing the item to retrieve.
* `partition_key`: Partition key for the item to retrieve.
* `container`: The ID (name) of the container, a `ContainerProxy` instance, or a dict representing the properties of the container to be retrieved.
* `database`: The ID (name), dict representing the properties, or `DatabaseProxy` instance of the database to read.
* `cosmos_db_credentials`: Credentials to use for authentication with Azure.
* `**kwargs`: Additional keyword arguments to pass.

**Returns:**

* A dict representing the item to be retrieved.

### `cosmos_db_create_item`

```python theme={null}
cosmos_db_create_item(body: Dict[str, Any], container: Union[str, 'ContainerProxy', Dict[str, Any]], database: Union[str, 'DatabaseProxy', Dict[str, Any]], cosmos_db_credentials: AzureCosmosDbCredentials, **kwargs: Any) -> Dict[str, Any]
```

Create an item in the container. To update or replace an existing item, use the `upsert_item` method.

**Args:**

* `body`: A dict-like object representing the item to create.
* `container`: The ID (name) of the container, a `ContainerProxy` instance, or a dict representing the properties of the container to be retrieved.
* `database`: The ID (name), dict representing the properties, or `DatabaseProxy` instance of the database to read.
* `cosmos_db_credentials`: Credentials to use for authentication with Azure.
* `**kwargs`: Additional keyword arguments to pass.

**Returns:**

* A dict representing the new item.

# credentials

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-credentials

# `prefect_azure.credentials`

Credential classes used to perform authenticated interactions with Azure

## Classes

### `AzureBlobStorageCredentials`

Stores credentials for authenticating with Azure Blob Storage. Authentication can be done using one of the following methods:

1. Connection string: Provide a connection string for your Azure storage account.
2. Account URL with DefaultAzureCredential: Provide an account URL and credentials will be discovered automatically using DefaultAzureCredential.
3.
Account URL with Service Principal: Provide an account URL along with `client_id`, `tenant_id`, and `client_secret` for service principal authentication.

**Args:**

* `account_url`: The URL for your Azure storage account. Required for DefaultAzureCredential or service principal authentication.
* `connection_string`: The connection string to your Azure storage account. If provided, the connection string will take precedence over the account URL.
* `client_id`: The service principal client ID. If provided, `tenant_id` and `client_secret` must also be provided.
* `tenant_id`: The service principal tenant ID. If provided, `client_id` and `client_secret` must also be provided.
* `client_secret`: The service principal client secret. If provided, `client_id` and `tenant_id` must also be provided.

**Methods:**

#### `aclose`

```python theme={null}
aclose(self)
```

Cleanup resources.

#### `check_connection_string_or_account_url`

```python theme={null}
check_connection_string_or_account_url(cls, values: Dict[str, Any]) -> Dict[str, Any]
```

Validates authentication configuration. Valid configurations:

1. `connection_string` only (no `account_url`, no SPN fields)
2. `account_url` only (uses DefaultAzureCredential)
3. `account_url` + `client_id` + `tenant_id` + `client_secret` (SPN auth)

#### `get_blob_client`

```python theme={null}
get_blob_client(self, container: str, blob: str) -> 'BlobClient'
```

Returns an authenticated Blob client that can be used to download and upload blobs.

**Args:**

* `container`: Name of the Blob Storage container to retrieve from.
* `blob`: Name of the blob within this container to retrieve.

#### `get_client`

```python theme={null}
get_client(self) -> 'BlobServiceClient'
```

Returns an authenticated base Blob Service client that can be used to create other clients for Azure services.
#### `get_container_client`

```python theme={null}
get_container_client(self, container: str) -> 'ContainerClient'
```

Returns an authenticated Container client that can be used to create clients for Azure services.

**Args:**

* `container`: Name of the Blob Storage container to retrieve from.

### `AzureCosmosDbCredentials`

Block used to manage Cosmos DB authentication with Azure. Azure authentication is handled via the `azure` module through a connection string.

**Args:**

* `connection_string`: Includes the authorization information required.

**Methods:**

#### `get_client`

```python theme={null}
get_client(self) -> 'CosmosClient'
```

Returns an authenticated Cosmos client that can be used to create other clients for Azure services.

#### `get_container_client`

```python theme={null}
get_container_client(self, container: str, database: str) -> 'ContainerProxy'
```

Returns an authenticated Container client used for querying.

**Args:**

* `container`: Name of the Cosmos DB container to retrieve from.
* `database`: Name of the Cosmos DB database.

#### `get_database_client`

```python theme={null}
get_database_client(self, database: str) -> 'DatabaseProxy'
```

Returns an authenticated Database client.

**Args:**

* `database`: Name of the database.

### `AzureMlCredentials`

Block used to manage authentication with AzureML. Azure authentication is handled via the `azure` module.

**Args:**

* `tenant_id`: The active directory tenant that the service identity belongs to.
* `service_principal_id`: The service principal ID.
* `service_principal_password`: The service principal password/key.
* `subscription_id`: The Azure subscription ID containing the workspace.
* `resource_group`: The resource group containing the workspace.
* `workspace_name`: The existing workspace name.

**Methods:**

#### `get_workspace`

```python theme={null}
get_workspace(self) -> 'Workspace'
```

Returns an authenticated base Workspace that can be used in Azure's Datasets and Datastores.
### `AzureContainerInstanceCredentials`

Block used to manage Azure Container Instances authentication. Stores Azure Service Principal authentication data.

**Methods:**

#### `get_container_client`

```python theme={null}
get_container_client(self, subscription_id: str)
```

Creates an Azure Container Instances client initialized with data from this block's fields and a provided Azure subscription ID.

**Args:**

* `subscription_id`: A valid Azure subscription ID.

**Returns:**

* An initialized `ContainerInstanceManagementClient`

#### `get_resource_client`

```python theme={null}
get_resource_client(self, subscription_id: str)
```

Creates an Azure resource management client initialized with data from this block's fields and a provided Azure subscription ID.

**Args:**

* `subscription_id`: A valid Azure subscription ID.

**Returns:**

* An initialized `ResourceManagementClient`

#### `validate_credential_kwargs`

```python theme={null}
validate_credential_kwargs(cls, values)
```

Validates that if any of `client_id`, `tenant_id`, or `client_secret` are provided, all must be provided.

### `AzureDevopsCredentials`

Block used to authenticate with Azure DevOps using a Personal Access Token.

**Attributes:**

* `token`: A Personal Access Token generated from Azure DevOps.

**Methods:**

#### `get_auth_header`

```python theme={null}
get_auth_header(self) -> Dict[str, str]
```

Returns an HTTP Authorization header using the stored PAT. This can be used for Azure DevOps REST API calls.

# __init__

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-deployments-__init__

# `prefect_azure.deployments`

*This module is empty or contains only private/internal implementations.*

# steps

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-deployments-steps

# `prefect_azure.deployments.steps`

Prefect deployment steps for code storage and retrieval in Azure Blob Storage.
These steps can be used in a `prefect.yaml` file to define the default push and pull steps for a group of deployments, or they can be used to define the push and pull steps for a specific deployment.

!!! example
    Sample `prefect.yaml` file that is configured to push and pull to and from an Azure Blob Storage container:

    ```yaml theme={null}
    prefect_version: ...
    name: ...

    push:
      - prefect_azure.deployments.steps.push_to_azure_blob_storage:
          requires: prefect-azure[blob_storage]
          container: my-container
          folder: my-folder
          credentials: "{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}"

    pull:
      - prefect_azure.deployments.steps.pull_from_azure_blob_storage:
          requires: prefect-azure[blob_storage]
          container: "{{ container }}"
          folder: "{{ folder }}"
          credentials: "{{ prefect.blocks.azure-blob-storage-credentials.dev-credentials }}"
    ```

For more information about using deployment steps, check out the Prefect [docs](https://docs.prefect.io/latest/deploy/).

## Functions

### `push_to_azure_blob_storage`

```python theme={null}
push_to_azure_blob_storage(container: str, folder: str, credentials: Dict[str, str], ignore_file: Optional[str] = '.prefectignore')
```

Pushes to an Azure Blob Storage container.

**Args:**

* `container`: The name of the container to push files to
* `folder`: The folder within the container to push to
* `credentials`: A dictionary of credentials with keys `connection_string` or `account_url` and values of the corresponding connection string or account URL. If both are provided, `connection_string` will be used.
* `ignore_file`: The path to a file containing patterns of files to ignore when pushing to Azure Blob Storage. If not provided, the default `.prefectignore` file will be used.

### `pull_from_azure_blob_storage`

```python theme={null}
pull_from_azure_blob_storage(container: str, folder: str, credentials: Dict[str, str])
```

Pulls from an Azure Blob Storage container.
**Args:**

* `container`: The name of the container to pull files from
* `folder`: The folder within the container to pull from
* `credentials`: A dictionary of credentials with keys `connection_string` or `account_url` and values of the corresponding connection string or account URL. If both are provided, `connection_string` will be used.

# __init__

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-experimental-__init__

# `prefect_azure.experimental`

*This module is empty or contains only private/internal implementations.*

# __init__

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-experimental-bundles-__init__

# `prefect_azure.experimental.bundles`

*This module is empty or contains only private/internal implementations.*

# execute

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-experimental-bundles-execute

# `prefect_azure.experimental.bundles.execute`

## Functions

### `execute_bundle_from_azure_blob_storage`

```python theme={null}
execute_bundle_from_azure_blob_storage(container: str, key: str, azure_blob_storage_credentials_block_name: str)
```

# upload

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-experimental-bundles-upload

# `prefect_azure.experimental.bundles.upload`

## Functions

### `upload_bundle_to_azure_blob_storage`

```python theme={null}
upload_bundle_to_azure_blob_storage(local_filepath: Path, container: str, key: str, azure_blob_storage_credentials_block_name: str) -> UploadBundleToAzureBlobStorageOutput
```

## Classes

### `UploadBundleToAzureBlobStorageOutput`

# decorators

Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-experimental-decorators

# `prefect_azure.experimental.decorators`

## Functions

### `azure_container_instance`

```python theme={null}
azure_container_instance(work_pool: str, **job_variables: Any) -> Callable[[Flow[P, R]], InfrastructureBoundFlow[P, R]]
```

Decorator that
binds execution of a flow to an Azure Container Instance work pool. **Args:** * `work_pool`: The name of the Azure Container Instance work pool to use * `**job_variables`: Additional job variables to use for infrastructure configuration # ml_datastore Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-ml_datastore # `prefect_azure.ml_datastore` Tasks for interacting with Azure ML Datastore ## Functions ### `ml_list_datastores` ```python theme={null} ml_list_datastores(ml_credentials: 'AzureMlCredentials') -> Dict ``` Lists the Datastores in the Workspace. **Args:** * `ml_credentials`: Credentials to use for authentication with Azure. ### `ml_get_datastore` ```python theme={null} ml_get_datastore(ml_credentials: 'AzureMlCredentials', datastore_name: Optional[str] = None) -> Datastore ``` Gets the Datastore within the Workspace. **Args:** * `ml_credentials`: Credentials to use for authentication with Azure. * `datastore_name`: The name of the Datastore. If `None`, then the default Datastore of the Workspace is returned. ### `ml_upload_datastore` ```python theme={null} ml_upload_datastore(path: Union[str, Path, List[Union[str, Path]]], ml_credentials: 'AzureMlCredentials', target_path: Union[str, Path, None] = None, relative_root: Union[str, Path, None] = None, datastore_name: Optional[str] = None, overwrite: bool = False) -> 'DataReference' ``` Uploads local files to a Datastore. **Args:** * `path`: The path to a single file, a single directory, or a list of paths to files to be uploaded. * `ml_credentials`: Credentials to use for authentication with Azure. * `target_path`: The location in the blob container to upload to. If None, then upload to root. * `relative_root`: The root used to determine the path of the files in the blob. For example, if we upload /path/to/file.txt and define the base path to be /path, then file.txt will be uploaded to blob storage with the path /to/file.txt.
* `datastore_name`: The name of the Datastore. If `None`, then the default Datastore of the Workspace is returned. * `overwrite`: Overwrite existing file(s). ### `ml_register_datastore_blob_container` ```python theme={null} ml_register_datastore_blob_container(container_name: str, ml_credentials: 'AzureMlCredentials', blob_storage_credentials: 'AzureBlobStorageCredentials', datastore_name: Optional[str] = None, create_container_if_not_exists: bool = False, overwrite: bool = False, set_as_default: bool = False) -> 'AzureBlobDatastore' ``` Registers an Azure Blob Storage container as a Datastore in an Azure ML service Workspace. **Args:** * `container_name`: The name of the container. * `ml_credentials`: Credentials to use for authentication with Azure ML. * `blob_storage_credentials`: Credentials to use for authentication with Azure Blob Storage. * `datastore_name`: The name of the datastore. If not defined, the container name will be used. * `create_container_if_not_exists`: Create a container, if one does not exist with the given name. * `overwrite`: Overwrite an existing datastore. If the datastore does not exist, it will be created. * `set_as_default`: Set the created Datastore as the default datastore for the Workspace. # repository Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-repository # `prefect_azure.repository` Interact with files stored in Azure DevOps Git repositories. The `AzureDevopsRepository` class in this module is a storage block that lets Prefect agents pull Prefect flow code from Azure DevOps repositories. The `AzureDevopsRepository` block is ideally configured via the Prefect UI, but can also be used in Python as the following examples demonstrate.
Examples: Load a configured Azure DevOps repository block: ```python theme={null} from prefect_azure.repository import AzureDevopsRepository azuredevops_repository_block = AzureDevopsRepository.load("BLOCK_NAME") ``` Clone a public Azure DevOps repository: ```python theme={null} from prefect_azure.repository import AzureDevopsRepository public_repo = AzureDevopsRepository( repository="https://dev.azure.com/myorg/myproject/_git/myrepo" ) public_repo.save(name="my-azuredevops-block") ``` Clone a specific branch or tag: ```python theme={null} from prefect_azure.repository import AzureDevopsRepository branch_repo = AzureDevopsRepository( repository="https://dev.azure.com/myorg/myproject/_git/myrepo", reference="develop" ) branch_repo.save(name="my-azuredevops-branch-block") ``` Clone a private Azure DevOps repository: ```python theme={null} from prefect_azure import AzureDevopsCredentials, AzureDevopsRepository azuredevops_credentials_block = AzureDevopsCredentials.load("my-azuredevops-credentials") private_repo = AzureDevopsRepository( repository="https://dev.azure.com/myorg/myproject/_git/myrepo", credentials=azuredevops_credentials_block ) private_repo.save(name="my-private-azuredevops-block") ``` ## Classes ### `AzureDevopsRepository` Interact with files stored in Azure DevOps Git repositories. **Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Asynchronously clones an Azure DevOps repository. This defaults to cloning the repository reference configured on the Block to the present working directory. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. 
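The `from_path`/`local_path` semantics above can be illustrated with plain `shutil` — a minimal sketch of the copy behavior, assuming the repository has already been cloned to a local directory (this is not the block's actual implementation, and the names here are hypothetical):

```python
import shutil
import tempfile
from pathlib import Path

def copy_subdirectory(repo_root, from_path=None, local_path=None):
    """Copy `from_path` (a subdirectory of the cloned repo) into `local_path`,
    defaulting to the whole repository and the current working directory."""
    src = Path(repo_root) / from_path if from_path else Path(repo_root)
    dest = Path(local_path) if local_path else Path.cwd()
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest

# demo against a throwaway directory standing in for a cloned repository
repo = Path(tempfile.mkdtemp())
(repo / "flows").mkdir()
(repo / "flows" / "etl.py").write_text("# flow code")
dest = copy_subdirectory(repo, from_path="flows", local_path=repo / "checkout")
```

Passing `from_path` thus copies only that subdirectory's contents into the destination, which is why omitting both arguments places the entire cloned reference in the present working directory.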
#### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones an Azure DevOps project within `from_path` to the provided `local_path`. This defaults to cloning the repository reference configured on the Block to the present working directory. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. # __init__ Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-workers-__init__ # `prefect_azure.workers` Worker classes for Azure # container_instance Source: https://docs.prefect.io/integrations/prefect-azure/api-ref/prefect_azure-workers-container_instance # `prefect_azure.workers.container_instance` Module containing the Azure Container Instances worker used for executing flow runs in ACI containers. To start an ACI worker, run the following command: ```bash theme={null} prefect worker start --pool 'my-work-pool' --type azure-container-instance ``` Replace `my-work-pool` with the name of the work pool you want the worker to poll for flow runs. !!! example "Using a custom ARM template" To facilitate easy customization, the Azure Container worker provisions a container group using an ARM template. The default ARM template is represented in YAML as follows: ```yaml theme={null} --- arm_template: "$schema": https://schema.management.azure.com/schemas/2019-08-01/deploymentTemplate.json# contentVersion: 1.0.0.0 parameters: location: type: string defaultValue: "[resourceGroup().location]" metadata: description: Location for all resources. container_group_name: type: string defaultValue: "[uniqueString(resourceGroup().id)]" metadata: description: The name of the container group to create.
container_name: type: string defaultValue: "[uniqueString(resourceGroup().id)]" metadata: description: The name of the container to create. resources: - type: Microsoft.ContainerInstance/containerGroups apiVersion: '2022-09-01' name: "[parameters('container_group_name')]" location: "[parameters('location')]" properties: containers: - name: "[parameters('container_name')]" properties: image: rpeden/my-aci-flow:latest command: "{{ command }}" resources: requests: cpu: "{{ cpu }}" memoryInGB: "{{ memory }}" environmentVariables: [] osType: Linux restartPolicy: Never ``` Each value enclosed in `{{ }}` is a placeholder that will be replaced with a value at runtime. The values that can be used as placeholders are defined by the `variables` schema defined in the base job template. The default job manifest and available variables can be customized on a work pool by work pool basis. These customizations can be made via the Prefect UI when creating or editing a work pool. Using an ARM template makes the worker flexible; you're not limited to using the features the worker provides out of the box. Instead, you can modify the ARM template to use any features available in Azure Container Instances. ## Classes ### `ContainerGroupProvisioningState` Terminal provisioning states for ACI container groups. Per the Azure docs, the states in this Enum are the only ones that can be relied on as dependencies. ### `ContainerRunState` Terminal run states for ACI containers. ### `AzureContainerJobConfiguration` Configuration for an Azure Container Instance flow run. **Methods:** #### `prepare_for_flow_run` ```python theme={null} prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: Optional['DeploymentResponse'] = None, flow: Optional['Flow'] = None, work_pool: Optional['WorkPool'] = None, worker_name: Optional[str] = None, worker_id: Optional['UUID'] = None) ``` Prepares the job configuration for a flow run.
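The `{{ }}` placeholder substitution described above can be sketched in a few lines of Python — an illustration of the concept only, not the worker's actual templating code:

```python
import re

def render_placeholders(template: str, variables: dict) -> str:
    """Replace each {{ name }} placeholder with its runtime value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda match: str(variables[match.group(1)]),
        template,
    )

# render the cpu/memory placeholders from the default ARM template snippet
snippet = 'cpu: "{{ cpu }}"\nmemoryInGB: "{{ memory }}"'
rendered = render_placeholders(snippet, {"cpu": 1.0, "memory": 1.0})
```

In the real worker, the set of available names (`command`, `cpu`, `memory`, and so on) comes from the work pool's `variables` schema, so only declared variables can appear as placeholders.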
### `AzureContainerVariables` Variables for an Azure Container Instance flow run. ### `AzureContainerWorkerResult` Contains information about the final state of a completed process. ### `AzureContainerWorker` A Prefect worker that runs flows in an Azure Container Instance. **Methods:** #### `kill_infrastructure` ```python theme={null} kill_infrastructure(self, infrastructure_pid: str, configuration: AzureContainerJobConfiguration, grace_seconds: int = 30) -> None ``` Kill an Azure Container Instance by stopping or deleting its container group. If `configuration.keep_container_group` is True, the container group will be stopped but not deleted. Otherwise, the container group will be deleted. **Args:** * `infrastructure_pid`: The infrastructure identifier in the format "flow\_run\_id:container\_group\_name". * `configuration`: The job configuration used to connect to Azure. * `grace_seconds`: Not directly used for ACI (Azure handles graceful shutdown). **Raises:** * `InfrastructureNotFound`: If the container group doesn't exist. #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: AzureContainerJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None) ``` Run a flow in an Azure Container Instance. **Args:** * `flow_run`: The flow run to run. * `configuration`: The configuration for the flow run. * `task_status`: The task status object for the current task. Used to provide an identifier that can be used to cancel the task. **Returns:** * The result of the flow run. # prefect-azure Source: https://docs.prefect.io/integrations/prefect-azure/index `prefect-azure` makes it easy to leverage the capabilities of Azure in your workflows. For example, you can retrieve secrets, read and write Blob Storage objects, and deploy your flows on Azure Container Instances (ACI). ## Getting started ### Prerequisites * An [Azure account](https://azure.microsoft.com/) and the necessary permissions to access desired services.
### Install `prefect-azure` The following command will install a version of `prefect-azure` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[azure]" ``` Upgrade to the latest versions of `prefect` and `prefect-azure`: ```bash theme={null} pip install -U "prefect[azure]" ``` If necessary, see [additional installation options for Blob Storage, Cosmos DB, and ML Datastore](#additional-installation-options). To install prefect-azure with all additional capabilities, run the install command above and then run the following command: ```bash theme={null} pip install "prefect-azure[all_extras]" ``` ### Register newly installed block types Register the block types in the module to make them available for use. ```bash theme={null} prefect block register -m prefect_azure ``` ## Examples ### Download a blob ```python theme={null} from prefect import flow from prefect_azure import AzureBlobStorageCredentials from prefect_azure.blob_storage import blob_storage_download @flow def example_blob_storage_download_flow(): connection_string = "connection_string" blob_storage_credentials = AzureBlobStorageCredentials( connection_string=connection_string, ) data = blob_storage_download( blob="prefect.txt", container="prefect", blob_storage_credentials=blob_storage_credentials, ) return data example_blob_storage_download_flow() ``` Use `with_options` to customize options on any existing task or flow: ```python theme={null} custom_blob_storage_download_flow = example_blob_storage_download_flow.with_options( name="My custom task name", retries=2, retry_delay_seconds=10, ) ``` ### Using Service Principal Authentication Azure Blob Storage credentials support Service Principal Name (SPN) authentication for secure access to your storage account. 
Create an `AzureBlobStorageCredentials` block with your service principal credentials: ```python theme={null} from prefect_azure import AzureBlobStorageCredentials credentials = AzureBlobStorageCredentials( account_url="https://myaccount.blob.core.windows.net/", tenant_id="your-tenant-id", client_id="your-client-id", client_secret="your-client-secret" ) credentials.save("my-spn-credentials") ``` Then reference this block in your `prefect.yaml` deployment steps: ```yaml theme={null} push: - prefect_azure.deployments.steps.push_to_azure_blob_storage: requires: prefect-azure[blob_storage] container: my-container folder: my-folder credentials: "{{ prefect.blocks.azure-blob-storage-credentials.my-spn-credentials }}" pull: - prefect_azure.deployments.steps.pull_from_azure_blob_storage: requires: prefect-azure[blob_storage] container: "{{ container }}" folder: "{{ folder }}" credentials: "{{ prefect.blocks.azure-blob-storage-credentials.my-spn-credentials }}" ``` When all three SPN fields (`tenant_id`, `client_id`, `client_secret`) are provided, the credentials will use `ClientSecretCredential` for authentication. If any SPN fields are missing, it will fall back to `DefaultAzureCredential`. ### Run flows on Azure Container Instances Run flows on [Azure Container Instances (ACI)](https://learn.microsoft.com/en-us/azure/container-instances/) to dynamically scale your infrastructure. See the [Azure Container Instances Worker Guide](/integrations/prefect-azure/aci_worker/) for a walkthrough of using ACI in a hybrid work pool. If you're using Prefect Cloud, [ACI push work pools](/v3/how-to-guides/deployment_infra/serverless#azure-container-instances) provide all the benefits of ACI with a quick setup and no worker needed. ## Resources For assistance using Azure, consult the [Azure documentation](https://learn.microsoft.com/en-us/azure). 
Refer to the [prefect-azure SDK Reference](/integrations/prefect-azure/api-ref/prefect_azure-credentials) to explore all the capabilities of the `prefect-azure` library. ### Additional installation options First install the main library compatible with your `prefect` version: ```bash theme={null} pip install "prefect[azure]" ``` Then install the additional capabilities you need. To use Blob Storage: ```bash theme={null} pip install "prefect-azure[blob_storage]" ``` To use Cosmos DB: ```bash theme={null} pip install "prefect-azure[cosmos_db]" ``` To use ML Datastore: ```bash theme={null} pip install "prefect-azure[ml_datastore]" ``` # credentials Source: https://docs.prefect.io/integrations/prefect-bitbucket/api-ref/prefect_bitbucket-credentials # `prefect_bitbucket.credentials` Module to enable authenticated interactions with BitBucket. ## Classes ### `ClientType` The client type to use. ### `BitBucketCredentials` Store BitBucket credentials to interact with private BitBucket repositories. **Attributes:** * `token`: An access token to authenticate with BitBucket. This is required for accessing private repositories. * `username`: Identification name unique across the entire BitBucket site. * `password`: The password to authenticate to BitBucket. * `url`: The base URL of your BitBucket instance. **Examples:** Load stored BitBucket credentials: ```python theme={null} from prefect_bitbucket import BitBucketCredentials bitbucket_credentials_block = BitBucketCredentials.load("BLOCK_NAME") ``` **Methods:** #### `format_git_credentials` ```python theme={null} format_git_credentials(self, url: str) -> str ``` Format and return the full git URL with BitBucket credentials embedded.
BitBucket has different authentication formats: * BitBucket Server: username:token format required * BitBucket Cloud: x-token-auth:token prefix * Self-hosted instances: If username is provided, username:token format is used regardless of hostname (supports instances without 'bitbucketserver' in URL) **Args:** * `url`: Repository URL (e.g., "[https://bitbucket.org/org/repo.git](https://bitbucket.org/org/repo.git)") **Returns:** * Complete URL with credentials embedded **Raises:** * `ValueError`: If credentials are not properly configured #### `get_client` ```python theme={null} get_client(self, client_type: Union[str, ClientType], **client_kwargs) -> Union[Cloud, Bitbucket] ``` Get an authenticated local or cloud Bitbucket client. **Args:** * `client_type`: Whether to use a local or cloud client. **Returns:** * An authenticated Bitbucket client. # repository Source: https://docs.prefect.io/integrations/prefect-bitbucket/api-ref/prefect_bitbucket-repository # `prefect_bitbucket.repository` Allows for interaction with a BitBucket repository. The `BitBucket` class in this collection is a storage block that lets Prefect agents pull Prefect flow code from BitBucket repositories. The `BitBucket` block is ideally configured via the Prefect UI, but can also be used in Python as the following examples demonstrate. 
Examples: ```python theme={null} from prefect_bitbucket import BitBucketCredentials from prefect_bitbucket.repository import BitBucketRepository # public BitBucket repository public_bitbucket_block = BitBucketRepository( repository="https://bitbucket.com/my-project/my-repository.git" ) public_bitbucket_block.save(name="my-bitbucket-block") # specific branch or tag branch_bitbucket_block = BitBucketRepository( reference="branch-or-tag-name", repository="https://bitbucket.com/my-project/my-repository.git" ) branch_bitbucket_block.save(name="my-bitbucket-branch-block") # private BitBucket repository private_bitbucket_block = BitBucketRepository( repository="https://bitbucket.com/my-project/my-repository.git", bitbucket_credentials=BitBucketCredentials.load("my-bitbucket-credentials-block") ) private_bitbucket_block.save(name="my-private-bitbucket-block") ``` ## Classes ### `BitBucketRepository` Interact with files stored in BitBucket repositories. An accessible installation of git is required for this block to function properly. **Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones a BitBucket project within `from_path` to the provided `local_path`. This defaults to cloning the repository reference configured on the Block to the present working directory. Async version. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones a BitBucket project within `from_path` to the provided `local_path`. This defaults to cloning the repository reference configured on the Block to the present working directory.
**Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. # prefect-bitbucket Source: https://docs.prefect.io/integrations/prefect-bitbucket/index The `prefect-bitbucket` library makes it easy to interact with Bitbucket repositories and credentials. ## Getting started ### Prerequisites * A [Bitbucket account](https://bitbucket.org/product). ### Install `prefect-bitbucket` The following command will install a version of `prefect-bitbucket` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[bitbucket]" ``` Upgrade to the latest versions of `prefect` and `prefect-bitbucket`: ```bash theme={null} pip install -U "prefect[bitbucket]" ``` ### Register newly installed block types Register the block types in the `prefect-bitbucket` module to make them available for use. ```bash theme={null} prefect block register -m prefect_bitbucket ``` ## Examples In the examples below, you create blocks with Python code. Alternatively, blocks can be created through the Prefect UI. ## Store deployment flow code in a private Bitbucket repository To create a deployment and run a deployment where the flow code is stored in a private Bitbucket repository, you can use the `BitBucketCredentials` block. 
A deployment can use flow code stored in a Bitbucket repository without using this library in either of the following cases: * The repository is public * The deployment uses a [Secret block](/v3/develop/secrets) to store the token Create a Bitbucket Credentials block: ```python theme={null} from prefect_bitbucket import BitBucketCredentials bitbucket_credentials_block = BitBucketCredentials(token="x-token-auth:my-token") bitbucket_credentials_block.save(name="my-bitbucket-credentials-block") ``` **Difference between Bitbucket Server and Bitbucket Cloud authentication** If using a token to authenticate to Bitbucket Cloud, only set the `token` to authenticate. Do not include a value in the `username` field or authentication will fail. If using Bitbucket Server, provide both the `token` and `username` values. ### Access flow code stored in a private Bitbucket repository in a deployment Use the credentials block you created above to pass the Bitbucket access token during deployment creation. The code below assumes there's flow code stored in a private Bitbucket repository. 
```python theme={null} from prefect import flow from prefect.runner.storage import GitRepository from prefect_bitbucket import BitBucketCredentials if __name__ == "__main__": source = GitRepository( url="https://bitbucket.org/org/private-repo.git", credentials=BitBucketCredentials.load("my-bitbucket-credentials-block") ) flow.from_source( source=source, entrypoint="my_file.py:my_flow", ).deploy( name="private-bitbucket-deploy", work_pool_name="my_pool", ) ``` Alternatively, if you use a `prefect.yaml` file to create the deployment, reference the Bitbucket Credentials block in the `pull` step: ```yaml theme={null} pull: - prefect.deployments.steps.git_clone: repository: https://bitbucket.org/org/private-repo.git credentials: "{{ prefect.blocks.bitbucket-credentials.my-bitbucket-credentials-block }}" ``` Or, use `pull_with_block` to pull code using a Bitbucket Repository block directly: ```yaml theme={null} pull: - prefect.deployments.steps.pull_with_block: block_type_slug: bitbucket-repository block_document_name: my-bitbucket-block ``` ### Interact with a Bitbucket repository The code below shows how to reference a particular branch or tag of a Bitbucket repository. ```python theme={null} from prefect_bitbucket import BitBucketRepository def save_bitbucket_block(): bitbucket_block = BitBucketRepository( repository="https://bitbucket.org/testing/my-repository.git", reference="branch-or-tag-name", ) bitbucket_block.save("my-bitbucket-block") if __name__ == "__main__": save_bitbucket_block() ``` Exclude the `reference` field to use the default branch. Reference a BitBucketCredentials block for authentication if the repository is private. Use the newly created block to interact with the Bitbucket repository.
For example, download the repository contents with the `.get_directory()` method like this: ```python theme={null} from prefect_bitbucket.repository import BitBucketRepository def fetch_repo(): bitbucket_block = BitBucketRepository.load("my-bitbucket-block") bitbucket_block.get_directory() if __name__ == "__main__": fetch_repo() ``` ## Resources For assistance using Bitbucket, consult the [Bitbucket documentation](https://bitbucket.org/product/guides). Refer to the `prefect-bitbucket` [SDK documentation](/integrations/prefect-bitbucket/api-ref/prefect_bitbucket-credentials) to explore all the capabilities of the `prefect-bitbucket` library. # client Source: https://docs.prefect.io/integrations/prefect-dask/api-ref/prefect_dask-client # `prefect_dask.client` ## Classes ### `PrefectDaskClient` **Methods:** #### `map` ```python theme={null} map(self, func, *iterables, **kwargs) ``` #### `submit` ```python theme={null} submit(self, func, *args, **kwargs) ``` # task_runners Source: https://docs.prefect.io/integrations/prefect-dask/api-ref/prefect_dask-task_runners # `prefect_dask.task_runners` Interface and implementations of the Dask Task Runner. [Task Runners](https://docs.prefect.io/api-ref/prefect/task-runners/) in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow.
Example: ```python theme={null} import time from prefect import flow, task @task def shout(number): time.sleep(0.5) print(f"#{number}") @flow def count_to(highest_number): for number in range(highest_number): shout.submit(number) if __name__ == "__main__": count_to(10) # outputs #0 #1 #2 #3 #4 #5 #6 #7 #8 #9 ``` Switching to a `DaskTaskRunner`: ```python theme={null} import time from prefect import flow, task from prefect_dask import DaskTaskRunner @task def shout(number): time.sleep(0.5) print(f"#{number}") @flow(task_runner=DaskTaskRunner) def count_to(highest_number): for number in range(highest_number): shout.submit(number) if __name__ == "__main__": count_to(10) # outputs #3 #7 #2 #6 #4 #0 #1 #5 #8 #9 ``` ## Classes ### `PrefectDaskFuture` A Prefect future that wraps a distributed.Future. This future is used when the task run is submitted to a DaskTaskRunner. **Methods:** #### `result` ```python theme={null} result(self, timeout: Optional[float] = None, raise_on_failure: bool = True) -> R ``` #### `wait` ```python theme={null} wait(self, timeout: Optional[float] = None) -> None ``` ### `DaskTaskRunner` A parallel task\_runner that submits tasks to the `dask.distributed` scheduler. By default a temporary `distributed.LocalCluster` is created (and subsequently torn down) within the `start()` contextmanager. To use a different cluster class (e.g. [`dask_kubernetes.KubeCluster`](https://kubernetes.dask.org/)), you can specify `cluster_class`/`cluster_kwargs`. Alternatively, if you already have a dask cluster running, you can provide the cluster object via the `cluster` kwarg or the address of the scheduler via the `address` kwarg. !!! warning "Multiprocessing safety" Note that, because the `DaskTaskRunner` uses multiprocessing, calls to flows in scripts must be guarded with `if __name__ == "__main__":` or warnings will be displayed. 
**Args:** * `cluster`: Currently running dask cluster; if one is not provided (or specified via `address` kwarg), a temporary cluster will be created in `DaskTaskRunner.start()`. Defaults to `None`. * `address`: Address of a currently running dask scheduler. Defaults to `None`. * `cluster_class`: The cluster class to use when creating a temporary dask cluster. Can be either the full class name (e.g. `"distributed.LocalCluster"`), or the class itself. * `cluster_kwargs`: Additional kwargs to pass to the `cluster_class` when creating a temporary dask cluster. * `adapt_kwargs`: Additional kwargs to pass to `cluster.adapt` when creating a temporary dask cluster. Note that adaptive scaling is only enabled if `adapt_kwargs` are provided. * `client_kwargs`: Additional kwargs to use when creating a [`dask.distributed.Client`](https://distributed.dask.org/en/latest/api.html#client). * `performance_report_path`: Path where the Dask performance report will be saved. If not provided, no performance report will be generated. **Examples:** Using a temporary local dask cluster: ```python theme={null} from prefect import flow from prefect_dask.task_runners import DaskTaskRunner @flow(task_runner=DaskTaskRunner) def my_flow(): ... ``` Using a temporary cluster running elsewhere. Any Dask cluster class should work, here we use [dask-cloudprovider](https://cloudprovider.dask.org): ```python theme={null} DaskTaskRunner( cluster_class="dask_cloudprovider.FargateCluster", cluster_kwargs={ "image": "prefecthq/prefect:latest", "n_workers": 5, }, ) ``` Connecting to an existing dask cluster: ```python theme={null} DaskTaskRunner(address="192.0.2.255:8786") ``` **Methods:** #### `client` ```python theme={null} client(self) -> PrefectDaskClient ``` Get the Dask client for the task runner. The client is created on first access. If a remote cluster is not provided, the client will attempt to create/connect to a local cluster.
#### `duplicate` ```python theme={null} duplicate(self) ``` Create a new instance of the task runner with the same settings. #### `map` ```python theme={null} map(self, task: 'Task[P, Coroutine[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDaskFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[P, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDaskFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[P, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, Coroutine[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectDaskFuture[R]] | None = None, dependencies: dict[str, Set[RunInput]] | None = None) -> PrefectDaskFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectDaskFuture[R]] | None = None, dependencies: dict[str, Set[RunInput]] | None = None) -> PrefectDaskFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Union[Task[P, R], Task[P, Coroutine[Any, Any, R]]]', parameters: dict[str, Any], wait_for: Iterable[PrefectDaskFuture[R]] | None = None, dependencies: dict[str, Set[RunInput]] | None = None) -> PrefectDaskFuture[R] ``` # utils Source: https://docs.prefect.io/integrations/prefect-dask/api-ref/prefect_dask-utils # `prefect_dask.utils` Utils to use alongside prefect-dask. ## Functions ### `get_dask_client` ```python theme={null} get_dask_client(timeout: Optional[Union[int, float, str, timedelta]] = None, **client_kwargs: Dict[str, Any]) -> Generator[Client, None, None] ``` Yields a temporary synchronous dask client; this is useful for parallelizing operations on dask collections, such as a `dask.DataFrame` or `dask.Bag`. 
Without invoking this, workers do not automatically get a client to connect to the full cluster. Therefore, it will attempt to perform work within the worker itself serially, potentially overwhelming the single worker. When in an async context, we recommend using `get_async_dask_client` instead. **Args:** * `timeout`: Timeout after which to error out; has no effect in flow run contexts because the client has already started; Defaults to the `distributed.comm.timeouts.connect` configuration value. * `client_kwargs`: Additional keyword arguments to pass to `distributed.Client`, and overwrites inherited keyword arguments from the task runner, if any. **Examples:** Use `get_dask_client` to distribute work across workers. ```python theme={null} import dask from prefect import flow, task from prefect_dask import DaskTaskRunner, get_dask_client @task def compute_task(): with get_dask_client(timeout="120s") as client: df = dask.datasets.timeseries("2000", "2001", partition_freq="4w") summary_df = client.compute(df.describe()).result() return summary_df @flow(task_runner=DaskTaskRunner()) def dask_flow(): prefect_future = compute_task.submit() return prefect_future.result() dask_flow() ``` ### `get_async_dask_client` ```python theme={null} get_async_dask_client(timeout: Optional[Union[int, float, str, timedelta]] = None, **client_kwargs: Dict[str, Any]) -> AsyncGenerator[Client, None] ``` Yields a temporary asynchronous dask client; this is useful for parallelizing operations on dask collections, such as a `dask.DataFrame` or `dask.Bag`. Without invoking this, workers do not automatically get a client to connect to the full cluster. Therefore, it will attempt to perform work within the worker itself serially, potentially overwhelming the single worker. **Args:** * `timeout`: Timeout after which to error out; has no effect in flow run contexts because the client has already started; Defaults to the `distributed.comm.timeouts.connect` configuration value.
* `client_kwargs`: Additional keyword arguments to pass to `distributed.Client`, overwriting inherited keyword arguments from the task runner, if any. **Examples:** Use `get_async_dask_client` to distribute work across workers. ```python theme={null} import asyncio import dask from prefect import flow, task from prefect_dask import DaskTaskRunner, get_async_dask_client @task async def compute_task(): async with get_async_dask_client(timeout="120s") as client: df = dask.datasets.timeseries("2000", "2001", partition_freq="4w") summary_df = await client.compute(df.describe()) return summary_df @flow(task_runner=DaskTaskRunner()) async def dask_flow(): prefect_future = compute_task.submit() return prefect_future.result() asyncio.run(dask_flow()) ``` # prefect-dask Source: https://docs.prefect.io/integrations/prefect-dask/index Accelerate your workflows by running tasks in parallel with Dask Dask can run your tasks in parallel and distribute them over multiple machines. The `prefect-dask` integration makes it easy to accelerate your flow runs with Dask. ## Getting started ### Install `prefect-dask` The following command will install a version of `prefect-dask` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash pip theme={null} pip install "prefect[dask]" ``` ```bash uv theme={null} uv pip install "prefect[dask]" ``` Upgrade to the latest versions of `prefect` and `prefect-dask`: ```bash pip theme={null} pip install -U "prefect[dask]" ``` ```bash uv theme={null} uv pip install -U "prefect[dask]" ``` ## Why use Dask? Say your flow downloads many images to train a machine learning model. It takes longer than you'd like for the flow to run because it executes sequentially. To accelerate your flow code, parallelize it with `prefect-dask` in three steps: 1. Add the import: `from prefect_dask import DaskTaskRunner` 2.
Specify the task runner in the flow decorator: `@flow(task_runner=DaskTaskRunner)` 3. Submit tasks to the flow's task runner: `a_task.submit(*args, **kwargs)` Below is code with and without the DaskTaskRunner: ```python theme={null} # Completed in 15.2 seconds from typing import List from pathlib import Path import httpx from prefect import flow, task URL_FORMAT = ( "https://www.cpc.ncep.noaa.gov/products/NMME/archive/" "{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png" ) @task def download_image(year: int, month: int, directory: Path) -> Path: # download image from URL url = URL_FORMAT.format(year=year, month=month) resp = httpx.get(url) # save content to directory/YYYYMM.png file_path = (directory / url.split("/")[-1]).with_stem(f"{year:04d}{month:02d}") file_path.write_bytes(resp.content) return file_path @flow def download_nino_34_plumes_from_year(year: int) -> List[Path]: # create a directory to hold images directory = Path("data") directory.mkdir(exist_ok=True) # download all images file_paths = [] for month in range(1, 12 + 1): file_path = download_image(year, month, directory) file_paths.append(file_path) return file_paths if __name__ == "__main__": download_nino_34_plumes_from_year(2022) ``` ```python theme={null} # Completed in 5.7 seconds from typing import List from pathlib import Path import httpx from prefect import flow, task from prefect_dask import DaskTaskRunner URL_FORMAT = ( "https://www.cpc.ncep.noaa.gov/products/NMME/archive/" "{year:04d}{month:02d}0800/current/images/nino34.rescaling.ENSMEAN.png" ) @task def download_image(year: int, month: int, directory: Path) -> Path: # download image from URL url = URL_FORMAT.format(year=year, month=month) resp = httpx.get(url) # save content to directory/YYYYMM.png file_path = (directory / url.split("/")[-1]).with_stem(f"{year:04d}{month:02d}") file_path.write_bytes(resp.content) return file_path @flow(task_runner=DaskTaskRunner(cluster_kwargs={"processes": False})) def 
download_nino_34_plumes_from_year(year: int) -> List[Path]: # create a directory to hold images directory = Path("data") directory.mkdir(exist_ok=True) # download all images file_paths = [] for month in range(1, 12 + 1): file_path = download_image.submit(year, month, directory) file_paths.append(file_path) return file_paths if __name__ == "__main__": download_nino_34_plumes_from_year(2022) ``` In our tests, the flow run took 15.2 seconds to execute sequentially. Using the `DaskTaskRunner` reduced the runtime to **5.7** seconds! ## Run tasks on Dask The `DaskTaskRunner` is a [task runner](/v3/develop/task-runners) that submits tasks to the [`dask.distributed`](http://distributed.dask.org/) scheduler. By default, when the `DaskTaskRunner` is specified for a flow run, a temporary Dask cluster is created and used for the duration of the flow run. If you already have a Dask cluster running, either cloud-hosted or local, you can provide the connection URL with the `address` kwarg. `DaskTaskRunner` accepts the following optional parameters: | Parameter | Description | | --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | address | Address of a currently running Dask scheduler. | | cluster\_class | The cluster class to use when creating a temporary Dask cluster. It can be either the full class name (for example, `"distributed.LocalCluster"`), or the class itself. | | cluster\_kwargs | Additional kwargs to pass to the `cluster_class` when creating a temporary Dask cluster. | | adapt\_kwargs | Additional kwargs to pass to `cluster.adapt` when creating a temporary Dask cluster. Note that adaptive scaling is only enabled if `adapt_kwargs` are provided. | | client\_kwargs | Additional kwargs to use when creating a [`dask.distributed.Client`](https://distributed.dask.org/en/latest/api.html#client). 
| **Multiprocessing safety** Because the `DaskTaskRunner` uses multiprocessing, calls to flows in scripts must be guarded with `if __name__ == "__main__":` or you will encounter warnings and errors. If you don't provide the `address` of a Dask scheduler, Prefect creates a temporary local cluster automatically. The number of workers used is based on the number of cores on your machine. The default provides a mix of processes and threads that work well for most workloads. To specify this explicitly, pass values for `n_workers` or `threads_per_worker` to `cluster_kwargs`: ```python theme={null} from prefect_dask import DaskTaskRunner # Use 4 worker processes, each with 2 threads DaskTaskRunner( cluster_kwargs={"n_workers": 4, "threads_per_worker": 2} ) ``` ### Use a temporary cluster The `DaskTaskRunner` can create a temporary cluster using any of [Dask's cluster-manager options](https://docs.dask.org/en/latest/setup.html). This is useful when you want each flow run to have its own Dask cluster, allowing for per-flow adaptive scaling. To configure it, provide a `cluster_class`. This can be: * A string specifying the import path to the cluster class (for example, `"dask_cloudprovider.aws.FargateCluster"`) * The cluster class itself * A function for creating a custom cluster You can also configure `cluster_kwargs`. This takes a dictionary of keyword arguments to pass to `cluster_class` when starting the flow run. For example, to configure a flow to use a temporary `dask_cloudprovider.aws.FargateCluster` with four workers running with an image named `my-prefect-image`: ```python theme={null} from prefect_dask import DaskTaskRunner DaskTaskRunner( cluster_class="dask_cloudprovider.aws.FargateCluster", cluster_kwargs={"n_workers": 4, "image": "my-prefect-image"}, ) ``` For larger workloads, you can accelerate execution further by distributing task runs over multiple machines. 
### Connect to an existing cluster Multiple Prefect flow runs can use the same existing Dask cluster. You might manage a single long-running Dask cluster (for example, using the Dask [Helm Chart](https://docs.dask.org/en/latest/setup/kubernetes-helm.html)) and configure flows to connect to it during execution. This has disadvantages compared to using a temporary Dask cluster: * All workers in the cluster must have dependencies installed for all flows you intend to run. * Multiple flow runs may compete for resources. Dask tries to do a good job sharing resources between tasks, but you may still run into issues. Still, you may prefer managing a single long-running Dask cluster. To configure a `DaskTaskRunner` to connect to an existing cluster, pass in the address of the scheduler to the `address` argument: ```python theme={null} from prefect import flow from prefect_dask import DaskTaskRunner @flow(task_runner=DaskTaskRunner(address="http://my-dask-cluster")) def my_flow(): ... ``` Suppose you have existing Dask code that creates a client and operates on Dask collections such as a `dask.dataframe.DataFrame`. Converting it to use `prefect-dask` takes just a few steps: 1. Add imports 2. Add `task` and `flow` decorators 3. Use the `get_dask_client` context manager to distribute work across Dask workers 4. Specify the task runner and client's address in the flow decorator 5.
Submit the tasks to the flow's task runner ```python theme={null} import dask.dataframe import dask.distributed client = dask.distributed.Client() def read_data(start: str, end: str) -> dask.dataframe.DataFrame: df = dask.datasets.timeseries(start, end, partition_freq="4w") return df def process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame: df_yearly_avg = df.groupby(df.index.year).mean() return df_yearly_avg.compute() def dask_pipeline(): df = read_data("1988", "2022") df_yearly_average = process_data(df) return df_yearly_average if __name__ == "__main__": dask_pipeline() ``` ```python theme={null} import dask.dataframe import dask.distributed from prefect import flow, task from prefect_dask import DaskTaskRunner, get_dask_client client = dask.distributed.Client() @task def read_data(start: str, end: str) -> dask.dataframe.DataFrame: df = dask.datasets.timeseries(start, end, partition_freq="4w") return df @task def process_data(df: dask.dataframe.DataFrame) -> dask.dataframe.DataFrame: with get_dask_client(): df_yearly_avg = df.groupby(df.index.year).mean() return df_yearly_avg.compute() @flow(task_runner=DaskTaskRunner(address=client.scheduler.address)) def dask_pipeline(): df = read_data.submit("1988", "2022") df_yearly_average = process_data.submit(df) return df_yearly_average if __name__ == "__main__": dask_pipeline() ``` ### Configure adaptive scaling A key feature of using a `DaskTaskRunner` is the ability to scale adaptively to the workload. Instead of specifying `n_workers` as a fixed number, you can specify a minimum and maximum number of workers to use, and the Dask cluster scales up and down as needed. To do this, pass `adapt_kwargs` to `DaskTaskRunner`. This takes the following fields: * `maximum` (`int` or `None`, optional): the maximum number of workers to scale to. Set to `None` for no maximum. * `minimum` (`int` or `None`, optional): the minimum number of workers to scale to. Set to `None` for no minimum. 
For example, this configures a flow to run on a `FargateCluster` scaling up to a maximum of 10 workers: ```python theme={null} from prefect_dask import DaskTaskRunner DaskTaskRunner( cluster_class="dask_cloudprovider.aws.FargateCluster", adapt_kwargs={"maximum": 10} ) ``` ### Use Dask annotations Use Dask annotations to further control the behavior of tasks. For example, set the [priority](http://distributed.dask.org/en/stable/priority.html) of tasks in the Dask scheduler: ```python theme={null} import dask from prefect import flow, task from prefect_dask.task_runners import DaskTaskRunner @task def show(x): print(x) @flow(task_runner=DaskTaskRunner()) def my_flow(): with dask.annotate(priority=-10): future = show.submit(1) # low priority task with dask.annotate(priority=10): future = show.submit(2) # high priority task ``` Another common use case is [resource](http://distributed.dask.org/en/stable/resources.html) annotations: ```python theme={null} import dask from prefect import flow, task from prefect_dask.task_runners import DaskTaskRunner @task def show(x): print(x) # Create a `LocalCluster` with some resource annotations # Annotations are abstract in dask and not inferred from your system. 
# Here, we claim that our system has 1 GPU and 1 process available per worker @flow( task_runner=DaskTaskRunner( cluster_kwargs={"n_workers": 1, "resources": {"GPU": 1, "process": 1}} ) ) def my_flow(): with dask.annotate(resources={'GPU': 1}): future = show.submit(0) # this task requires 1 GPU resource on a worker with dask.annotate(resources={'process': 1}): # These tasks each require 1 process on a worker; because we've # specified that our cluster has 1 process per worker and 1 worker, # these tasks will run sequentially future = show.submit(1) future = show.submit(2) future = show.submit(3) if __name__ == "__main__": my_flow() ``` ## Additional Resources Refer to the `prefect-dask` [SDK documentation](/integrations/prefect-dask/api-ref/prefect_dask-client) to explore all the capabilities of the `prefect-dask` library. For assistance using Dask, consult the [Dask documentation](https://docs.dask.org/en/stable/). **Resolving futures in sync client** Note that, by default, `dask_collection.compute()` returns concrete values while `client.compute(dask_collection)` returns Dask Futures. Therefore, if you call `client.compute`, you must resolve all futures before exiting the context manager by either: 1. setting `sync=True` ```python theme={null} with get_dask_client() as client: df = dask.datasets.timeseries("2000", "2001", partition_freq="4w") summary_df = client.compute(df.describe(), sync=True) ``` 2. calling `result()` ```python theme={null} with get_dask_client() as client: df = dask.datasets.timeseries("2000", "2001", partition_freq="4w") summary_df = client.compute(df.describe()).result() ``` For more information, visit the docs on [Waiting on Futures](https://docs.dask.org/en/stable/futures.html#waiting-on-futures). There is also an equivalent context manager for asynchronous tasks and flows: `get_async_dask_client`. When using the async client, you must `await client.compute(dask_collection)` before exiting the context manager.
Note that task submission (`.submit()`) and future resolution (`.result()`) are always synchronous operations in Prefect, even when working with async tasks and flows. # credentials Source: https://docs.prefect.io/integrations/prefect-databricks/api-ref/prefect_databricks-credentials # `prefect_databricks.credentials` Credential classes used to perform authenticated interactions with Databricks ## Classes ### `DatabricksCredentials` Block used to manage Databricks authentication. Supports two authentication methods: 1. Personal Access Token (PAT): Provide a `token` field. 2. Service Principal (OAuth 2.0): Provide `client_id`, `client_secret`, and optionally `tenant_id` for Azure Databricks. **Attributes:** * `databricks_instance`: Databricks instance used in formatting the endpoint URL. * `token`: The token to authenticate with Databricks (for PAT authentication). * `client_id`: The service principal client ID (for OAuth authentication). * `client_secret`: The service principal client secret (for OAuth authentication). * `tenant_id`: The tenant ID for Azure Databricks (optional, for OAuth authentication). * `client_kwargs`: Additional keyword arguments to pass to AsyncClient. **Examples:** Load stored Databricks credentials using PAT: ```python theme={null} from prefect_databricks import DatabricksCredentials databricks_credentials_block = DatabricksCredentials.load("BLOCK_NAME") ``` Using service principal authentication: ```python theme={null} from prefect_databricks import DatabricksCredentials credentials = DatabricksCredentials( databricks_instance="dbc-abc123-def4.cloud.databricks.com", client_id="my-client-id", client_secret="my-client-secret", ) client = credentials.get_client() ``` **Methods:** #### `get_client` ```python theme={null} get_client(self) -> AsyncClient ``` Gets a Databricks REST AsyncClient. **Returns:** * A Databricks REST AsyncClient. 
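To make the two authentication modes described above concrete, the mutual-exclusivity rule between PAT and service-principal fields can be illustrated as standalone logic. This is a hypothetical sketch with an invented helper name (`resolve_auth_method`), not the block's actual validator:

```python
from typing import Optional


def resolve_auth_method(
    token: Optional[str] = None,
    client_id: Optional[str] = None,
    client_secret: Optional[str] = None,
) -> str:
    """Return which Databricks auth method a set of fields implies, or raise."""
    has_pat = token is not None
    has_service_principal = client_id is not None and client_secret is not None
    # PAT and service-principal fields must not be mixed
    if has_pat and (client_id is not None or client_secret is not None):
        raise ValueError("Provide either a token (PAT) or client_id/client_secret, not both.")
    if has_pat:
        return "pat"
    if has_service_principal:
        return "service_principal"
    raise ValueError("Provide a token, or both client_id and client_secret.")
```

The optional `tenant_id` for Azure Databricks layers on top of the service-principal case and does not change this rule.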
#### `validate_auth_method` ```python theme={null} validate_auth_method(cls, values: Dict[str, Any]) -> Dict[str, Any] ``` Validates that either PAT or service principal authentication is configured, but not both. Valid configurations: 1. token only (PAT authentication) 2. client\_id + client\_secret (service principal authentication) 3. client\_id + client\_secret + tenant\_id (Azure service principal authentication) # flows Source: https://docs.prefect.io/integrations/prefect-databricks/api-ref/prefect_databricks-flows # `prefect_databricks.flows` Module containing flows for interacting with Databricks ## Functions ### `jobs_runs_submit_and_wait_for_completion` ```python theme={null} jobs_runs_submit_and_wait_for_completion(databricks_credentials: DatabricksCredentials, tasks: Optional[List[RunSubmitTaskSettings]] = None, run_name: Optional[str] = None, max_wait_seconds: int = 900, poll_frequency_seconds: int = 10, git_source: Optional[GitSource] = None, timeout_seconds: Optional[int] = None, idempotency_token: Optional[str] = None, access_control_list: Optional[List[AccessControlRequest]] = None, return_metadata: bool = False, job_submission_handler: Optional[Callable] = None, **jobs_runs_submit_kwargs: Dict[str, Any]) -> Union[NotebookOutput, Tuple[NotebookOutput, JobMetadata], None] ``` Flow that triggers a job run and waits for the triggered run to complete. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `tasks`: Tasks to run, e.g. 
``` [ { "task_key"\: "Sessionize", "description"\: "Extracts session data from events", "depends_on"\: [], "existing_cluster_id"\: "0923-164208-meows279", "spark_jar_task"\: { "main_class_name"\: "com.databricks.Sessionize", "parameters"\: ["--data", "dbfs\:/path/to/data.json"], }, "libraries"\: [{"jar"\: "dbfs\:/mnt/databricks/Sessionize.jar"}], "timeout_seconds"\: 86400, }, { "task_key"\: "Orders_Ingest", "description"\: "Ingests order data", "depends_on"\: [], "existing_cluster_id"\: "0923-164208-meows279", "spark_jar_task"\: { "main_class_name"\: "com.databricks.OrdersIngest", "parameters"\: ["--data", "dbfs\:/path/to/order-data.json"], }, "libraries"\: [{"jar"\: "dbfs\:/mnt/databricks/OrderIngest.jar"}], "timeout_seconds"\: 86400, }, { "task_key"\: "Match", "description"\: "Matches orders with user sessions", "depends_on"\: [ {"task_key"\: "Orders_Ingest"}, {"task_key"\: "Sessionize"}, ], "new_cluster"\: { "spark_version"\: "7.3.x-scala2.12", "node_type_id"\: "i3.xlarge", "spark_conf"\: {"spark.speculation"\: True}, "aws_attributes"\: { "availability"\: "SPOT", "zone_id"\: "us-west-2a", }, "autoscale"\: {"min_workers"\: 2, "max_workers"\: 16}, }, "notebook_task"\: { "notebook_path"\: "/Users/user.name@databricks.com/Match", "base_parameters"\: {"name"\: "John Doe", "age"\: "35"}, }, "timeout_seconds"\: 86400, }, ] ``` * `run_name`: An optional name for the run. The default value is `Untitled`, e.g. `A multitask job run`. * `git_source`: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. Key-values: * git\_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. `https\://github.com/databricks/databricks-cli`. * git\_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. `github`. * git\_branch: Name of the branch to be checked out and used by this job. 
This field cannot be specified in conjunction with git\_tag or git\_commit. The maximum length is 255 characters, e.g. `main`. * git\_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git\_branch or git\_commit. The maximum length is 255 characters, e.g. `release-1.0.0`. * git\_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git\_branch or git\_tag. The maximum length is 64 characters, e.g. `e0056d01`. * git\_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs. * `timeout_seconds`: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. `86400`. * `idempotency_token`: An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see [How to ensure idempotency for jobs](https://kb.databricks.com/jobs/jobs-idempotency.html), e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`. * `access_control_list`: List of permissions to set on the job. * `max_wait_seconds`: Maximum number of seconds to wait for the entire flow to complete. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. * `return_metadata`: When True, method will return a tuple of notebook output as well as job run metadata; by default though, the method only returns notebook output * `job_submission_handler`: An optional callable to intercept job submission. 
* `**jobs_runs_submit_kwargs`: Additional keyword arguments to pass to `jobs_runs_submit`. **Returns:** * Either a dict or a tuple (depends on `return_metadata`) comprised of * * task\_notebook\_outputs: dictionary mapping task keys to their corresponding notebook outputs; this is the only object returned by default from this method * * jobs\_runs\_metadata: dictionary containing IDs of the jobs runs tasks; this is only returned if `return_metadata=True`. **Examples:** Submit jobs runs and wait. ```python theme={null} from prefect import flow from prefect_databricks import DatabricksCredentials from prefect_databricks.flows import jobs_runs_submit_and_wait_for_completion from prefect_databricks.models.jobs import ( AutoScale, AwsAttributes, JobTaskSettings, NotebookTask, NewCluster, ) @flow def jobs_runs_submit_and_wait_for_completion_flow(notebook_path, **base_parameters): databricks_credentials = DatabricksCredentials.load("BLOCK_NAME") # specify new cluster settings aws_attributes = AwsAttributes( availability="SPOT", zone_id="us-west-2a", ebs_volume_type="GENERAL_PURPOSE_SSD", ebs_volume_count=3, ebs_volume_size=100, ) auto_scale = AutoScale(min_workers=1, max_workers=2) new_cluster = NewCluster( aws_attributes=aws_attributes, autoscale=auto_scale, node_type_id="m4.large", spark_version="10.4.x-scala2.12", spark_conf={"spark.speculation": True}, ) # specify notebook to use and parameters to pass notebook_task = NotebookTask( notebook_path=notebook_path, base_parameters=base_parameters, ) # compile job task settings job_task_settings = JobTaskSettings( new_cluster=new_cluster, notebook_task=notebook_task, task_key="prefect-task" ) multi_task_runs = jobs_runs_submit_and_wait_for_completion( databricks_credentials=databricks_credentials, run_name="prefect-job", tasks=[job_task_settings] ) return multi_task_runs ``` ### `jobs_runs_wait_for_completion` ```python theme={null} jobs_runs_wait_for_completion(multi_task_jobs_runs_id: int, databricks_credentials:
DatabricksCredentials, run_name: Optional[str] = None, max_wait_seconds: int = 900, poll_frequency_seconds: int = 10) ``` Flow that waits for a previously triggered job run to complete. **Args:** * `run_name`: The name of the jobs runs task. * `multi_task_jobs_runs_id`: The ID of the jobs runs task to watch. * `databricks_credentials`: Credentials to use for authentication with Databricks. * `max_wait_seconds`: Maximum number of seconds to wait for the entire flow to complete. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. **Returns:** * A dict containing the jobs runs life cycle state and message. * A dict containing IDs of the jobs runs tasks. ### `jobs_runs_submit_by_id_and_wait_for_completion` ```python theme={null} jobs_runs_submit_by_id_and_wait_for_completion(databricks_credentials: DatabricksCredentials, job_id: int, idempotency_token: Optional[str] = None, jar_params: Optional[List[str]] = None, max_wait_seconds: int = 900, poll_frequency_seconds: int = 10, notebook_params: Optional[Dict] = None, python_params: Optional[List[str]] = None, spark_submit_params: Optional[List[str]] = None, python_named_params: Optional[Dict] = None, pipeline_params: Optional[str] = None, sql_params: Optional[Dict] = None, dbt_commands: Optional[List] = None, job_submission_handler: Optional[Callable] = None, **jobs_runs_submit_kwargs: Dict[str, Any]) -> Dict ``` Flow that triggers an existing job and waits for its completion. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `job_id`: ID of the Databricks job. * `idempotency_token`: An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned.
If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see [How to ensure idempotency for jobs](https://kb.databricks.com/jobs/jobs-idempotency.html), e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`. * `jar_params`: A list of parameters for jobs with Spark JAR tasks, for example "jar\_params"\: \["john doe", "35"]. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar\_params cannot be specified in conjunction with notebook\_params. The JSON representation of this field (for example `{"jar_params"\: ["john doe","35"]}`) cannot exceed 10,000 bytes. * `max_wait_seconds`: Maximum number of seconds to wait for the entire flow to complete. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. * `notebook_params`: A map from keys to values for jobs with notebook task, for example "notebook\_params": `{"name"\: "john doe", "age"\: "35"}`. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job's base parameters. notebook\_params cannot be specified in conjunction with jar\_params. Use Task parameter variables to set parameters containing information about job runs. The JSON representation of this field (for example `{"notebook_params"\:{"name"\:"john doe","age"\:"35"}}`) cannot exceed 10,000 bytes. * `python_params`: A list of parameters for jobs with Python tasks, for example "python\_params"\: \["john doe", "35"]. The parameters are passed to the Python file as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting.
The JSON representation of this field (for example `{"python_params"\:["john doe","35"]}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis. * `spark_submit_params`: A list of parameters for jobs with spark submit task, for example "spark\_submit\_params": \["--class", "org.apache.spark.examples.SparkPi"]. The parameters are passed to spark-submit script as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (for example `{"spark_submit_params"\:["--class","org.apache.spark.examples.SparkPi"]}`) cannot exceed 10,000 bytes. Use Task parameter variables to set parameters containing information about job runs. These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis. * `python_named_params`: A map from keys to values for jobs with Python wheel task, for example "python\_named\_params": `{"name"\: "task", "data"\: "dbfs\:/path/to/data.json"}`. * `pipeline_params`: If `full_refresh` is set to true, trigger a full refresh on the delta live table, e.g. ``` "pipeline_params"\: {"full_refresh"\: true} ``` * `sql_params`: A map from keys to values for SQL tasks, for example "sql\_params": `{"name"\: "john doe", "age"\: "35"}`. The SQL alert task does not support custom parameters. * `dbt_commands`: An array of commands to execute for jobs with the dbt task, for example "dbt\_commands": \["dbt deps", "dbt seed", "dbt run"]. * `job_submission_handler`: An optional callable to intercept job submission. **Raises:** * `DatabricksJobTerminated`: Raised when the Databricks job run is terminated with a non-successful result state.
* `DatabricksJobSkipped`: Raised when the Databricks job run is skipped. * `DatabricksJobInternalError`: Raised when the Databricks job run encounters an internal error. **Returns:** * A dictionary containing information about the completed job run. ## Classes ### `DatabricksJobTerminated` Raised when a Databricks job run terminates with a non-successful result state ### `DatabricksJobSkipped` Raised when a Databricks job run is skipped ### `DatabricksJobInternalError` Raised when a Databricks job run encounters an internal error ### `DatabricksJobRunTimedOut` Raised when a Databricks job run does not complete within the configured maximum wait seconds # jobs Source: https://docs.prefect.io/integrations/prefect-databricks/api-ref/prefect_databricks-jobs # `prefect_databricks.jobs` Module containing tasks for interacting with Databricks jobs ## Functions ### `jobs_runs_export` ```python theme={null} jobs_runs_export(run_id: int, databricks_credentials: 'DatabricksCredentials', views_to_export: Optional['models.ViewsToExport'] = None) -> Dict[str, Any] ``` Export and retrieve the job run task. **Args:** * `run_id`: The canonical identifier for the run. This field is required. * `databricks_credentials`: Credentials to use for authentication with Databricks. * `views_to_export`: Which views to export (CODE, DASHBOARDS, or ALL). Defaults to CODE. **Returns:** * Upon success, a dict of the response. * `views: List["models.ViewItem"]` #### API Endpoint: `/2.0/jobs/runs/export` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Run was exported successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error.
| ### `jobs_create` ```python theme={null} jobs_create(databricks_credentials: 'DatabricksCredentials', name: str = 'Untitled', tags: Dict = None, tasks: Optional[List['models.JobTaskSettings']] = None, job_clusters: Optional[List['models.JobCluster']] = None, email_notifications: 'models.JobEmailNotifications' = None, webhook_notifications: 'models.WebhookNotifications' = None, timeout_seconds: Optional[int] = None, schedule: 'models.CronSchedule' = None, max_concurrent_runs: Optional[int] = None, git_source: 'models.GitSource' = None, format: Optional[str] = None, access_control_list: Optional[List['models.AccessControlRequest']] = None, parameters: Optional[List['models.JobParameter']] = None) -> Dict[str, Any] ``` Create a new job. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `name`: An optional name for the job, e.g. `A multitask job`. * `tags`: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g. ``` {"cost-center"\: "engineering", "team"\: "jobs"} ``` * `tasks`: A list of task specifications to be executed by this job, e.g. 
``` [ { "task_key"\: "Sessionize", "description"\: "Extracts session data from events", "depends_on"\: [], "existing_cluster_id"\: "0923-164208-meows279", "spark_jar_task"\: { "main_class_name"\: "com.databricks.Sessionize", "parameters"\: ["--data", "dbfs\:/path/to/data.json"], }, "libraries"\: [{"jar"\: "dbfs\:/mnt/databricks/Sessionize.jar"}], "timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, { "task_key"\: "Orders_Ingest", "description"\: "Ingests order data", "depends_on"\: [], "job_cluster_key"\: "auto_scaling_cluster", "spark_jar_task"\: { "main_class_name"\: "com.databricks.OrdersIngest", "parameters"\: ["--data", "dbfs\:/path/to/order-data.json"], }, "libraries"\: [{"jar"\: "dbfs\:/mnt/databricks/OrderIngest.jar"}], "timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, { "task_key"\: "Match", "description"\: "Matches orders with user sessions", "depends_on"\: [ {"task_key"\: "Orders_Ingest"}, {"task_key"\: "Sessionize"}, ], "new_cluster"\: { "spark_version"\: "7.3.x-scala2.12", "node_type_id"\: "i3.xlarge", "spark_conf"\: {"spark.speculation"\: True}, "aws_attributes"\: { "availability"\: "SPOT", "zone_id"\: "us-west-2a", }, "autoscale"\: {"min_workers"\: 2, "max_workers"\: 16}, }, "notebook_task"\: { "notebook_path"\: "/Users/user.name@databricks.com/Match", "source"\: "WORKSPACE", "base_parameters"\: {"name"\: "John Doe", "age"\: "35"}, }, "timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, ] ``` * `job_clusters`: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g. 
``` [ { "job_cluster_key"\: "auto_scaling_cluster", "new_cluster"\: { "spark_version"\: "7.3.x-scala2.12", "node_type_id"\: "i3.xlarge", "spark_conf"\: {"spark.speculation"\: True}, "aws_attributes"\: { "availability"\: "SPOT", "zone_id"\: "us-west-2a", }, "autoscale"\: {"min_workers"\: 2, "max_workers"\: 16}, }, } ] ``` * `email_notifications`: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. Key-values: * on\_start: A list of email addresses to be notified when a run begins. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent, e.g. ``` ["user.name@databricks.com"] ``` * on\_success: A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESSFUL` result\_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent, e.g. ``` ["user.name@databricks.com"] ``` * on\_failure: A list of email addresses to notify when a run completes unsuccessfully. A run is considered unsuccessful if it ends with an `INTERNAL_ERROR` `life_cycle_state` or a `SKIPPED`, `FAILED`, or `TIMED_OUT` `result_state`. If not specified on job creation, reset, or update, or the list is empty, then notifications are not sent. Job-level failure notifications are sent only once after the entire job run (including all of its retries) has failed. Notifications are not sent when failed job runs are retried. To receive a failure notification after every failed task (including every failed retry), use task-level notifications instead, e.g. ``` ["user.name@databricks.com"] ``` * no\_alert\_for\_skipped\_runs: If true, do not send email to recipients specified in `on_failure` if the run is skipped. 
* `webhook_notifications`: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. Key-values: * on\_start: An optional list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the `on_start` property, e.g. ``` [ {"id"\: "03dd86e4-57ef-4818-a950-78e41a1d71ab"}, {"id"\: "0481e838-0a59-4eff-9541-a4ca6f149574"}, ] ``` * on\_success: An optional list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property, e.g. ``` [{"id"\: "03dd86e4-57ef-4818-a950-78e41a1d71ab"}] ``` * on\_failure: An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the `on_failure` property, e.g. ``` [{"id"\: "0481e838-0a59-4eff-9541-a4ca6f149574"}] ``` * `timeout_seconds`: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. `86400`. * `schedule`: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`. Key-values: * quartz\_cron\_expression: A Cron expression using Quartz syntax that describes the schedule for a job. See [Cron Trigger](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html) for details. This field is required, e.g. `20 30 * * * ?`. * timezone\_id: A Java timezone ID. The schedule for a job is resolved with respect to this timezone. See [Java TimeZone](https://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html) for details. This field is required, e.g. `Europe/London`. * pause\_status: Indicates whether this schedule is paused, e.g. `PAUSED`. * `max_concurrent_runs`: An optional maximum allowed number of concurrent runs of the job. 
Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job’s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won’t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. `10`. * `git_source`: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. ``` { "git_url"\: "https\://github.com/databricks/databricks-cli", "git_branch"\: "main", "git_provider"\: "gitHub", } ``` Key-values: * git\_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. `https\://github.com/databricks/databricks-cli`. * git\_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. `github`. * git\_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git\_tag or git\_commit. The maximum length is 255 characters, e.g. `main`. * git\_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git\_branch or git\_commit. The maximum length is 255 characters, e.g. `release-1.0.0`. * git\_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git\_branch or git\_tag. The maximum length is 64 characters, e.g. `e0056d01`. * git\_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs. * `format`: Specifies the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1 this value is always set to `'MULTI_TASK'`, e.g. `MULTI_TASK`. * `access_control_list`: List of permissions to set on the job. * `parameters`: Job-level parameter definitions. **Returns:** * Upon success, a dict of the response. * `job_id: int` #### API Endpoint: `/2.1/jobs/create` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Job was created successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_delete` ```python theme={null} jobs_delete(databricks_credentials: 'DatabricksCredentials', job_id: Optional[int] = None) -> Dict[str, Any] ``` Deletes a job. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `job_id`: The canonical identifier of the job to delete. This field is required, e.g. `11223344`. **Returns:** * Upon success, an empty dict. #### API Endpoint: `/2.1/jobs/delete` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Job was deleted successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_get` ```python theme={null} jobs_get(job_id: int, databricks_credentials: 'DatabricksCredentials') -> Dict[str, Any] ``` Retrieves the details for a single job. **Args:** * `job_id`: The canonical identifier of the job to retrieve information about. This field is required. * `databricks_credentials`: Credentials to use for authentication with Databricks. 
**Returns:** * Upon success, a dict of the response. * `job_id: int` * `creator_user_name: str` * `run_as_user_name: str` * `settings: "models.JobSettings"` * `created_time: int` #### API Endpoint: `/2.1/jobs/get` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Job was retrieved successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_list` ```python theme={null} jobs_list(databricks_credentials: 'DatabricksCredentials', limit: int = 20, offset: int = 0, name: Optional[str] = None, expand_tasks: bool = False) -> Dict[str, Any] ``` Retrieves a list of jobs. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `limit`: The number of jobs to return. This value must be greater than 0 and less than or equal to 25. The default value is 20. * `offset`: The offset of the first job to return, relative to the most recently created job. * `name`: A filter on the list based on the exact (case insensitive) job name. * `expand_tasks`: Whether to include task and cluster details in the response. **Returns:** * Upon success, a dict of the response. * `jobs: List["models.Job"]` * `has_more: bool` #### API Endpoint: `/2.1/jobs/list` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | List of jobs was retrieved successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. 
| ### `jobs_reset` ```python theme={null} jobs_reset(databricks_credentials: 'DatabricksCredentials', job_id: Optional[int] = None, new_settings: 'models.JobSettings' = None) -> Dict[str, Any] ``` Overwrites all the settings for a specific job. Use the Update endpoint to update job settings partially. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `job_id`: The canonical identifier of the job to reset. This field is required, e.g. `11223344`. * `new_settings`: The new settings of the job. These settings completely replace the old settings. Changes to the field `JobSettings.timeout_seconds` are applied to active runs. Changes to other fields are applied to future runs only. Key-values: * name: An optional name for the job, e.g. `A multitask job`. * tags: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g. ``` {"cost-center"\: "engineering", "team"\: "jobs"} ``` * tasks: A list of task specifications to be executed by this job, e.g. 
``` [ { "task_key"\: "Sessionize", "description"\: "Extracts session data from events", "depends_on"\: [], "existing_cluster_id"\: "0923-164208-meows279", "spark_jar_task"\: { "main_class_name"\: "com.databricks.Sessionize", "parameters"\: [ "--data", "dbfs\:/path/to/data.json", ], }, "libraries"\: [ {"jar"\: "dbfs\:/mnt/databricks/Sessionize.jar"} ], "timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, { "task_key"\: "Orders_Ingest", "description"\: "Ingests order data", "depends_on"\: [], "job_cluster_key"\: "auto_scaling_cluster", "spark_jar_task"\: { "main_class_name"\: "com.databricks.OrdersIngest", "parameters"\: [ "--data", "dbfs\:/path/to/order-data.json", ], }, "libraries"\: [ {"jar"\: "dbfs\:/mnt/databricks/OrderIngest.jar"} ], "timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, { "task_key"\: "Match", "description"\: "Matches orders with user sessions", "depends_on"\: [ {"task_key"\: "Orders_Ingest"}, {"task_key"\: "Sessionize"}, ], "new_cluster"\: { "spark_version"\: "7.3.x-scala2.12", "node_type_id"\: "i3.xlarge", "spark_conf"\: {"spark.speculation"\: True}, "aws_attributes"\: { "availability"\: "SPOT", "zone_id"\: "us-west-2a", }, "autoscale"\: { "min_workers"\: 2, "max_workers"\: 16, }, }, "notebook_task"\: { "notebook_path"\: "/Users/user.name@databricks.com/Match", "source"\: "WORKSPACE", "base_parameters"\: { "name"\: "John Doe", "age"\: "35", }, }, "timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, ] ``` * job\_clusters: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g. 
``` [ { "job_cluster_key"\: "auto_scaling_cluster", "new_cluster"\: { "spark_version"\: "7.3.x-scala2.12", "node_type_id"\: "i3.xlarge", "spark_conf"\: {"spark.speculation"\: True}, "aws_attributes"\: { "availability"\: "SPOT", "zone_id"\: "us-west-2a", }, "autoscale"\: { "min_workers"\: 2, "max_workers"\: 16, }, }, } ] ``` * email\_notifications: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. * webhook\_notifications: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. * timeout\_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. `86400`. * schedule: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`. * max\_concurrent\_runs: An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job’s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won’t kill any of the active runs. However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. `10`. * git\_source: This functionality is in Public Preview. 
An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. ``` { "git_url"\: "https\://github.com/databricks/databricks-cli", "git_branch"\: "main", "git_provider"\: "gitHub", } ``` * format: Specifies the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1 this value is always set to `'MULTI_TASK'`, e.g. `MULTI_TASK`. * job\_settings: Job-level parameter definitions. **Returns:** * Upon success, an empty dict. #### API Endpoint: `/2.1/jobs/reset` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Job was overwritten successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_run_now` ```python theme={null} jobs_run_now(databricks_credentials: 'DatabricksCredentials', job_id: Optional[int] = None, idempotency_token: Optional[str] = None, jar_params: Optional[List[str]] = None, notebook_params: Optional[Dict] = None, python_params: Optional[List[str]] = None, spark_submit_params: Optional[List[str]] = None, python_named_params: Optional[Dict] = None, pipeline_params: Optional[str] = None, sql_params: Optional[Dict] = None, dbt_commands: Optional[List] = None, job_parameters: Optional[Dict] = None) -> Dict[str, Any] ``` Run a job and return the `run_id` of the triggered run. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `job_id`: The ID of the job to be executed, e.g. `11223344`. * `idempotency_token`: An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. 
If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see [How to ensure idempotency for jobs](https://kb.databricks.com/jobs/jobs-idempotency.html), e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`. * `jar_params`: A list of parameters for jobs with Spark JAR tasks, for example `'jar_params'\: ['john doe', '35']`. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon `run-now`, it defaults to an empty list. jar\_params cannot be specified in conjunction with notebook\_params. The JSON representation of this field (for example `{'jar_params'\:['john doe','35']}`) cannot exceed 10,000 bytes. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs, e.g. ``` ["john", "doe", "35"] ``` * `notebook_params`: A map from keys to values for jobs with notebook task, for example `'notebook_params'\: {'name'\: 'john doe', 'age'\: '35'}`. The map is passed to the notebook and is accessible through the [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-widgets) function. If not specified upon `run-now`, the triggered run uses the job’s base parameters. notebook\_params cannot be specified in conjunction with jar\_params. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. The JSON representation of this field (for example `{'notebook_params'\:{'name'\:'john doe','age'\:'35'}}`) cannot exceed 10,000 bytes, e.g. 
``` {"name"\: "john doe", "age"\: "35"} ``` * `python_params`: A list of parameters for jobs with Python tasks, for example `'python_params'\: ['john doe', '35']`. The parameters are passed to the Python file as command-line parameters. If specified upon `run-now`, it would overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params'\:['john doe','35']}`) cannot exceed 10,000 bytes. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. Important These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g. ``` ["john doe", "35"] ``` * `spark_submit_params`: A list of parameters for jobs with a spark submit task, for example `'spark_submit_params'\: ['--class', 'org.apache.spark.examples.SparkPi']`. The parameters are passed to the spark-submit script as command-line parameters. If specified upon `run-now`, it would overwrite the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params'\:['john doe','35']}`) cannot exceed 10,000 bytes. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. Important These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese, Japanese kanjis, and emojis, e.g. ``` ["--class", "org.apache.spark.examples.SparkPi"] ``` * `python_named_params`: A map from keys to values for jobs with Python wheel task, for example `'python_named_params'\: {'name'\: 'task', 'data'\: 'dbfs\:/path/to/data.json'}`, e.g. 
``` {"name"\: "task", "data"\: "dbfs\:/path/to/data.json"} ``` * `pipeline_params`: * `sql_params`: A map from keys to values for SQL tasks, for example `'sql_params'\: {'name'\: 'john doe', 'age'\: '35'}`. The SQL alert task does not support custom parameters, e.g. ``` {"name"\: "john doe", "age"\: "35"} ``` * `dbt_commands`: An array of commands to execute for jobs with the dbt task, for example `'dbt_commands'\: ['dbt deps', 'dbt seed', 'dbt run']`, e.g. ``` ["dbt deps", "dbt seed", "dbt run"] ``` * `job_parameters`: A map from keys to values for job-level parameters used in the run, for example `'job_parameters'\: {'param'\: 'overriding_val'}`, e.g. ``` {"param"\: "overriding_val"} ``` **Returns:** * Upon success, a dict of the response. * `run_id: int` * `number_in_job: int` #### API Endpoint: `/2.1/jobs/run-now` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Run was started successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_cancel` ```python theme={null} jobs_runs_cancel(databricks_credentials: 'DatabricksCredentials', run_id: Optional[int] = None) -> Dict[str, Any] ``` Cancels a job run. The run is canceled asynchronously, so it may still be running when this request completes. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `run_id`: This field is required, e.g. `455644833`. **Returns:** * Upon success, an empty dict. #### API Endpoint: `/2.1/jobs/runs/cancel` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Run was cancelled successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. 
| | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_cancel_all` ```python theme={null} jobs_runs_cancel_all(databricks_credentials: 'DatabricksCredentials', job_id: Optional[int] = None) -> Dict[str, Any] ``` Cancels all active runs of a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `job_id`: The canonical identifier of the job to cancel all runs of. This field is required, e.g. `11223344`. **Returns:** * Upon success, an empty dict. #### API Endpoint: `/2.1/jobs/runs/cancel-all` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | All runs were cancelled successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_delete` ```python theme={null} jobs_runs_delete(databricks_credentials: 'DatabricksCredentials', run_id: Optional[int] = None) -> Dict[str, Any] ``` Deletes a non-active run. Returns an error if the run is active. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `run_id`: The canonical identifier of the run for which to retrieve the metadata, e.g. `455644833`. **Returns:** * Upon success, an empty dict. #### API Endpoint: `/2.1/jobs/runs/delete` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Run was deleted successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. 
| ### `jobs_runs_get` ```python theme={null} jobs_runs_get(run_id: int, databricks_credentials: 'DatabricksCredentials', include_history: Optional[bool] = None) -> Dict[str, Any] ``` Retrieve the metadata of a run. **Args:** * `run_id`: The canonical identifier of the run for which to retrieve the metadata. This field is required. * `databricks_credentials`: Credentials to use for authentication with Databricks. * `include_history`: Whether to include the repair history in the response. **Returns:** * Upon success, a dict of the response. * `job_id: int` * `run_id: int` * `number_in_job: int` * `creator_user_name: str` * `original_attempt_run_id: int` * `state: "models.RunState"` * `schedule: "models.CronSchedule"` * `tasks: List["models.RunTask"]` * `job_clusters: List["models.JobCluster"]` * `cluster_spec: "models.ClusterSpec"` * `cluster_instance: "models.ClusterInstance"` * `git_source: "models.GitSource"` * `overriding_parameters: "models.RunParameters"` * `start_time: int` * `setup_duration: int` * `execution_duration: int` * `cleanup_duration: int` * `end_time: int` * `trigger: "models.TriggerType"` * `run_name: str` * `run_page_url: str` * `run_type: "models.RunType"` * `attempt_number: int` * `repair_history: List["models.RepairHistoryItem"]` * `job_parameters: List["models.RunJobParameter"]` #### API Endpoint: `/2.1/jobs/runs/get` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Run was retrieved successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_get_output` ```python theme={null} jobs_runs_get_output(run_id: int, databricks_credentials: 'DatabricksCredentials') -> Dict[str, Any] ``` Retrieve the output and metadata of a single task run. 
When a notebook task returns a value through the dbutils.notebook.exit() call, you can use this endpoint to retrieve that value. Databricks restricts this API to return the first 5 MB of the output. To return a larger result, you can store job results in a cloud storage service. This endpoint validates that the run\_id parameter is valid and returns an HTTP status code 400 if the run\_id parameter is invalid. Runs are automatically removed after 60 days. If you want to reference them beyond 60 days, you must save old run results before they expire. To export using the UI, see Export job run results. To export using the Jobs API, see Runs export. **Args:** * `run_id`: The canonical identifier for the run. This field is required. * `databricks_credentials`: Credentials to use for authentication with Databricks. **Returns:** * Upon success, a dict of the response. * `notebook_output: "models.NotebookOutput"` * `sql_output: "models.SqlOutput"` * `dbt_output: "models.DbtOutput"` * `logs: str` * `logs_truncated: bool` * `error: str` * `error_trace: str` * `metadata: "models.Run"` #### API Endpoint: `/2.1/jobs/runs/get-output` #### API Responses: | Response | Description | | -------- | ------------------------------------------------------------ | | 200 | Run output was retrieved successfully. | | 400 | A job run with multiple tasks was provided. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_list` ```python theme={null} jobs_runs_list(databricks_credentials: 'DatabricksCredentials', active_only: bool = False, completed_only: bool = False, job_id: Optional[int] = None, offset: int = 0, limit: int = 25, run_type: Optional[str] = None, expand_tasks: bool = False, start_time_from: Optional[int] = None, start_time_to: Optional[int] = None) -> Dict[str, Any] ``` List runs in descending order by start time. 
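The `offset`, `limit`, and `has_more` fields described below lend themselves to a simple pagination loop. A minimal sketch, assuming a page-fetching callable that returns the same shape of dict as `jobs_runs_list` (the `collect_all_runs` helper and the `fake_jobs_runs_list` stub are hypothetical, for illustration only):

```python
from typing import Any, Callable, Dict, List


def collect_all_runs(
    fetch_page: Callable[[int, int], Dict[str, Any]], limit: int = 25
) -> List[Dict[str, Any]]:
    """Accumulate pages of runs until `has_more` is false."""
    runs: List[Dict[str, Any]] = []
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        runs.extend(page.get("runs", []))
        if not page.get("has_more", False):
            return runs
        offset += limit


# Hypothetical stand-in for jobs_runs_list, returning the same response shape.
def fake_jobs_runs_list(offset: int, limit: int) -> Dict[str, Any]:
    all_runs = [{"run_id": i} for i in range(60)]
    return {
        "runs": all_runs[offset : offset + limit],
        "has_more": offset + limit < len(all_runs),
    }
```

In a real flow, `fetch_page` would wrap a call along the lines of `jobs_runs_list(databricks_credentials=creds, offset=offset, limit=limit)`.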
**Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `active_only`: If active\_only is `true`, only active runs are included in the results; otherwise, lists both active and completed runs. An active run is a run in the `PENDING`, `RUNNING`, or `TERMINATING` state. This field cannot be `true` when completed\_only is `true`. * `completed_only`: If completed\_only is `true`, only completed runs are included in the results; otherwise, lists both active and completed runs. This field cannot be `true` when active\_only is `true`. * `job_id`: The job for which to list runs. If omitted, the Jobs service lists runs from all jobs. * `offset`: The offset of the first run to return, relative to the most recent run. * `limit`: The number of runs to return. This value must be greater than 0 and less than or equal to 25. The default value is 25. If a request specifies a limit of 0, the service instead uses the maximum limit. * `run_type`: The type of runs to return. For a description of run types, see [Run](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsGet). * `expand_tasks`: Whether to include task and cluster details in the response. * `start_time_from`: Show runs that started *at or after* this value. The value must be a UTC timestamp in milliseconds. Can be combined with *start\_time\_to* to filter by a time range. * `start_time_to`: Show runs that started *at or before* this value. The value must be a UTC timestamp in milliseconds. Can be combined with *start\_time\_from* to filter by a time range. **Returns:** * Upon success, a dict of the response. * `runs: List["models.Run"]` * `has_more: bool` #### API Endpoint: `/2.1/jobs/runs/list` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | List of runs was retrieved successfully. | | 400 | The request was malformed. See JSON response for error details. 
| | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_repair` ```python theme={null} jobs_runs_repair(databricks_credentials: 'DatabricksCredentials', run_id: Optional[int] = None, rerun_tasks: Optional[List[str]] = None, latest_repair_id: Optional[int] = None, rerun_all_failed_tasks: bool = False, jar_params: Optional[List[str]] = None, notebook_params: Optional[Dict] = None, python_params: Optional[List[str]] = None, spark_submit_params: Optional[List[str]] = None, python_named_params: Optional[Dict] = None, pipeline_params: Optional[str] = None, sql_params: Optional[Dict] = None, dbt_commands: Optional[List] = None, job_parameters: Optional[Dict] = None) -> Dict[str, Any] ``` Re-run one or more tasks. Tasks are re-run as part of the original job run, use the current job and task settings, and can be viewed in the history for the original job run. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `run_id`: The job run ID of the run to repair. The run must not be in progress, e.g. `455644833`. * `rerun_tasks`: The task keys of the task runs to repair, e.g. ``` ["task0", "task1"] ``` * `latest_repair_id`: The ID of the latest repair. This parameter is not required when repairing a run for the first time, but must be provided on subsequent requests to repair the same run, e.g. `734650698524280`. * `rerun_all_failed_tasks`: If true, repair all failed tasks. Only one of rerun\_tasks or rerun\_all\_failed\_tasks can be used. * `jar_params`: A list of parameters for jobs with Spark JAR tasks, for example `'jar_params'\: ['john doe', '35']`. The parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon `run-now`, it defaults to an empty list. jar\_params cannot be specified in conjunction with notebook\_params. 
The JSON representation of this field (for example `{'jar_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs, e.g. ``` ["john", "doe", "35"] ``` * `notebook_params`: A map from keys to values for jobs with notebook task, for example `'notebook_params': {'name': 'john doe', 'age': '35'}`. The map is passed to the notebook and is accessible through the [dbutils.widgets.get](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-widgets) function. If not specified upon `run-now`, the triggered run uses the job’s base parameters. notebook\_params cannot be specified in conjunction with jar\_params. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. The JSON representation of this field (for example `{'notebook_params':{'name':'john doe','age':'35'}}`) cannot exceed 10,000 bytes, e.g. ``` {"name": "john doe", "age": "35"} ``` * `python_params`: A list of parameters for jobs with Python tasks, for example `'python_params': ['john doe', '35']`. The parameters are passed to the Python file as command-line parameters. If specified upon `run-now`, it overwrites the parameters specified in the job setting. The JSON representation of this field (for example `{'python_params':['john doe','35']}`) cannot exceed 10,000 bytes. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. Important: These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error.
Examples of invalid, non-ASCII characters are Chinese characters, Japanese kanji, and emojis, e.g. ``` ["john doe", "35"] ``` * `spark_submit_params`: A list of parameters for jobs with spark submit task, for example `'spark_submit_params': ['--class', 'org.apache.spark.examples.SparkPi']`. The parameters are passed to the spark-submit script as command-line parameters. If specified upon `run-now`, it overwrites the parameters specified in the job setting. The JSON representation of this field (for example `{'spark_submit_params':['--class','org.apache.spark.examples.SparkPi']}`) cannot exceed 10,000 bytes. Use [Task parameter variables](https://docs.databricks.com/jobs.html#parameter-variables) to set parameters containing information about job runs. Important: These parameters accept only Latin characters (ASCII character set). Using non-ASCII characters returns an error. Examples of invalid, non-ASCII characters are Chinese characters, Japanese kanji, and emojis, e.g. ``` ["--class", "org.apache.spark.examples.SparkPi"] ``` * `python_named_params`: A map from keys to values for jobs with Python wheel task, for example `'python_named_params': {'name': 'task', 'data': 'dbfs:/path/to/data.json'}`, e.g. ``` {"name": "task", "data": "dbfs:/path/to/data.json"} ``` * `pipeline_params`: * `sql_params`: A map from keys to values for SQL tasks, for example `'sql_params': {'name': 'john doe', 'age': '35'}`. The SQL alert task does not support custom parameters, e.g. ``` {"name": "john doe", "age": "35"} ``` * `dbt_commands`: An array of commands to execute for jobs with the dbt task, for example `'dbt_commands': ['dbt deps', 'dbt seed', 'dbt run']`, e.g. ``` ["dbt deps", "dbt seed", "dbt run"] ``` * `job_parameters`: A map from keys to values for job-level parameters used in the run, for example `'job_parameters': {'param': 'overriding_val'}`, e.g. ``` {"param": "overriding_val"} ``` **Returns:** * Upon success, a dict of the response.
* `repair_id: int` #### API Endpoint: `/2.1/jobs/runs/repair` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Run repair was initiated. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_runs_submit` ```python theme={null} jobs_runs_submit(databricks_credentials: 'DatabricksCredentials', tasks: Optional[List['models.RunSubmitTaskSettings']] = None, run_name: Optional[str] = None, webhook_notifications: 'models.WebhookNotifications' = None, git_source: 'models.GitSource' = None, timeout_seconds: Optional[int] = None, idempotency_token: Optional[str] = None, access_control_list: Optional[List['models.AccessControlRequest']] = None) -> Dict[str, Any] ``` Submit a one-time run. This endpoint allows you to submit a workload directly without creating a job. Use the `jobs/runs/get` API to check the run state after the job is submitted. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `tasks`: , e.g. 
``` [ { "task_key": "Sessionize", "description": "Extracts session data from events", "depends_on": [], "existing_cluster_id": "0923-164208-meows279", "spark_jar_task": { "main_class_name": "com.databricks.Sessionize", "parameters": ["--data", "dbfs:/path/to/data.json"], }, "libraries": [{"jar": "dbfs:/mnt/databricks/Sessionize.jar"}], "timeout_seconds": 86400, }, { "task_key": "Orders_Ingest", "description": "Ingests order data", "depends_on": [], "existing_cluster_id": "0923-164208-meows279", "spark_jar_task": { "main_class_name": "com.databricks.OrdersIngest", "parameters": ["--data", "dbfs:/path/to/order-data.json"], }, "libraries": [{"jar": "dbfs:/mnt/databricks/OrderIngest.jar"}], "timeout_seconds": 86400, }, { "task_key": "Match", "description": "Matches orders with user sessions", "depends_on": [ {"task_key": "Orders_Ingest"}, {"task_key": "Sessionize"}, ], "new_cluster": { "spark_version": "7.3.x-scala2.12", "node_type_id": "i3.xlarge", "spark_conf": {"spark.speculation": True}, "aws_attributes": { "availability": "SPOT", "zone_id": "us-west-2a", }, "autoscale": {"min_workers": 2, "max_workers": 16}, }, "notebook_task": { "notebook_path": "/Users/user.name@databricks.com/Match", "source": "WORKSPACE", "base_parameters": {"name": "John Doe", "age": "35"}, }, "timeout_seconds": 86400, }, ] ``` * `run_name`: An optional name for the run. The default value is `Untitled`, e.g. `A multitask job run`. * `webhook_notifications`: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. Key-values: * on\_start: An optional list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified for the `on_start` property, e.g.
``` [ {"id": "03dd86e4-57ef-4818-a950-78e41a1d71ab"}, {"id": "0481e838-0a59-4eff-9541-a4ca6f149574"}, ] ``` * on\_success: An optional list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property, e.g. ``` [{"id": "03dd86e4-57ef-4818-a950-78e41a1d71ab"}] ``` * on\_failure: An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the `on_failure` property, e.g. ``` [{"id": "0481e838-0a59-4eff-9541-a4ca6f149574"}] ``` * `git_source`: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. ``` { "git_url": "https://github.com/databricks/databricks-cli", "git_branch": "main", "git_provider": "gitHub", } ``` Key-values: * git\_url: URL of the repository to be cloned by this job. The maximum length is 300 characters, e.g. `https://github.com/databricks/databricks-cli`. * git\_provider: Unique identifier of the service used to host the Git repository. The value is case insensitive, e.g. `github`. * git\_branch: Name of the branch to be checked out and used by this job. This field cannot be specified in conjunction with git\_tag or git\_commit. The maximum length is 255 characters, e.g. `main`. * git\_tag: Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git\_branch or git\_commit. The maximum length is 255 characters, e.g. `release-1.0.0`. * git\_commit: Commit to be checked out and used by this job. This field cannot be specified in conjunction with git\_branch or git\_tag. The maximum length is 64 characters, e.g. `e0056d01`. * git\_snapshot: Read-only state of the remote repository at the time the job was run. This field is only included on job runs. * `timeout_seconds`: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g.
`86400`. * `idempotency_token`: An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see [How to ensure idempotency for jobs](https://kb.databricks.com/jobs/jobs-idempotency.html), e.g. `8f018174-4792-40d5-bcbc-3e6a527352c8`. * `access_control_list`: List of permissions to set on the job. **Returns:** * Upon success, a dict of the response. * `run_id: int` #### API Endpoint: `/2.1/jobs/runs/submit` #### API Responses: | Response | Description | | --- | --- | | 200 | Run was created and started successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | ### `jobs_update` ```python theme={null} jobs_update(databricks_credentials: 'DatabricksCredentials', job_id: Optional[int] = None, new_settings: 'models.JobSettings' = None, fields_to_remove: Optional[List[str]] = None) -> Dict[str, Any] ``` Add, update, or remove specific settings of an existing job. Use the Reset endpoint to overwrite all job settings. **Args:** * `databricks_credentials`: Credentials to use for authentication with Databricks. * `job_id`: The canonical identifier of the job to update. This field is required, e.g. `11223344`. * `new_settings`: The new settings for the job. Any top-level fields specified in `new_settings` are completely replaced. Partially updating nested fields is not supported. Changes to the field `JobSettings.timeout_seconds` are applied to active runs.
Changes to other fields are applied to future runs only. Key-values: * name: An optional name for the job, e.g. `A multitask job`. * tags: A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags. A maximum of 25 tags can be added to the job, e.g. ``` {"cost-center": "engineering", "team": "jobs"} ``` * tasks: A list of task specifications to be executed by this job, e.g. ``` [ { "task_key": "Sessionize", "description": "Extracts session data from events", "depends_on": [], "existing_cluster_id": "0923-164208-meows279", "spark_jar_task": { "main_class_name": "com.databricks.Sessionize", "parameters": [ "--data", "dbfs:/path/to/data.json", ], }, "libraries": [ {"jar": "dbfs:/mnt/databricks/Sessionize.jar"} ], "timeout_seconds": 86400, "max_retries": 3, "min_retry_interval_millis": 2000, "retry_on_timeout": False, }, { "task_key": "Orders_Ingest", "description": "Ingests order data", "depends_on": [], "job_cluster_key": "auto_scaling_cluster", "spark_jar_task": { "main_class_name": "com.databricks.OrdersIngest", "parameters": [ "--data", "dbfs:/path/to/order-data.json", ], }, "libraries": [ {"jar": "dbfs:/mnt/databricks/OrderIngest.jar"} ], "timeout_seconds": 86400, "max_retries": 3, "min_retry_interval_millis": 2000, "retry_on_timeout": False, }, { "task_key": "Match", "description": "Matches orders with user sessions", "depends_on": [ {"task_key": "Orders_Ingest"}, {"task_key": "Sessionize"}, ], "new_cluster": { "spark_version": "7.3.x-scala2.12", "node_type_id": "i3.xlarge", "spark_conf": {"spark.speculation": True}, "aws_attributes": { "availability": "SPOT", "zone_id": "us-west-2a", }, "autoscale": { "min_workers": 2, "max_workers": 16, }, }, "notebook_task": { "notebook_path": "/Users/user.name@databricks.com/Match", "source": "WORKSPACE", "base_parameters": { "name": "John Doe", "age": "35", }, },
"timeout_seconds"\: 86400, "max_retries"\: 3, "min_retry_interval_millis"\: 2000, "retry_on_timeout"\: False, }, ] ``` * job\_clusters: A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings, e.g. ``` [ { "job_cluster_key"\: "auto_scaling_cluster", "new_cluster"\: { "spark_version"\: "7.3.x-scala2.12", "node_type_id"\: "i3.xlarge", "spark_conf"\: {"spark.speculation"\: True}, "aws_attributes"\: { "availability"\: "SPOT", "zone_id"\: "us-west-2a", }, "autoscale"\: { "min_workers"\: 2, "max_workers"\: 16, }, }, } ] ``` * email\_notifications: An optional set of email addresses that is notified when runs of this job begin or complete as well as when this job is deleted. The default behavior is to not send any emails. * webhook\_notifications: A collection of system notification IDs to notify when runs of this job begin or complete. The default behavior is to not send any system notifications. * timeout\_seconds: An optional timeout applied to each run of this job. The default behavior is to have no timeout, e.g. `86400`. * schedule: An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`. * max\_concurrent\_runs: An optional maximum allowed number of concurrent runs of the job. Set this value if you want to be able to execute multiple runs of the same job concurrently. This is useful for example if you trigger your job on a frequent schedule and want to allow consecutive runs to overlap with each other, or if you want to trigger multiple runs which differ by their input parameters. This setting affects only new runs. For example, suppose the job’s concurrency is 4 and there are 4 concurrent active runs. Then setting the concurrency to 3 won’t kill any of the active runs. 
However, from then on, new runs are skipped unless there are fewer than 3 active runs. This value cannot exceed 1000. Setting this value to 0 causes all new runs to be skipped. The default behavior is to allow only 1 concurrent run, e.g. `10`. * git\_source: This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks, e.g. ``` { "git_url": "https://github.com/databricks/databricks-cli", "git_branch": "main", "git_provider": "gitHub", } ``` * format: Indicates the format of the job. This field is ignored in Create/Update/Reset calls. When using the Jobs API 2.1 this value is always set to `'MULTI_TASK'`, e.g. `MULTI_TASK`. * parameters: Job-level parameter definitions. * `fields_to_remove`: Remove top-level fields in the job settings. Removing nested fields is not supported. This field is optional, e.g. ``` ["libraries", "schedule"] ``` **Returns:** * Upon success, an empty dict. #### API Endpoint: `/2.1/jobs/update` #### API Responses: | Response | Description | | -------- | --------------------------------------------------------------- | | 200 | Job was updated successfully. | | 400 | The request was malformed. See JSON response for error details. | | 401 | The request was unauthorized. | | 500 | The request was not handled correctly due to a server error. | # __init__ Source: https://docs.prefect.io/integrations/prefect-databricks/api-ref/prefect_databricks-models-__init__ # `prefect_databricks.models` *This module is empty or contains only private/internal implementations.* # jobs Source: https://docs.prefect.io/integrations/prefect-databricks/api-ref/prefect_databricks-models-jobs # `prefect_databricks.models.jobs` ## Classes ### `AutoScale` See source code for the fields' description. ### `AwsAttributes` See source code for the fields' description. ### `CanManage` Permission to manage the job.
### `CanManageRun` Permission to run and/or manage runs for the job. ### `CanView` Permission to view the settings of the job. ### `ClusterCloudProviderNodeStatus` * NotEnabledOnSubscription: Node type not available for subscription. * NotAvailableInRegion: Node type not available in region. ### `ClusterEventType` * `CREATING`: Indicates that the cluster is being created. * `DID_NOT_EXPAND_DISK`: Indicates that a disk is low on space, but adding disks would put it over the max capacity. * `EXPANDED_DISK`: Indicates that a disk was low on space and the disks were expanded. * `FAILED_TO_EXPAND_DISK`: Indicates that a disk was low on space and disk space could not be expanded. * `INIT_SCRIPTS_STARTING`: Indicates that the cluster scoped init script has started. * `INIT_SCRIPTS_FINISHED`: Indicates that the cluster scoped init script has finished. * `STARTING`: Indicates that the cluster is being started. * `RESTARTING`: Indicates that the cluster is being restarted. * `TERMINATING`: Indicates that the cluster is being terminated. * `EDITED`: Indicates that the cluster has been edited. * `RUNNING`: Indicates the cluster has finished being created. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired. * `RESIZING`: Indicates a change in the target size of the cluster (upsize or downsize). * `UPSIZE_COMPLETED`: Indicates that nodes finished being added to the cluster. Includes the number of nodes in the cluster and a failure reason if some nodes could not be acquired. * `NODES_LOST`: Indicates that some nodes were lost from the cluster. * `DRIVER_HEALTHY`: Indicates that the driver is healthy and the cluster is ready for use. * `DRIVER_UNAVAILABLE`: Indicates that the driver is unavailable. * `SPARK_EXCEPTION`: Indicates that a Spark exception was thrown from the driver. * `DRIVER_NOT_RESPONDING`: Indicates that the driver is up but is not responsive, likely due to GC.
* `DBFS_DOWN`: Indicates that the driver is up but DBFS is down. * `METASTORE_DOWN`: Indicates that the driver is up but the metastore is down. * `NODE_BLACKLISTED`: Indicates that a node is not allowed by Spark. * `PINNED`: Indicates that the cluster was pinned. * `UNPINNED`: Indicates that the cluster was unpinned. ### `ClusterInstance` See source code for the fields' description. ### `ClusterSize` See source code for the fields' description. ### `ClusterSource` * UI: Cluster created through the UI. * JOB: Cluster created by the Databricks job scheduler. * API: Cluster created through an API call. ### `ClusterState` * PENDING: Indicates that a cluster is in the process of being created. * RUNNING: Indicates that a cluster has been started and is ready for use. * RESTARTING: Indicates that a cluster is in the process of restarting. * RESIZING: Indicates that a cluster is in the process of adding or removing nodes. * TERMINATING: Indicates that a cluster is in the process of being destroyed. * TERMINATED: Indicates that a cluster has been successfully destroyed. * ERROR: This state is no longer used. It was used to indicate a cluster that failed to be created. `TERMINATING` and `TERMINATED` are used instead. * UNKNOWN: Indicates that a cluster is in an unknown state. A cluster should never be in this state. ### `ClusterTag` See source code for the fields' description. An object with key value pairs. The key length must be between 1 and 127 UTF-8 characters, inclusive. The value length must be less than or equal to 255 UTF-8 characters. For a list of all restrictions, see [AWS Tag Restrictions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions). ### `CronSchedule` See source code for the fields' description. ### `DbfsStorageInfo` See source code for the fields' description. ### `DbtOutput` See source code for the fields' description.
### `DbtTask` See source code for the fields' description. ### `DockerBasicAuth` See source code for the fields' description. ### `DockerImage` See source code for the fields' description. ### `Error` See source code for the fields' description. ### `FileStorageInfo` See source code for the fields' description. ### `GitSnapshot` See source code for the fields' description. Read-only state of the remote repository at the time the job was run. This field is only included on job runs. ### `GitSource` See source code for the fields' description. This functionality is in Public Preview. An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. ### `GitSource1` See source code for the fields' description. ### `GroupName` See source code for the fields' description. ### `IsOwner` Permission that represents ownership of the job. ### `JobEmailNotifications` See source code for the fields' description. ### `LibraryInstallStatus` * `PENDING`: No action has yet been taken to install the library. This state should be very short lived. * `RESOLVING`: Metadata necessary to install the library is being retrieved from the provided repository. For Jar, Egg, and Whl libraries, this step is a no-op. * `INSTALLING`: The library is actively being installed, either by adding resources to Spark or executing system commands inside the Spark nodes. * `INSTALLED`: The library has been successfully installed. * `SKIPPED`: Installation on a Databricks Runtime 7.0 or above cluster was skipped due to Scala version incompatibility. * `FAILED`: Some step in installation failed. More information can be found in the messages field. * `UNINSTALL_ON_RESTART`: The library has been marked for removal. Libraries can be removed only when clusters are restarted, so libraries that enter this state remain until the cluster is restarted. ### `ListOrder` * `DESC`: Descending order. * `ASC`: Ascending order.
### `RuntimeEngine` Decides which runtime engine to use, e.g. Standard vs. Photon. If unspecified, the runtime engine is inferred from spark\_version. ### `LogSyncStatus` See source code for the fields' description. ### `MavenLibrary` See source code for the fields' description. ### `NotebookOutput` See source code for the fields' description. ### `NotebookTask` See source code for the fields' description. ### `ParameterPair` See source code for the fields' description. An object with additional information about why a cluster was terminated. The object keys are one of `TerminationParameter` and the value is the termination information. ### `PermissionLevel` See source code for the fields' description. ### `PermissionLevelForGroup` See source code for the fields' description. ### `PipelineTask` See source code for the fields' description. ### `PoolClusterTerminationCode` * INSTANCE\_POOL\_MAX\_CAPACITY\_FAILURE: The pool max capacity has been reached. * INSTANCE\_POOL\_NOT\_FOUND\_FAILURE: The pool specified by the cluster is no longer active or doesn’t exist. ### `PythonPyPiLibrary` See source code for the fields' description. ### `PythonWheelTask` See source code for the fields' description. ### `RCranLibrary` See source code for the fields' description. ### `RepairRunInput` See source code for the fields' description. ### `ResizeCause` * `AUTOSCALE`: Automatically resized based on load. * `USER_REQUEST`: User requested a new size. * `AUTORECOVERY`: Autorecovery monitor resized the cluster after it lost a node. ### `RunLifeCycleState` * `PENDING`: The run has been triggered. If there is not already an active run of the same job, the cluster and execution context are being prepared. If there is already an active run of the same job, the run immediately transitions into the `SKIPPED` state without preparing any resources. * `RUNNING`: The task of this run is being executed.
* `TERMINATING`: The task of this run has completed, and the cluster and execution context are being cleaned up. * `TERMINATED`: The task of this run has completed, and the cluster and execution context have been cleaned up. This state is terminal. * `SKIPPED`: This run was aborted because a previous run of the same job was already active. This state is terminal. * `INTERNAL_ERROR`: An exceptional state that indicates a failure in the Jobs service, such as network failure over a long period. If a run on a new cluster ends in the `INTERNAL_ERROR` state, the Jobs service terminates the cluster as soon as possible. This state is terminal. * `BLOCKED`: The run is blocked on an upstream dependency. * `WAITING_FOR_RETRY`: The run is waiting for a retry. ### `RunNowInput` See source code for the fields' description. ### `PipelineParams` See source code for the fields' description. ### `RunParameters` See source code for the fields' description. ### `RunResultState` * `SUCCESS`: The task completed successfully. * `FAILED`: The task completed with an error. * `TIMEDOUT`: The run was stopped after reaching the timeout. * `CANCELED`: The run was canceled at user request. ### `RunState` See source code for the fields' description. The result and lifecycle state of the run. ### `RunType` The type of the run. * `JOB_RUN`: Normal job run. A run created with [Run now](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunNow). * `WORKFLOW_RUN`: Workflow run. A run created with [dbutils.notebook.run](https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-workflow). * `SUBMIT_RUN`: Submit run. A run created with [Run Submit](https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsSubmit). ### `S3StorageInfo` See source code for the fields' description. ### `ServicePrincipalName` See source code for the fields' description. ### `SparkConfPair` See source code for the fields' description. 
An arbitrary object where the object key is a configuration property name and the value is a configuration property value. ### `SparkEnvPair` See source code for the fields' description. An arbitrary object where the object key is an environment variable name and the value is an environment variable value. ### `SparkJarTask` See source code for the fields' description. ### `SparkNodeAwsAttributes` See source code for the fields' description. ### `SparkPythonTask` See source code for the fields' description. ### `SparkSubmitTask` See source code for the fields' description. ### `SparkVersion` See source code for the fields' description. ### `SqlOutputError` See source code for the fields' description. ### `SqlStatementOutput` See source code for the fields' description. ### `SqlTaskAlert` See source code for the fields' description. ### `SqlTaskDashboard` See source code for the fields' description. ### `SqlTaskQuery` See source code for the fields' description. ### `TaskDependency` See source code for the fields' description. ### `TaskDependencies` See source code for the fields' description. An optional array of objects specifying the dependency graph of the task. All tasks specified in this field must complete successfully before executing this task. The key is `task_key`, and the value is the name assigned to the dependent task. This field is required when a job consists of more than one task. ### `TaskDescription` See source code for the fields' description. ### `TaskKey` See source code for the fields' description. ### `TerminationCode` * USER\_REQUEST: A user terminated the cluster directly. Parameters should include a `username` field that indicates the specific user who terminated the cluster. * JOB\_FINISHED: The cluster was launched by a job, and terminated when the job completed. * INACTIVITY: The cluster was terminated since it was idle. * CLOUD\_PROVIDER\_SHUTDOWN: The instance that hosted the Spark driver was terminated by the cloud provider. 
In AWS, for example, AWS may retire instances and directly shut them down. Parameters should include an `aws_instance_state_reason` field indicating the AWS-provided reason why the instance was terminated. * COMMUNICATION\_LOST: Databricks lost connection to services on the driver instance. For example, this can happen when problems arise in cloud networking infrastructure, or when the instance itself becomes unhealthy. * CLOUD\_PROVIDER\_LAUNCH\_FAILURE: Databricks experienced a cloud provider failure when requesting instances to launch clusters. For example, AWS limits the number of running instances and EBS volumes. If you ask Databricks to launch a cluster that requires instances or EBS volumes that exceed your AWS limit, the cluster fails with this status code. Parameters should include one of `aws_api_error_code`, `aws_instance_state_reason`, or `aws_spot_request_status` to indicate the AWS-provided reason why Databricks could not request the required instances for the cluster. * SPARK\_STARTUP\_FAILURE: The cluster failed to initialize. Possible reasons may include failure to create the environment for Spark or issues launching the Spark master and worker processes. * INVALID\_ARGUMENT: Cannot launch the cluster because the user specified an invalid argument. For example, the user might specify an invalid runtime version for the cluster. * UNEXPECTED\_LAUNCH\_FAILURE: While launching this cluster, Databricks failed to complete critical setup steps, terminating the cluster. * INTERNAL\_ERROR: Databricks encountered an unexpected error that forced the running cluster to be terminated. Contact Databricks support for additional details. * SPARK\_ERROR: The Spark driver failed to start. Possible reasons may include incompatible libraries and initialization scripts that corrupted the Spark container. * METASTORE\_COMPONENT\_UNHEALTHY: The cluster failed to start because the external metastore could not be reached. 
Refer to [Troubleshooting](https://docs.databricks.com/data/metastores/external-hive-metastore.html#troubleshooting). * DBFS\_COMPONENT\_UNHEALTHY: The cluster failed to start because Databricks File System (DBFS) could not be reached. * DRIVER\_UNREACHABLE: Databricks was not able to access the Spark driver, because it was not reachable. * DRIVER\_UNRESPONSIVE: Databricks was not able to access the Spark driver, because it was unresponsive. * INSTANCE\_UNREACHABLE: Databricks was not able to access instances in order to start the cluster. This can be a transient networking issue. If the problem persists, this usually indicates a networking environment misconfiguration. * CONTAINER\_LAUNCH\_FAILURE: Databricks was unable to launch containers on worker nodes for the cluster. Have your admin check your network configuration. * INSTANCE\_POOL\_CLUSTER\_FAILURE: Pool-backed cluster-specific failure. Refer to [Pools](https://docs.databricks.com/clusters/instance-pools/index.html) for details. * REQUEST\_REJECTED: Databricks cannot handle the request at this moment. Try again later and contact Databricks if the problem persists. * INIT\_SCRIPT\_FAILURE: Databricks cannot load and run a cluster-scoped init script on one of the cluster’s nodes, or the init script terminates with a non-zero exit code. Refer to [Init script logs](https://docs.databricks.com/clusters/init-scripts.html#init-script-log). * TRIAL\_EXPIRED: The Databricks trial subscription expired. ### `TerminationParameter` See source code for the fields' description. ### `TerminationType` * SUCCESS: Termination succeeded. * CLIENT\_ERROR: Non-retriable. Client must fix parameters before reattempting the cluster creation. * SERVICE\_FAULT: Databricks service issue. Client can retry. * CLOUD\_FAILURE: Cloud provider infrastructure issue. Client can retry after the underlying issue is resolved. ### `TriggerType` * `PERIODIC`: Schedules that periodically trigger runs, such as a cron scheduler.
* `ONE_TIME`: One-time triggers that fire a single run. This occurs when you trigger a single run on demand through the UI or the API. * `RETRY`: Indicates a run that is triggered as a retry of a previously failed run. This occurs when you request to re-run the job in case of failures. ### `UserName` See source code for the fields' description. ### `ViewType` * `NOTEBOOK`: Notebook view item. * `DASHBOARD`: Dashboard view item. ### `ViewsToExport` * `CODE`: Code view of the notebook. * `DASHBOARDS`: All dashboard views of the notebook. * `ALL`: All views of the notebook. ### `OnFailureItem` See source code for the fields' description. ### `OnStartItem` See source code for the fields' description. ### `OnSucces` See source code for the fields' description. ### `WebhookNotifications` See source code for the fields' description. ### `AccessControlRequestForGroup` See source code for the fields' description. ### `AccessControlRequestForServicePrincipal` See source code for the fields' description. ### `AccessControlRequestForUser` See source code for the fields' description. ### `ClusterCloudProviderNodeInfo` See source code for the fields' description. ### `ClusterLogConf` See source code for the fields' description. ### `InitScriptInfo` See source code for the fields' description. ### `Library` See source code for the fields' description. ### `LibraryFullStatus` See source code for the fields' description. ### `NewCluster` See source code for the fields' description. ### `NodeType` See source code for the fields' description. ### `RepairHistoryItem` See source code for the fields' description. ### `SparkNode` See source code for the fields' description. ### `SqlAlertOutput` See source code for the fields' description. ### `SqlDashboardWidgetOutput` See source code for the fields' description. ### `SqlQueryOutput` See source code for the fields' description. ### `SqlTask` See source code for the fields' description. 
### `TerminationReason` See source code for the fields' description. ### `ViewItem` See source code for the fields' description. ### `AccessControlRequest` See source code for the fields' description. ### `ClusterAttributes` See source code for the fields' description. ### `ClusterInfo` See source code for the fields' description. ### `ClusterLibraryStatuses` See source code for the fields' description. ### `ClusterSpec` See source code for the fields' description. ### `EventDetails` See source code for the fields' description. ### `JobCluster` See source code for the fields' description. ### `JobTask` See source code for the fields' description. ### `JobTaskSettings` See source code for the fields' description. ### `RepairHistory` See source code for the fields' description. ### `RunSubmitTaskSettings` See source code for the fields' description. ### `RunTask` See source code for the fields' description. ### `SqlDashboardOutput` See source code for the fields' description. ### `SqlOutput` See source code for the fields' description. ### `AccessControlList` See source code for the fields' description. ### `ClusterEvent` See source code for the fields' description. ### `JobParameter` See source code for the fields' description. ### `JobSettings` See source code for the fields' description. ### `Run` See source code for the fields' description. ### `RunSubmitSettings` See source code for the fields' description. ### `RunJobParameter` See source code for the fields' description. ### `Job` See source code for the fields' description. # rest Source: https://docs.prefect.io/integrations/prefect-databricks/api-ref/prefect_databricks-rest # `prefect_databricks.rest` This is a module containing generic REST tasks. ## Functions ### `serialize_model` ```python theme={null} serialize_model(obj: Any) -> Any ``` Recursively serializes `pydantic.BaseModel` into JSON; returns original obj if not a `BaseModel`. **Args:** * `obj`: Input object to serialize. 
**Returns:** * Serialized version of object. ### `strip_kwargs` ```python theme={null} strip_kwargs(**kwargs: Dict) -> Dict ``` Recursively drops keyword arguments if value is None, and serializes any `pydantic.BaseModel` types. **Args:** * `**kwargs`: Input keyword arguments. **Returns:** * Stripped version of kwargs. ### `execute_endpoint` ```python theme={null} execute_endpoint(endpoint: str, databricks_credentials: 'DatabricksCredentials', http_method: HTTPMethod = HTTPMethod.GET, params: Dict[str, Any] = None, json: Dict[str, Any] = None, **kwargs: Dict[str, Any]) -> httpx.Response ``` Generic function for executing REST endpoints. **Args:** * `endpoint`: The endpoint route. * `databricks_credentials`: Credentials to use for authentication with Databricks. * `http_method`: Either GET, POST, PUT, DELETE, or PATCH. * `params`: URL query parameters in the request. * `json`: JSON serializable object to include in the body of the request. * `**kwargs`: Additional keyword arguments to pass. **Returns:** * The httpx.Response from interacting with the endpoint. **Examples:** Lists jobs on the Databricks instance. ```python theme={null} from prefect import flow from prefect_databricks import DatabricksCredentials from prefect_databricks.rest import execute_endpoint @flow def example_execute_endpoint_flow(): endpoint = "/2.1/jobs/list" databricks_credentials = DatabricksCredentials.load("my-block") params = { "limit": 5, "offset": None, "expand_tasks": True, } response = execute_endpoint( endpoint, databricks_credentials, params=params ) return response.json() ``` ## Classes ### `HTTPMethod` Available HTTP request methods. # prefect-databricks Source: https://docs.prefect.io/integrations/prefect-databricks/index Prefect integrations for interacting with Databricks. ## Getting started ### Prerequisites * A [Databricks account](https://databricks.com/) and the necessary permissions to access desired services. 
### Install `prefect-databricks` The following command will install a version of `prefect-databricks` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[databricks]" ``` Upgrade to the latest versions of `prefect` and `prefect-databricks`: ```bash theme={null} pip install -U "prefect[databricks]" ``` ### List jobs on the Databricks instance ```python theme={null} from prefect import flow from prefect_databricks import DatabricksCredentials from prefect_databricks.jobs import jobs_list @flow def example_execute_endpoint_flow(): databricks_credentials = DatabricksCredentials.load("my-block") jobs = jobs_list( databricks_credentials, limit=5 ) return jobs if __name__ == "__main__": example_execute_endpoint_flow() ``` ### Use `with_options` to customize options on any existing task or flow ```python theme={null} custom_example_execute_endpoint_flow = example_execute_endpoint_flow.with_options( name="My custom flow name", retries=2, retry_delay_seconds=10, ) ``` ### Launch a new cluster and run a Databricks notebook Notebook named `example.ipynb` on Databricks which accepts a name parameter: ```python theme={null} name = dbutils.widgets.get("name") message = f"Don't worry {name}, I got your request! Welcome to prefect-databricks!" 
print(message) ``` Prefect flow that launches a new cluster to run `example.ipynb`: ```python theme={null} from prefect import flow from prefect_databricks import DatabricksCredentials from prefect_databricks.jobs import jobs_runs_submit from prefect_databricks.models.jobs import ( AutoScale, AwsAttributes, JobTaskSettings, NotebookTask, NewCluster, ) @flow def jobs_runs_submit_flow(notebook_path, **base_parameters): databricks_credentials = DatabricksCredentials.load("my-block") # specify new cluster settings aws_attributes = AwsAttributes( availability="SPOT", zone_id="us-west-2a", ebs_volume_type="GENERAL_PURPOSE_SSD", ebs_volume_count=3, ebs_volume_size=100, ) auto_scale = AutoScale(min_workers=1, max_workers=2) new_cluster = NewCluster( aws_attributes=aws_attributes, autoscale=auto_scale, node_type_id="m4.large", spark_version="10.4.x-scala2.12", spark_conf={"spark.speculation": True}, ) # specify notebook to use and parameters to pass notebook_task = NotebookTask( notebook_path=notebook_path, base_parameters=base_parameters, ) # compile job task settings job_task_settings = JobTaskSettings( new_cluster=new_cluster, notebook_task=notebook_task, task_key="prefect-task" ) run = jobs_runs_submit( databricks_credentials=databricks_credentials, run_name="prefect-job", tasks=[job_task_settings] ) return run if __name__ == "__main__": jobs_runs_submit_flow("/Users/username@gmail.com/example.ipynb", name="Marvin") ``` Note, instead of using the built-in models, you may also input valid JSON. For example, `AutoScale(min_workers=1, max_workers=2)` is equivalent to `{"min_workers": 1, "max_workers": 2}`. ## Resources For assistance using Databricks, consult the [Databricks documentation](https://www.databricks.com/databricks-documentation). Refer to the `prefect-databricks` [SDK documentation](/integrations/prefect-databricks/api-ref/prefect_databricks-credentials) to explore all the capabilities of the `prefect-databricks` library. 
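As a concrete illustration of the JSON-equivalence note above, the `NewCluster` from the earlier flow could be expressed as plain dictionaries instead of model instances (a sketch; the field values are the same illustrative ones used in the flow above):

```python theme={null}
# Plain-dict equivalent of the AwsAttributes/AutoScale/NewCluster models
# used in the flow above. Keys mirror the model field names.
new_cluster = {
    "aws_attributes": {
        "availability": "SPOT",
        "zone_id": "us-west-2a",
        "ebs_volume_type": "GENERAL_PURPOSE_SSD",
        "ebs_volume_count": 3,
        "ebs_volume_size": 100,
    },
    "autoscale": {"min_workers": 1, "max_workers": 2},
    "node_type_id": "m4.large",
    "spark_version": "10.4.x-scala2.12",
    "spark_conf": {"spark.speculation": True},
}
```

A dictionary like this can be passed anywhere the corresponding model is accepted, such as the `new_cluster` field of a task's settings.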
# __init__ Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-__init__ # `prefect_dbt.cli` *This module is empty or contains only private/internal implementations.* # commands Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-commands # `prefect_dbt.cli.commands` Module containing tasks and flows for interacting with dbt CLI ## Functions ### `atrigger_dbt_cli_command` ```python theme={null} atrigger_dbt_cli_command(command: str, profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: Optional[str] = 'dbt-cli-command-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) -> Optional[dbtRunnerResult] ``` Async task for running dbt commands. See trigger\_dbt\_cli\_command for full docs. ### `trigger_dbt_cli_command` ```python theme={null} trigger_dbt_cli_command(command: str, profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: Optional[str] = 'dbt-cli-command-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) -> Optional[dbtRunnerResult] ``` Task for running dbt commands. If no profiles.yml file is found or if overwrite\_profiles flag is set to True, this will first generate a profiles.yml file in the profiles\_dir directory. Then run the dbt CLI shell command. **Args:** * `command`: The dbt command to be executed. * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the command provided. 
If this is not set, will try using the DBT\_PROFILES\_DIR environment variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. * `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt results artifact in Prefect. Defaults to 'dbt-cli-command-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt command. These arguments get appended to the command that gets passed to the dbtRunner client. Example: extra\_command\_args=\["--model", "foo\_model"] * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. **Returns:** * The result from the dbt CLI invocation. **Examples:** Execute `dbt debug` with a pre-populated profiles.yml. 
```python theme={null} from prefect import flow from prefect_dbt.cli.commands import trigger_dbt_cli_command @flow def trigger_dbt_cli_command_flow(): result = trigger_dbt_cli_command("dbt debug") return result trigger_dbt_cli_command_flow() ``` ### `arun_dbt_build` ```python theme={null} arun_dbt_build(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-build-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Async version of run\_dbt\_build. See run\_dbt\_build for full documentation. ### `run_dbt_build` ```python theme={null} run_dbt_build(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-build-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Executes the 'dbt build' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt build results. **Args:** * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the command provided. If this is not set, will try using the DBT\_PROFILES\_DIR env variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! 
This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. * `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt build results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt build results artifact in Prefect. Defaults to 'dbt-build-task-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt build command. * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. **Raises:** * `ValueError`: If required dbt\_cli\_profile is not provided when needed for profile writing. * `RuntimeError`: If the dbt build fails for any reason, it will be indicated by the exception raised. ### `arun_dbt_model` ```python theme={null} arun_dbt_model(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-run-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Async version of run\_dbt\_model. See run\_dbt\_model for full documentation. ### `run_dbt_model` ```python theme={null} run_dbt_model(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-run-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Executes the 'dbt run' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt model results. **Args:** * `profiles_dir`: The directory to search for the profiles.yml file. 
Setting this appends the `--profiles-dir` option to the command provided. If this is not set, will try using the DBT\_PROFILES\_DIR env variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. * `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt model run results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt model run results artifact in Prefect. Defaults to 'dbt-run-task-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt run command. * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. **Raises:** * `ValueError`: If required dbt\_cli\_profile is not provided when needed for profile writing. * `RuntimeError`: If the dbt run fails for any reason, it will be indicated by the exception raised. ### `arun_dbt_test` ```python theme={null} arun_dbt_test(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-test-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Async version of run\_dbt\_test. See run\_dbt\_test for full documentation. 
### `run_dbt_test` ```python theme={null} run_dbt_test(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-test-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Executes the 'dbt test' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt test results. **Args:** * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the command provided. If this is not set, will try using the DBT\_PROFILES\_DIR env variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. * `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt test results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt test results artifact in Prefect. Defaults to 'dbt-test-task-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt test command. * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. **Raises:** * `ValueError`: If required dbt\_cli\_profile is not provided when needed for profile writing. 
* `RuntimeError`: If the dbt test fails for any reason, it will be indicated by the exception raised. ### `arun_dbt_snapshot` ```python theme={null} arun_dbt_snapshot(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-snapshot-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Async version of run\_dbt\_snapshot. See run\_dbt\_snapshot for full documentation. ### `run_dbt_snapshot` ```python theme={null} run_dbt_snapshot(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-snapshot-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Executes the 'dbt snapshot' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt snapshot results. **Args:** * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the command provided. If this is not set, will try using the DBT\_PROFILES\_DIR env variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. 
* `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt snapshot results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt snapshot results artifact in Prefect. Defaults to 'dbt-snapshot-task-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt snapshot command. * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. **Raises:** * `ValueError`: If required dbt\_cli\_profile is not provided when needed for profile writing. * `RuntimeError`: If the dbt snapshot fails for any reason, it will be indicated by the exception raised. ### `arun_dbt_seed` ```python theme={null} arun_dbt_seed(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-seed-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Async version of run\_dbt\_seed. See run\_dbt\_seed for full documentation. ### `run_dbt_seed` ```python theme={null} run_dbt_seed(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-seed-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Executes the 'dbt seed' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt seed results. **Args:** * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the command provided. 
If this is not set, will try using the DBT\_PROFILES\_DIR env variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. * `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt seed results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt seed results artifact in Prefect. Defaults to 'dbt-seed-task-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt seed command. * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. **Raises:** * `ValueError`: If required dbt\_cli\_profile is not provided when needed for profile writing. * `RuntimeError`: If the dbt seed fails for any reason, it will be indicated by the exception raised. ### `arun_dbt_source_freshness` ```python theme={null} arun_dbt_source_freshness(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-source-freshness-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Async version of run\_dbt\_source\_freshness. See run\_dbt\_source\_freshness for full documentation. 
### `run_dbt_source_freshness` ```python theme={null} run_dbt_source_freshness(profiles_dir: Optional[Union[Path, str]] = None, project_dir: Optional[Union[Path, str]] = None, overwrite_profiles: bool = False, dbt_cli_profile: Optional[DbtCliProfile] = None, create_summary_artifact: bool = False, summary_artifact_key: str = 'dbt-source-freshness-task-summary', extra_command_args: Optional[List[str]] = None, stream_output: bool = True) ``` Executes the 'dbt source freshness' command within a Prefect task, and optionally creates a Prefect artifact summarizing the dbt source freshness results. **Args:** * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the command provided. If this is not set, will try using the DBT\_PROFILES\_DIR env variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. * `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. * `create_summary_artifact`: If True, creates a Prefect artifact on the task run with the dbt source freshness results using the specified artifact key. Defaults to False. * `summary_artifact_key`: The key under which to store the dbt source freshness results artifact in Prefect. Defaults to 'dbt-source-freshness-task-summary'. * `extra_command_args`: Additional command arguments to pass to the dbt source freshness command. * `stream_output`: If True, the output from the dbt command will be logged in Prefect as it happens. Defaults to True. 
**Raises:** * `ValueError`: If required dbt\_cli\_profile is not provided when needed for profile writing. * `RuntimeError`: If the dbt source freshness command fails for any reason, it will be indicated by the exception raised. ### `create_summary_markdown` ```python theme={null} create_summary_markdown(run_results: dict[str, Any], command: str) -> str ``` Creates a Prefect task artifact summarizing the results of one of the predefined prefect-dbt tasks above. ### `consolidate_run_results` ```python theme={null} consolidate_run_results(results: dbtRunnerResult) -> dict ``` ## Classes ### `DbtCoreOperation` A block representing a dbt operation, containing multiple dbt and shell commands. For long-lasting operations, use the trigger method and use the block as a context manager so processes are closed automatically when the context exits; otherwise, call the close method manually to close processes. For short-lasting operations, use the run method, which manages context automatically. **Attributes:** * `commands`: A list of commands to execute sequentially. * `stream_output`: Whether to stream output. * `env`: A dictionary of environment variables to set for the shell operation. * `working_dir`: The working directory context the commands will be executed within. * `shell`: The shell to use to execute the commands. * `extension`: The extension to use for the temporary file. If unset, defaults to `.ps1` on Windows and `.sh` on other platforms. * `profiles_dir`: The directory to search for the profiles.yml file. Setting this appends the `--profiles-dir` option to the dbt commands provided. If this is not set, will try using the DBT\_PROFILES\_DIR environment variable, but if that's also not set, will use the default directory `$HOME/.dbt/`. * `project_dir`: The directory to search for the dbt\_project.yml file. Default is the current working directory and its parents. 
* `overwrite_profiles`: Whether the existing profiles.yml file under profiles\_dir should be overwritten with a new profile. * `dbt_cli_profile`: Profiles class containing the profile written to profiles.yml. Note! This is optional and will raise an error if profiles.yml already exists under profile\_dir and overwrite\_profiles is set to False. **Examples:** Load a configured block. ```python theme={null} from prefect_dbt import DbtCoreOperation dbt_op = DbtCoreOperation.load("BLOCK_NAME") ``` Execute short-lasting dbt debug and list with a custom DbtCliProfile. ```python theme={null} from prefect_dbt import DbtCoreOperation, DbtCliProfile from prefect_dbt.cli.configs import SnowflakeTargetConfigs from prefect_snowflake import SnowflakeConnector snowflake_connector = SnowflakeConnector.load("snowflake-connector") target_configs = SnowflakeTargetConfigs(connector=snowflake_connector) dbt_cli_profile = DbtCliProfile( name="jaffle_shop", target="dev", target_configs=target_configs, ) dbt_init = DbtCoreOperation( commands=["dbt debug", "dbt list"], dbt_cli_profile=dbt_cli_profile, overwrite_profiles=True ) dbt_init.run() ``` Execute a longer-lasting dbt run as a context manager. ```python theme={null} with DbtCoreOperation(commands=["dbt run"]) as dbt_run: dbt_process = dbt_run.trigger() # do other things dbt_process.wait_for_completion() dbt_output = dbt_process.fetch_result() ``` # __init__ Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-configs-__init__ # `prefect_dbt.cli.configs` *This module is empty or contains only private/internal implementations.* # base Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-configs-base # `prefect_dbt.cli.configs.base` Module containing models for base configs ## Classes ### `DbtConfigs` Abstract class for other dbt Configs. 
**Attributes:** * `extras`: Extra target configs' keywords, not yet exposed in prefect-dbt, but available in dbt; if there are duplicate keys between extras and TargetConfigs, an error will be raised. **Methods:** #### `get_configs` ```python theme={null} get_configs(self) -> dict[str, Any] ``` Returns the dbt configs, likely used eventually for writing to profiles.yml. **Returns:** * A configs JSON. ### `BaseTargetConfigs` **Methods:** #### `get_configs` ```python theme={null} get_configs(self) -> dict[str, Any] ``` Returns the dbt configs, likely used eventually for writing to profiles.yml. **Returns:** * A configs JSON. #### `handle_target_configs` ```python theme={null} handle_target_configs(cls, v: Any) -> Any ``` Handle target configs field aliasing during validation ### `TargetConfigs` Target configs contain credentials and settings, specific to the warehouse you're connecting to. To find valid keys, head to the [Available adapters](https://docs.getdbt.com/docs/available-adapters) page and click the desired adapter's "Profile Setup" hyperlink. **Attributes:** * `type`: The name of the database warehouse. * `schema`: The schema that dbt will build objects into; in BigQuery, a schema is actually a dataset. * `threads`: The number of threads representing the max number of paths through the graph dbt may work on at once. **Examples:** Load stored TargetConfigs: ```python theme={null} from prefect_dbt.cli.configs import TargetConfigs dbt_cli_target_configs = TargetConfigs.load("BLOCK_NAME") ``` **Methods:** #### `from_profiles_yml` ```python theme={null} from_profiles_yml(cls: Type[Self], profile_name: Optional[str] = None, target_name: Optional[str] = None, profiles_dir: Optional[str] = None, allow_field_overrides: bool = False) -> 'TargetConfigs' ``` Create a TargetConfigs instance from a dbt profiles.yml file. **Args:** * `profile_name`: Name of the profile to use from profiles.yml. If None, uses the first profile. 
* `target_name`: Name of the target to use from the profile. If None, uses the default target in the selected profile.
* `profiles_dir`: Path to the directory containing profiles.yml. If None, uses the default profiles directory.
* `allow_field_overrides`: If enabled, fields from dbt target configs will override fields provided in extras and credentials.

**Returns:**

* A TargetConfigs instance populated from the profiles.yml target.

**Raises:**

* `ValueError`: If profiles.yml is not found or if profile/target is invalid

#### `handle_target_configs`

```python theme={null}
handle_target_configs(cls, v: Any) -> Any
```

Handle target configs field aliasing during validation

### `GlobalConfigs`

Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. Docs can be found [here](https://docs.getdbt.com/reference/global-configs).

**Attributes:**

* `send_anonymous_usage_stats`: Whether usage stats are sent to dbt.
* `use_colors`: Whether to colorize the output dbt prints in your terminal.
* `partial_parse`: When partial parsing is enabled, dbt will use a stored internal manifest to determine which files have changed (if any) since it last parsed the project.
* `printer_width`: Number of characters in a line before starting a new line.
* `write_json`: Determines whether dbt writes JSON artifacts to the target/ directory.
* `warn_error`: Whether to convert dbt warnings into errors.
* `log_format`: The LOG\_FORMAT config specifies how dbt's logs should be formatted. If the value of this config is json, dbt will output fully structured logs in JSON format.
* `debug`: Whether to redirect dbt's debug logs to standard out.
* `version_check`: Whether to raise an error if a project's version is used with an incompatible dbt version.
* `fail_fast`: Make dbt exit immediately if a single resource fails to build.
* `use_experimental_parser`: Opt into the latest experimental version of the static parser. * `static_parser`: Whether to use the [static parser](https://docs.getdbt.com/reference/parsing#static-parser). **Examples:** Load stored GlobalConfigs: ```python theme={null} from prefect_dbt.cli.configs import GlobalConfigs dbt_cli_global_configs = GlobalConfigs.load("BLOCK_NAME") ``` **Methods:** #### `get_configs` ```python theme={null} get_configs(self) -> dict[str, Any] ``` Returns the dbt configs, likely used eventually for writing to profiles.yml. **Returns:** * A configs JSON. ### `MissingExtrasRequireError` # bigquery Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-configs-bigquery # `prefect_dbt.cli.configs.bigquery` Module containing models for BigQuery configs ## Classes ### `BigQueryTargetConfigs` Target configs contain credentials and settings, specific to BigQuery. To find valid keys, head to the [BigQuery Profile](https://docs.getdbt.com/reference/warehouse-profiles/bigquery-profile) page. **Attributes:** * `credentials`: The credentials to use to authenticate; if there are duplicate keys between credentials and TargetConfigs, e.g. schema, an error will be raised. **Examples:** Load stored BigQueryTargetConfigs. ```python theme={null} from prefect_dbt.cli.configs import BigQueryTargetConfigs bigquery_target_configs = BigQueryTargetConfigs.load("BLOCK_NAME") ``` Instantiate BigQueryTargetConfigs. ```python theme={null} from prefect_dbt.cli.configs import BigQueryTargetConfigs from prefect_gcp.credentials import GcpCredentials credentials = GcpCredentials.load("BLOCK-NAME-PLACEHOLDER") target_configs = BigQueryTargetConfigs( schema="schema", # also known as dataset credentials=credentials, ) ``` **Methods:** #### `get_configs` ```python theme={null} get_configs(self) -> Dict[str, Any] ``` Returns the dbt configs specific to BigQuery profile. **Returns:** * A configs JSON. 
#### `handle_target_configs` ```python theme={null} handle_target_configs(cls, v: Any) -> Any ``` Handle target configs field aliasing during validation # postgres Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-configs-postgres # `prefect_dbt.cli.configs.postgres` Module containing models for Postgres configs ## Classes ### `PostgresTargetConfigs` Target configs contain credentials and settings, specific to Postgres. To find valid keys, head to the [Postgres Profile](https://docs.getdbt.com/reference/warehouse-profiles/postgres-profile) page. **Attributes:** * `credentials`: The credentials to use to authenticate; if there are duplicate keys between credentials and TargetConfigs, e.g. schema, an error will be raised. **Methods:** #### `get_configs` ```python theme={null} get_configs(self) -> Dict[str, Any] ``` Returns the dbt configs specific to Postgres profile. **Returns:** * A configs JSON. #### `handle_target_configs` ```python theme={null} handle_target_configs(cls, v: Any) -> Any ``` Handle target configs field aliasing during validation # snowflake Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-configs-snowflake # `prefect_dbt.cli.configs.snowflake` Module containing models for Snowflake configs ## Classes ### `SnowflakeTargetConfigs` Target configs contain credentials and settings, specific to Snowflake. To find valid keys, head to the [Snowflake Profile](https://docs.getdbt.com/reference/warehouse-profiles/snowflake-profile) page. **Attributes:** * `connector`: The connector to use. **Examples:** Load stored SnowflakeTargetConfigs: ```python theme={null} from prefect_dbt.cli.configs import SnowflakeTargetConfigs snowflake_target_configs = SnowflakeTargetConfigs.load("BLOCK_NAME") ``` Instantiate SnowflakeTargetConfigs. 
```python theme={null} from prefect_dbt.cli.configs import SnowflakeTargetConfigs from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector credentials = SnowflakeCredentials( user="user", password="password", account="account.region.aws", role="role", ) connector = SnowflakeConnector( schema="public", database="database", warehouse="warehouse", credentials=credentials, ) target_configs = SnowflakeTargetConfigs( connector=connector, extras={"retry_on_database_errors": True}, ) ``` **Methods:** #### `get_configs` ```python theme={null} get_configs(self) -> Dict[str, Any] ``` Returns the dbt configs specific to Snowflake profile. **Returns:** * A configs JSON. #### `handle_target_configs` ```python theme={null} handle_target_configs(cls, v: Any) -> Any ``` Handle target configs field aliasing during validation # credentials Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cli-credentials # `prefect_dbt.cli.credentials` Module containing credentials for interacting with dbt CLI ## Functions ### `target_configs_discriminator` ```python theme={null} target_configs_discriminator(v: Any) -> str ``` Discriminator function for target configs. Returns the block type slug. ## Classes ### `DbtCliProfile` Profile for use across dbt CLI tasks and flows. **Attributes:** * `name`: Profile name used for populating profiles.yml. * `target`: The default target your dbt project will use. * `target_configs`: Target configs contain credentials and settings, specific to the warehouse you're connecting to. To find valid keys, head to the [Available adapters](https://docs.getdbt.com/docs/available-adapters) page and click the desired adapter's "Profile Setup" hyperlink. * `global_configs`: Global configs control things like the visual output of logs, the manner in which dbt parses your project, and what to do when dbt finds a version mismatch or a failing model. 
Valid keys can be found [here](https://docs.getdbt.com/reference/global-configs). **Examples:** Load stored dbt CLI profile: ```python theme={null} from prefect_dbt.cli import DbtCliProfile dbt_cli_profile = DbtCliProfile.load("BLOCK_NAME").get_profile() ``` Get a dbt Snowflake profile from DbtCliProfile by using SnowflakeTargetConfigs: ```python theme={null} from prefect_dbt.cli import DbtCliProfile from prefect_dbt.cli.configs import SnowflakeTargetConfigs from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector credentials = SnowflakeCredentials( user="user", password="password", account="account.region.aws", role="role", ) connector = SnowflakeConnector( schema="public", database="database", warehouse="warehouse", credentials=credentials, ) target_configs = SnowflakeTargetConfigs( connector=connector ) dbt_cli_profile = DbtCliProfile( name="jaffle_shop", target="dev", target_configs=target_configs, ) profile = dbt_cli_profile.get_profile() ``` Get a dbt Redshift profile from DbtCliProfile by using generic TargetConfigs: ```python theme={null} from prefect_dbt.cli import DbtCliProfile from prefect_dbt.cli.configs import GlobalConfigs, TargetConfigs target_configs_extras = dict( host="hostname.region.redshift.amazonaws.com", user="username", password="password1", port=5439, dbname="analytics", ) target_configs = TargetConfigs( type="redshift", schema="schema", threads=4, extras=target_configs_extras ) dbt_cli_profile = DbtCliProfile( name="jaffle_shop", target="dev", target_configs=target_configs, ) profile = dbt_cli_profile.get_profile() ``` **Methods:** #### `get_profile` ```python theme={null} get_profile(self) -> Dict[str, Any] ``` Returns the dbt profile, likely used for writing to profiles.yml. **Returns:** * A JSON compatible dictionary with the expected format of profiles.yml. 
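The dictionary returned by `get_profile` mirrors dbt's documented profiles.yml layout: profiles keyed by name, with the active target selected by a top-level `target` key and its settings nested under `outputs`. As a rough sketch, using placeholder values rather than output captured from prefect-dbt:

```python
# Sketch of a profiles.yml-shaped dictionary, following dbt's documented
# layout. The values below are illustrative placeholders.
profile = {
    "jaffle_shop": {
        "target": "dev",
        "outputs": {
            "dev": {
                "type": "redshift",
                "schema": "schema",
                "threads": 4,
                "host": "hostname.region.redshift.amazonaws.com",
                "user": "username",
                "password": "password1",
                "port": 5439,
                "dbname": "analytics",
            }
        },
    }
}

# The active target's settings are found by following the "target" key
# into the "outputs" mapping.
target = profile["jaffle_shop"]["outputs"][profile["jaffle_shop"]["target"]]
print(target["type"])  # redshift
```

Serializing a dictionary of this shape to YAML yields a file dbt can consume directly, which is why `get_profile` describes its result as "JSON compatible" with the profiles.yml format.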
# __init__

Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-__init__

# `prefect_dbt.cloud`

*This module is empty or contains only private/internal implementations.*

# clients

Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-clients

# `prefect_dbt.cloud.clients`

Module containing clients for interacting with the dbt Cloud API

## Classes

### `DbtCloudAdministrativeClient`

Client for interacting with the dbt Cloud administrative API.

**Args:**

* `api_key`: API key to authenticate with the dbt Cloud administrative API.
* `account_id`: ID of dbt Cloud account with which to interact.
* `domain`: Domain at which the dbt Cloud API is hosted.

**Methods:**

#### `call_endpoint`

```python theme={null}
call_endpoint(self, http_method: str, path: str, params: Optional[Dict[str, Any]] = None, json: Optional[Dict[str, Any]] = None) -> Response
```

Call an endpoint in the dbt Cloud API.

**Args:**

* `path`: The partial path for the request (e.g. /projects/). Will be appended onto the base URL as determined by the client configuration.
* `http_method`: HTTP method to call on the endpoint.
* `params`: Query parameters to include in the request.
* `json`: JSON serializable body to send in the request.

**Returns:**

* The response from the dbt Cloud administrative API.

#### `create_job`

```python theme={null}
create_job(self, project_id: int, environment_id: int, name: str, execute_steps: Optional[List[str]] = None, **kwargs: Any) -> Response
```

Creates a new dbt Cloud job.

**Args:**

* `project_id`: Numeric ID of the project for the job.
* `environment_id`: Numeric ID of the environment for the job.
* `name`: Name of the job.
* `execute_steps`: List of dbt commands to execute (e.g. \["dbt run", "dbt test"]).
* `**kwargs`: Additional job configuration options (e.g. triggers, settings).

**Returns:**

* The response from the dbt Cloud administrative API.
#### `delete_job` ```python theme={null} delete_job(self, job_id: int) -> Response ``` Deletes a dbt Cloud job. **Args:** * `job_id`: Numeric ID of the job to delete. **Returns:** * The response from the dbt Cloud administrative API. #### `get_job` ```python theme={null} get_job(self, job_id: int, order_by: Optional[str] = None) -> Response ``` Return job details for a job on an account. **Args:** * `job_id`: Numeric ID of the job. * `order_by`: Field to order the result by. Use - to indicate reverse order. **Returns:** * The response from the dbt Cloud administrative API. #### `get_job_artifact` ```python theme={null} get_job_artifact(self, job_id: int, path: str) -> Response ``` Fetches an artifact from the most recent successful run of a job. **Args:** * `job_id`: The ID of the job whose latest artifact to fetch. * `path`: The relative artifact path (e.g. "manifest.json"). **Returns:** * The response from the dbt Cloud administrative API. #### `get_run` ```python theme={null} get_run(self, run_id: int, include_related: Optional[List[Literal['trigger', 'job', 'debug_logs', 'run_steps']]] = None) -> Response ``` Sends a request to the [get run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getRunById) to get details about a job run. **Args:** * `run_id`: The ID of the run to get details for. * `include_related`: List of related fields to pull with the run. Valid values are "trigger", "job", "debug\_logs", and "run\_steps". If "debug\_logs" is not provided in a request, then the included debug logs will be truncated to the last 1,000 lines of the debug log output file. **Returns:** * The response from the dbt Cloud administrative API. 
#### `get_run_artifact` ```python theme={null} get_run_artifact(self, run_id: int, path: str, step: Optional[int] = None) -> Response ``` Sends a request to the [get run artifact endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/getArtifactsByRunId) to fetch an artifact generated for a completed run. **Args:** * `run_id`: The ID of the run to list run artifacts for. * `path`: The relative path to the run artifact (e.g. manifest.json, catalog.json, run\_results.json) * `step`: The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run. **Returns:** * The response from the dbt Cloud administrative API. #### `list_run_artifacts` ```python theme={null} list_run_artifacts(self, run_id: int, step: Optional[int] = None) -> Response ``` Sends a request to the [list run artifacts endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Runs/operation/listArtifactsByRunId) to fetch a list of paths of artifacts generated for a completed run. **Args:** * `run_id`: The ID of the run to list run artifacts for. * `step`: The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run. **Returns:** * The response from the dbt Cloud administrative API. #### `trigger_job_run` ```python theme={null} trigger_job_run(self, job_id: int, options: Optional[TriggerJobRunOptions] = None) -> Response ``` Sends a request to the [trigger job run endpoint](https://docs.getdbt.com/dbt-cloud/api-v2#tag/Jobs/operation/triggerRun) to initiate a job run. **Args:** * `job_id`: The ID of the job to trigger. * `options`: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run. **Returns:** * The response from the dbt Cloud administrative API. 
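The `path` argument to `call_endpoint` is appended to a base URL built from the client's `domain` and `account_id`. A minimal sketch of that composition, where the `/api/v2/accounts/{account_id}` prefix is an assumption drawn from the public dbt Cloud v2 API docs rather than from this client's internals:

```python
def build_admin_url(domain: str, account_id: int, path: str) -> str:
    """Compose a dbt Cloud administrative API URL from its parts.

    The /api/v2/accounts/{account_id} prefix is assumed from the
    public dbt Cloud v2 API documentation; call_endpoint handles
    this composition internally.
    """
    return f"https://{domain}/api/v2/accounts/{account_id}{path}"

# A path of "/projects/" against the default domain resolves to:
print(build_admin_url("cloud.getdbt.com", 123456789, "/projects/"))
# https://cloud.getdbt.com/api/v2/accounts/123456789/projects/
```

This is why `path` values in the examples above begin with a leading slash: they are joined verbatim onto the account-scoped base URL.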
### `DbtCloudMetadataClient`

Client for interacting with the dbt Cloud metadata API.

**Args:**

* `api_key`: API key to authenticate with the dbt Cloud metadata API.
* `domain`: Domain at which the dbt Cloud API is hosted.

**Methods:**

#### `query`

```python theme={null}
query(self, query: str, variables: Optional[Dict] = None, operation_name: Optional[str] = None) -> Dict[str, Any]
```

Run a GraphQL query against the dbt Cloud metadata API.

**Args:**

* `query`: The GraphQL query to run.
* `variables`: The values of any variables defined in the GraphQL query.
* `operation_name`: The name of the operation to run if multiple operations are defined in the provided query.

**Returns:**

* The result of the GraphQL query.

# credentials

Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-credentials

# `prefect_dbt.cloud.credentials`

Module containing credentials for interacting with dbt Cloud

## Classes

### `DbtCloudCredentials`

Credentials block for credential use across dbt Cloud tasks and flows.

**Attributes:**

* `api_key`: API key to authenticate with the dbt Cloud administrative API. Refer to the [Authentication docs](https://docs.getdbt.com/dbt-cloud/api-v2#section/Authentication) for retrieving the API key.
* `account_id`: ID of dbt Cloud account with which to interact.
* `domain`: Domain at which the dbt Cloud API is hosted.
**Examples:**

Load stored dbt Cloud credentials:

```python theme={null}
from prefect_dbt.cloud import DbtCloudCredentials

dbt_cloud_credentials = DbtCloudCredentials.load("BLOCK_NAME")
```

Use a DbtCloudCredentials instance to trigger a job run:

```python theme={null}
from prefect_dbt.cloud import DbtCloudCredentials

dbt_cloud_credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789)

async with dbt_cloud_credentials.get_administrative_client() as client:
    await client.trigger_job_run(job_id=1)
```

Load saved dbt Cloud credentials within a flow:

```python theme={null}
from prefect import flow
from prefect_dbt.cloud import DbtCloudCredentials
from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run

@flow
def trigger_dbt_cloud_job_run_flow():
    credentials = DbtCloudCredentials.load("my-dbt-credentials")
    trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)

trigger_dbt_cloud_job_run_flow()
```

**Methods:**

#### `get_administrative_client`

```python theme={null}
get_administrative_client(self) -> DbtCloudAdministrativeClient
```

Returns a newly instantiated client for working with the dbt Cloud administrative API.

**Returns:**

* An authenticated dbt Cloud administrative API client.

#### `get_client`

```python theme={null}
get_client(self, client_type: Literal['administrative', 'metadata']) -> Union[DbtCloudAdministrativeClient, DbtCloudMetadataClient]
```

Returns a newly instantiated client for working with the dbt Cloud API.

**Args:**

* `client_type`: Type of client to return. Accepts either 'administrative' or 'metadata'.

**Returns:**

* The authenticated client of the requested type.

#### `get_metadata_client`

```python theme={null}
get_metadata_client(self) -> DbtCloudMetadataClient
```

Returns a newly instantiated client for working with the dbt Cloud metadata API.

**Returns:**

* An authenticated dbt Cloud metadata API client.
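`get_client` is effectively a thin dispatcher over the two dedicated factory methods. The selection logic amounts to something like the following sketch, where the stand-in factory functions are hypothetical placeholders, not the real client classes:

```python
from typing import Callable, Dict

def make_administrative_client() -> str:
    # Stand-in for DbtCloudCredentials.get_administrative_client
    return "administrative-client"

def make_metadata_client() -> str:
    # Stand-in for DbtCloudCredentials.get_metadata_client
    return "metadata-client"

# Map the client_type literal to the matching factory.
_FACTORIES: Dict[str, Callable[[], str]] = {
    "administrative": make_administrative_client,
    "metadata": make_metadata_client,
}

def get_client(client_type: str) -> str:
    try:
        return _FACTORIES[client_type]()
    except KeyError:
        # Mirrors the documented contract: only 'administrative'
        # and 'metadata' are accepted.
        raise ValueError(f"Unknown client type: {client_type!r}")
```

Passing any string other than the two accepted literals fails fast rather than returning an unauthenticated or wrong-typed client.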
# exceptions

Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-exceptions

# `prefect_dbt.cloud.exceptions`

## Classes

### `DbtCloudException`

Base class for dbt Cloud exceptions

### `DbtCloudGetRunFailed`

Raised when unable to retrieve dbt Cloud run

### `DbtCloudListRunArtifactsFailed`

Raised when unable to list dbt Cloud run artifacts

### `DbtCloudGetRunArtifactFailed`

Raised when unable to get a dbt Cloud run artifact

### `DbtCloudJobRunFailed`

Raised when a triggered job run fails

### `DbtCloudJobRunCancelled`

Raised when a triggered job run is cancelled

### `DbtCloudJobRunTimedOut`

Raised when a triggered job run does not complete in the configured max wait seconds

### `DbtCloudJobRunTriggerFailed`

Raised when a dbt Cloud job trigger fails.

### `DbtCloudGetJobFailed`

Raised when unable to retrieve dbt Cloud job.

### `DbtCloudJobRunIncomplete`

Raised when a triggered job run is not complete.

### `DbtCloudCreateJobFailed`

Raised when unable to create a dbt Cloud job.

### `DbtCloudDeleteJobFailed`

Raised when unable to delete a dbt Cloud job.

# jobs

Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-jobs

# `prefect_dbt.cloud.jobs`

Module containing tasks and flows for interacting with dbt Cloud jobs

## Functions

### `get_dbt_cloud_job_info`

```python theme={null}
get_dbt_cloud_job_info(dbt_cloud_credentials: DbtCloudCredentials, job_id: int, order_by: Optional[str] = None) -> Dict
```

A task to retrieve information about a dbt Cloud job.

**Args:**

* `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud.
* `job_id`: The ID of the job to get.
* `order_by`: Field to order the result by. Use - to indicate reverse order.

**Returns:**

* The job data returned by the dbt Cloud administrative API.

### `create_dbt_cloud_job`

```python theme={null}
create_dbt_cloud_job(dbt_cloud_credentials: DbtCloudCredentials, project_id: int, environment_id: int, name: str, execute_steps: Optional[List[str]] = None, **kwargs: Any) -> Dict
```

A task to create a new dbt Cloud job.
**Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `project_id`: The ID of the project to create the job in. * `environment_id`: The ID of the environment for the job. * `name`: The name of the job. * `execute_steps`: List of dbt commands to execute (e.g. \["dbt run", "dbt test"]). Defaults to \["dbt build"]. * `**kwargs`: Additional job configuration options. **Returns:** * The job data returned by the dbt Cloud administrative API. ### `delete_dbt_cloud_job` ```python theme={null} delete_dbt_cloud_job(dbt_cloud_credentials: DbtCloudCredentials, job_id: int) -> None ``` A task to delete a dbt Cloud job. **Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `job_id`: The ID of the job to delete. ### `trigger_dbt_cloud_job_run` ```python theme={null} trigger_dbt_cloud_job_run(dbt_cloud_credentials: DbtCloudCredentials, job_id: int, options: Optional[TriggerJobRunOptions] = None) -> Dict ``` A task to trigger a dbt Cloud job run. **Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `job_id`: The ID of the job to trigger. * `options`: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run. **Returns:** * The run data returned from the dbt Cloud administrative API. 
**Examples:**

Trigger a dbt Cloud job run:

```python theme={null}
from prefect import flow
from prefect_dbt.cloud import DbtCloudCredentials
from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run

@flow
def trigger_dbt_cloud_job_run_flow():
    credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789)
    trigger_dbt_cloud_job_run(dbt_cloud_credentials=credentials, job_id=1)

trigger_dbt_cloud_job_run_flow()
```

Trigger a dbt Cloud job run with overrides:

```python theme={null}
from prefect import flow
from prefect_dbt.cloud import DbtCloudCredentials
from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run
from prefect_dbt.cloud.models import TriggerJobRunOptions

@flow
def trigger_dbt_cloud_job_run_flow():
    credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789)
    trigger_dbt_cloud_job_run(
        dbt_cloud_credentials=credentials,
        job_id=1,
        options=TriggerJobRunOptions(
            git_branch="staging",
            schema_override="dbt_cloud_pr_123",
            dbt_version_override="0.18.0",
            target_name_override="staging",
            timeout_seconds_override=3000,
            generate_docs_override=True,
            threads_override=8,
            steps_override=[
                "dbt seed",
                "dbt run --fail-fast",
                "dbt test --fail-fast",
            ],
        ),
    )

trigger_dbt_cloud_job_run_flow()
```

### `get_run_id`

```python theme={null}
get_run_id(obj: Dict)
```

Task that extracts the run ID from a trigger job run API response. This task is mainly used to maintain dependency tracking between the `trigger_dbt_cloud_job_run` task and downstream tasks/flows that use the run ID.

**Args:**

* `obj`: The JSON body from the trigger job run response.
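The trigger-run response from the dbt Cloud v2 API nests the run record under a `data` key, so the extraction `get_run_id` performs amounts to a single lookup. A sketch with a placeholder payload (the response shape is assumed from the dbt Cloud API docs, not from the task's source):

```python
def extract_run_id(obj: dict) -> int:
    # The v2 trigger-run response wraps the run record in "data";
    # this hypothetical helper mirrors what the get_run_id task pulls out.
    return obj["data"]["id"]

# Placeholder response body, trimmed to the fields relevant here.
response_body = {"data": {"id": 88640123, "status": 1}}
print(extract_run_id(response_body))  # 88640123
```

Keeping this extraction as its own task (rather than inlining the lookup) is what lets Prefect track the dependency between the trigger task and whatever consumes the run ID downstream.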
### `trigger_dbt_cloud_job_run_and_wait_for_completion`

```python theme={null}
trigger_dbt_cloud_job_run_and_wait_for_completion(dbt_cloud_credentials: DbtCloudCredentials, job_id: int, trigger_job_run_options: Optional[TriggerJobRunOptions] = None, max_wait_seconds: int = 900, poll_frequency_seconds: int = 10, retry_filtered_models_attempts: int = 3) -> Dict
```

Flow that triggers a job run and waits for the triggered run to complete.

**Args:**

* `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud.
* `job_id`: The ID of the job to trigger.
* `trigger_job_run_options`: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.
* `max_wait_seconds`: Maximum number of seconds to wait for the job to complete.
* `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion.
* `retry_filtered_models_attempts`: Number of times to retry models selected by `retry_status_filters`.

**Raises:**

* `DbtCloudJobRunCancelled`: The triggered dbt Cloud job run was cancelled.
* `DbtCloudJobRunFailed`: The triggered dbt Cloud job run failed.
* `RuntimeError`: The triggered dbt Cloud job run ended in an unexpected state.

**Returns:**

* The run data returned by the dbt Cloud administrative API.

**Examples:**

Trigger a dbt Cloud job and wait for completion as a standalone flow:

```python theme={null}
import asyncio

from prefect_dbt.cloud import DbtCloudCredentials
from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion

asyncio.run(
    trigger_dbt_cloud_job_run_and_wait_for_completion(
        dbt_cloud_credentials=DbtCloudCredentials(
            api_key="my_api_key",
            account_id=123456789
        ),
        job_id=1
    )
)
```

Trigger a dbt Cloud job and wait for completion as a sub-flow:

```python theme={null}
from prefect import flow
from prefect_dbt.cloud import DbtCloudCredentials
from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion

@flow
def my_flow():
    ...
    run_result = trigger_dbt_cloud_job_run_and_wait_for_completion(
        dbt_cloud_credentials=DbtCloudCredentials(
            api_key="my_api_key",
            account_id=123456789
        ),
        job_id=1
    )
    ...

my_flow()
```

Trigger a dbt Cloud job with overrides:

```python theme={null}
import asyncio

from prefect_dbt.cloud import DbtCloudCredentials
from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion
from prefect_dbt.cloud.models import TriggerJobRunOptions

asyncio.run(
    trigger_dbt_cloud_job_run_and_wait_for_completion(
        dbt_cloud_credentials=DbtCloudCredentials(
            api_key="my_api_key",
            account_id=123456789
        ),
        job_id=1,
        trigger_job_run_options=TriggerJobRunOptions(
            git_branch="staging",
            schema_override="dbt_cloud_pr_123",
            dbt_version_override="0.18.0",
            target_name_override="staging",
            timeout_seconds_override=3000,
            generate_docs_override=True,
            threads_override=8,
            steps_override=[
                "dbt seed",
                "dbt run --fail-fast",
                "dbt test --fail-fast",
            ],
        ),
    )
)
```

### `retry_dbt_cloud_job_run_subset_and_wait_for_completion`

```python theme={null}
retry_dbt_cloud_job_run_subset_and_wait_for_completion(dbt_cloud_credentials: DbtCloudCredentials, run_id: int, trigger_job_run_options: Optional[TriggerJobRunOptions] = None, max_wait_seconds: int = 900, poll_frequency_seconds: int = 10) -> Dict
```

Flow that retries a subset of a dbt Cloud job run, filtered by select statuses, and waits for the triggered retry to complete.

**Args:**

* `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud.
* `trigger_job_run_options`: An optional TriggerJobRunOptions instance to specify overrides for the triggered job run.
* `max_wait_seconds`: Maximum number of seconds to wait for the job to complete.
* `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion.
* `run_id`: The ID of the job run to retry.

**Raises:**

* `ValueError`: If `trigger_job_run_options.steps_override` is set by the user.

**Returns:**

* The run data returned by the dbt Cloud administrative API.
**Examples:** Retry a subset of models in a dbt Cloud job run and wait for completion: ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.jobs import retry_dbt_cloud_job_run_subset_and_wait_for_completion @flow def retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow(): credentials = DbtCloudCredentials.load("MY_BLOCK_NAME") retry_dbt_cloud_job_run_subset_and_wait_for_completion( dbt_cloud_credentials=credentials, run_id=88640123, ) retry_dbt_cloud_job_run_subset_and_wait_for_completion_flow() ``` ### `run_dbt_cloud_job` ```python theme={null} run_dbt_cloud_job(dbt_cloud_job: DbtCloudJob, targeted_retries: int = 3) -> Dict[str, Any] ``` Flow that triggers and waits for a dbt Cloud job run, retrying a subset of failed nodes if necessary. **Args:** * `dbt_cloud_job`: Block that holds the information and methods to interact with a dbt Cloud job. * `targeted_retries`: The number of times to retry failed steps. **Examples:** ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob from prefect_dbt.cloud.jobs import run_dbt_cloud_job @flow def run_dbt_cloud_job_flow(): dbt_cloud_credentials = DbtCloudCredentials.load("dbt-token") dbt_cloud_job = DbtCloudJob( dbt_cloud_credentials=dbt_cloud_credentials, job_id=154217 ) return run_dbt_cloud_job(dbt_cloud_job=dbt_cloud_job) run_dbt_cloud_job_flow() ``` ## Classes ### `DbtCloudJobRun` Class that holds the information and methods to interact with the resulting run of a dbt Cloud job. **Methods:** #### `fetch_result` ```python theme={null} fetch_result(self, step: Optional[int] = None) -> Dict[str, Any] ``` Gets the results from the job run. Since the results may not be ready, use wait\_for\_completion before calling this method. **Args:** * `step`: The index of the step in the run to query for artifacts. The first step in the run has the index 1. 
If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

#### `get_run`

```python theme={null}
get_run(self) -> Dict[str, Any]
```

Makes a request to the dbt Cloud API to get the run data.

**Returns:**

* The run data.

#### `get_run_artifacts`

```python theme={null}
get_run_artifacts(self, path: Literal['manifest.json', 'catalog.json', 'run_results.json'], step: Optional[int] = None) -> Union[Dict[str, Any], str]
```

Get an artifact generated for a completed run.

**Args:**

* `path`: The relative path to the run artifact.
* `step`: The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run.

**Returns:**

* The contents of the requested artifact. Returns a `Dict` if the requested artifact is a JSON file and a `str` otherwise.

#### `get_status_code`

```python theme={null}
get_status_code(self) -> int
```

Makes a request to the dbt Cloud API to get the run status.

**Returns:**

* The run status code.

#### `retry_failed_steps`

```python theme={null}
retry_failed_steps(self) -> 'DbtCloudJobRun'
```

Retries steps that did not complete successfully in a run.

**Returns:**

* A representation of the dbt Cloud job run.

#### `wait_for_completion`

```python theme={null}
wait_for_completion(self) -> None
```

Waits for the job run to reach a terminal state.

### `DbtCloudJob`

Block that holds the information and methods to interact with a dbt Cloud job.

**Attributes:**

* `dbt_cloud_credentials`: The credentials to use to authenticate with dbt Cloud.
* `job_id`: The id of the dbt Cloud job.
* `timeout_seconds`: The number of seconds to wait for the job to complete.
* `interval_seconds`: The number of seconds to wait between polling for job completion.
* `trigger_job_run_options`: The options to use when triggering a job run.
**Examples:**

Load a configured dbt Cloud job block.

```python theme={null}
from prefect_dbt.cloud import DbtCloudJob

dbt_cloud_job = DbtCloudJob.load("BLOCK_NAME")
```

Trigger a dbt Cloud job, wait for completion, and fetch the results.

```python theme={null}
from prefect import flow
from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob

@flow
def dbt_cloud_job_flow():
    dbt_cloud_credentials = DbtCloudCredentials.load("dbt-token")
    dbt_cloud_job = DbtCloudJob(
        dbt_cloud_credentials=dbt_cloud_credentials,
        job_id=154217
    )
    dbt_cloud_job_run = dbt_cloud_job.trigger()
    dbt_cloud_job_run.wait_for_completion()
    dbt_cloud_job_run.fetch_result()
    return dbt_cloud_job_run

dbt_cloud_job_flow()
```

**Methods:**

#### `get_job`

```python theme={null}
get_job(self, order_by: Optional[str] = None) -> Dict[str, Any]
```

Retrieve information about a dbt Cloud job.

**Args:**

* `order_by`: The field to order the results by.

**Returns:**

* The job data.

#### `trigger`

```python theme={null}
trigger(self, trigger_job_run_options: Optional[TriggerJobRunOptions] = None) -> DbtCloudJobRun
```

Triggers a dbt Cloud job.

**Returns:**

* A representation of the dbt Cloud job run.

# models

Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-models

# `prefect_dbt.cloud.models`

Module containing models used for passing data to dbt Cloud

## Functions

### `default_cause_factory`

```python theme={null}
default_cause_factory()
```

Factory function to populate the default cause for a job run to include information from the Prefect run context.

## Classes

### `TriggerJobRunOptions`

Defines options that can be set when triggering a dbt Cloud job run.
# runs Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-runs # `prefect_dbt.cloud.runs` Module containing tasks and flows for interacting with dbt Cloud job runs ## Functions ### `get_dbt_cloud_run_info` ```python theme={null} get_dbt_cloud_run_info(dbt_cloud_credentials: DbtCloudCredentials, run_id: int, include_related: Optional[List[Literal['trigger', 'job', 'debug_logs', 'run_steps']]] = None) -> Dict ``` A task to retrieve information about a dbt Cloud job run. **Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `run_id`: The ID of the run to get information about. * `include_related`: List of related fields to pull with the run. Valid values are "trigger", "job", "debug\_logs", and "run\_steps". If "debug\_logs" is not provided in a request, then the included debug logs will be truncated to the last 1,000 lines of the debug log output file. **Returns:** * The run data returned by the dbt Cloud administrative API. ### `list_dbt_cloud_run_artifacts` ```python theme={null} list_dbt_cloud_run_artifacts(dbt_cloud_credentials: DbtCloudCredentials, run_id: int, step: Optional[int] = None) -> List[str] ``` A task to list the artifact files generated for a completed run. **Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `run_id`: The ID of the run to list run artifacts for. * `step`: The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run. **Returns:** * A list of paths to artifact files that can be used to retrieve the generated artifacts. ### `get_dbt_cloud_run_artifact` ```python theme={null} get_dbt_cloud_run_artifact(dbt_cloud_credentials: DbtCloudCredentials, run_id: int, path: str, step: Optional[int] = None) -> Union[Dict, str] ``` A task to get an artifact generated for a completed run. 
The requested artifact is saved to a file in the current working directory. **Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `run_id`: The ID of the run to list run artifacts for. * `path`: The relative path to the run artifact (e.g. manifest.json, catalog.json, run\_results.json) * `step`: The index of the step in the run to query for artifacts. The first step in the run has the index 1. If the step parameter is omitted, then this method will return the artifacts compiled for the last step in the run. **Returns:** * The contents of the requested manifest. Returns a `Dict` if the requested artifact is a JSON file and a `str` otherwise. **Examples:** Get an artifact of a dbt Cloud job run: ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.runs import get_dbt_cloud_run_artifact @flow def get_artifact_flow(): credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789) return get_dbt_cloud_run_artifact( dbt_cloud_credentials=credentials, run_id=42, path="manifest.json" ) get_artifact_flow() ``` Get an artifact of a dbt Cloud job run and write it to a file: ```python theme={null} import json from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.jobs import get_dbt_cloud_run_artifact @flow def get_artifact_flow(): credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789) get_run_artifact_result = get_dbt_cloud_run_artifact( dbt_cloud_credentials=credentials, run_id=42, path="manifest.json" ) with open("manifest.json", "w") as file: json.dump(get_run_artifact_result, file) get_artifact_flow() ``` ### `wait_for_dbt_cloud_job_run` ```python theme={null} wait_for_dbt_cloud_job_run(run_id: int, dbt_cloud_credentials: DbtCloudCredentials, max_wait_seconds: int = 900, poll_frequency_seconds: int = 10) -> Tuple[DbtCloudJobRunStatus, Dict] ``` Waits for the given dbt Cloud job run to finish 
running. **Args:** * `run_id`: The ID of the run to wait for. * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `max_wait_seconds`: Maximum number of seconds to wait for the job to complete. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. **Raises:** * `DbtCloudJobRunTimedOut`: When the elapsed wait time exceeds `max_wait_seconds`. **Returns:** * An enum representing the final dbt Cloud job run status. * A dictionary containing information about the run after completion. ## Classes ### `DbtCloudJobRunStatus` dbt Cloud Job statuses. **Methods:** #### `is_terminal_status_code` ```python theme={null} is_terminal_status_code(cls, status_code: Any) -> bool ``` Returns True if a status code is terminal for a job run. Returns False otherwise. # utils Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-cloud-utils # `prefect_dbt.cloud.utils` Utilities for common interactions with the dbt Cloud API ## Functions ### `extract_user_message` ```python theme={null} extract_user_message(ex: HTTPStatusError) -> Optional[str] ``` Extracts the user message from an error response from the dbt Cloud administrative API. **Args:** * `ex`: An HTTPStatusError raised by httpx **Returns:** * The `user_message` from the dbt Cloud administrative API response, or None if a `user_message` cannot be extracted. ### `extract_developer_message` ```python theme={null} extract_developer_message(ex: HTTPStatusError) -> Optional[str] ``` Extracts the developer message from an error response from the dbt Cloud administrative API. 
**Args:** * `ex`: An HTTPStatusError raised by httpx **Returns:** * The `developer_message` from the dbt Cloud administrative API response, or None if a `developer_message` cannot be extracted. ### `call_dbt_cloud_administrative_api_endpoint` ```python theme={null} call_dbt_cloud_administrative_api_endpoint(dbt_cloud_credentials: DbtCloudCredentials, path: str, http_method: str, params: Optional[Dict[str, Any]] = None, json: Optional[Dict[str, Any]] = None) -> Any ``` Task that calls a specified endpoint in the dbt Cloud administrative API. Use this task if a prebuilt one is not yet available. **Args:** * `dbt_cloud_credentials`: Credentials for authenticating with dbt Cloud. * `path`: The partial path for the request (e.g. /projects/). Will be appended onto the base URL as determined by the client configuration. * `http_method`: HTTP method to call on the endpoint. * `params`: Query parameters to include in the request. * `json`: JSON serializable body to send in the request. **Returns:** * The body of the response. If the body is valid JSON, then the result of `json.loads` with the body as the input will be returned. Otherwise, the body will be returned directly. 
**Examples:** List projects for an account: ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint @flow def get_projects_flow(): credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789) result = call_dbt_cloud_administrative_api_endpoint( dbt_cloud_credentials=credentials, path="/projects/", http_method="GET", ) return result["data"] get_projects_flow() ``` Create a new job: ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.utils import call_dbt_cloud_administrative_api_endpoint @flow def create_job_flow(): credentials = DbtCloudCredentials(api_key="my_api_key", account_id=123456789) result = call_dbt_cloud_administrative_api_endpoint( dbt_cloud_credentials=credentials, path="/jobs/", http_method="POST", json={ "id": None, "account_id": 123456789, "project_id": 100, "environment_id": 10, "name": "Nightly run", "dbt_version": None, "triggers": {"github_webhook": True, "schedule": True}, "execute_steps": ["dbt run", "dbt test", "dbt source snapshot-freshness"], "settings": {"threads": 4, "target_name": "prod"}, "state": 1, "schedule": { "date": {"type": "every_day"}, "time": {"type": "every_hour", "interval": 1}, }, }, ) return result["data"] create_job_flow() ``` ## Classes ### `DbtCloudAdministrativeApiCallFailed` Raised when a call to dbt Cloud administrative API fails. 
# __init__ Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-core-__init__ # `prefect_dbt.core` *This module is empty or contains only private/internal implementations.* # runner Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-core-runner # `prefect_dbt.core.runner` Runner for dbt commands ## Functions ### `execute_dbt_node` ```python theme={null} execute_dbt_node(task_state: NodeTaskTracker, node_id: str, asset_id: Union[str, None]) ``` Execute a dbt node and wait for its completion. This function will: 1. Set up the task logger 2. Wait for the node to finish using an efficient `threading.Event` 3. Check the node's status and fail if it's in a failure state ## Classes ### `PrefectDbtRunner` A runner for executing dbt commands with Prefect integration. This class enables the invocation of dbt commands while integrating with Prefect's logging and assets capabilities. **Args:** * `manifest`: Optional pre-loaded dbt manifest * `settings`: Optional PrefectDbtSettings instance for configuring dbt * `raise_on_failure`: Whether to raise an error if the dbt command encounters a non-exception failure * `client`: Optional Prefect client instance * `include_compiled_code`: Whether to include compiled code in the asset description * `disable_assets`: Global override for disabling asset generation for dbt nodes. If True, assets will not be created for any dbt nodes, even if the node's prefect config has enable\_assets set to True. * `_force_nodes_as_tasks`: Whether to force each dbt node execution to have a Prefect task representation when `.invoke()` is called outside of a flow or task run **Methods:** #### `get_dbt_event_msg` ```python theme={null} get_dbt_event_msg(event: EventMsg) -> str ``` #### `graph` ```python theme={null} graph(self) -> Graph ``` #### `invoke` ```python theme={null} invoke(self, args: list[str], **kwargs: Any) ``` Invokes a dbt command. Supports the same arguments as `dbtRunner.invoke()`. 
[https://docs.getdbt.com/reference/programmatic-invocations](https://docs.getdbt.com/reference/programmatic-invocations) **Args:** * `args`: List of command line arguments * `**kwargs`: Additional keyword arguments **Returns:** * The result of the dbt command invocation #### `log_level` ```python theme={null} log_level(self) -> EventLevel ``` #### `manifest` ```python theme={null} manifest(self) -> Manifest ``` #### `profiles_dir` ```python theme={null} profiles_dir(self) -> Path ``` #### `project_dir` ```python theme={null} project_dir(self) -> Path ``` #### `project_name` ```python theme={null} project_name(self) -> str ``` #### `target_path` ```python theme={null} target_path(self) -> Path ``` # settings Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-core-settings # `prefect_dbt.core.settings` A class for configuring or automatically discovering settings to be used with PrefectDbtRunner. ## Classes ### `PrefectDbtSettings` dbt settings that directly affect the PrefectDbtRunner. These settings will be collected automatically from their corresponding 'DBT\_'-prefixed environment variables. If a setting is not set in the environment or in the fields of this class, the default value will be used. All other dbt settings should be used as normal, e.g. in the dbt\_project.yml file, env vars, or kwargs to `invoke()`. **Methods:** #### `load_profiles_yml` ```python theme={null} load_profiles_yml(self) -> dict[str, Any] ``` Load and parse the profiles.yml file. **Returns:** * Dict containing the parsed profiles.yml contents **Raises:** * `ValueError`: If profiles.yml is not found #### `resolve_profiles_yml` ```python theme={null} resolve_profiles_yml(self) -> Generator[str, None, None] ``` Context manager that creates a temporary directory with a resolved profiles.yml file. **Args:** * `include_profiles`: Whether to include the resolved profiles.yml in the yield. 
Example: ```python theme={null} settings = PrefectDbtSettings() with settings.resolve_profiles_yml() as temp_dir: # temp_dir contains resolved profiles.yml # use temp_dir for dbt operations # temp_dir is automatically cleaned up ``` #### `validate_for_orchestrator` ```python theme={null} validate_for_orchestrator(self) -> None ``` Validate that configured directories exist and contain expected files. Call this before running dbt operations that require both `profiles_dir` and `project_dir` to be valid on disk. **Raises:** * `ValueError`: If `profiles_dir` or `project_dir` do not exist, or if the expected files are missing. # utilities Source: https://docs.prefect.io/integrations/prefect-dbt/api-ref/prefect_dbt-utilities # `prefect_dbt.utilities` Utility functions for prefect-dbt ## Functions ### `find_profiles_dir` ```python theme={null} find_profiles_dir() -> Path ``` Find the directory containing profiles.yml. Returns the current working directory if profiles.yml exists there, otherwise returns the default .dbt directory in the user's home. **Returns:** * Directory containing profiles.yml ### `replace_with_env_var_call` ```python theme={null} replace_with_env_var_call(placeholder: str, value: Any) -> str ``` A block reference replacement function that returns template text for an env var call. **Args:** * `placeholder`: The placeholder text to replace * `value`: The value to replace the placeholder with **Returns:** * The template text for an env var call ### `format_resource_id` ```python theme={null} format_resource_id(adapter_type: str, relation_name: str) -> str ``` Format a relation name to be a valid asset key. **Args:** * `adapter_type`: The type of adapter used to connect to the database * `relation_name`: The name of the relation to format **Returns:** * The formatted asset key ### `kwargs_to_args` ```python theme={null} kwargs_to_args(kwargs: dict, args: Optional[list[str]] = None) -> list[str] ``` Convert a dictionary of kwargs to a list of args in the dbt CLI format. 
If args are provided, they take priority over kwargs when conflicts exist. **Args:** * `kwargs`: A dictionary of kwargs. * `args`: Optional list of existing args that take priority over kwargs. **Returns:** * A list of args. # prefect-dbt Source: https://docs.prefect.io/integrations/prefect-dbt/index With `prefect-dbt`, you can trigger and observe dbt Cloud jobs, execute dbt Core CLI commands, and incorporate other tools, such as [Snowflake](/integrations/prefect-snowflake/index), into your dbt runs. Prefect provides a global view of the state of your workflows and allows you to take action based on state changes. Prefect integrations may provide pre-built [blocks](/v3/develop/blocks), [flows](/v3/develop/write-flows), or [tasks](/v3/develop/write-tasks) for interacting with external systems. Block types in this library allow you to do things such as run a dbt Cloud job or execute a dbt Core command. ## Getting started ### Prerequisites * A [dbt Cloud account](https://cloud.getdbt.com/) if using dbt Cloud. * A [dbt adapter](https://docs.getdbt.com/docs/supported-data-platforms) for your target database if using dbt Core (for example, `dbt-duckdb`, `dbt-snowflake`, or `dbt-bigquery`). Install it alongside `prefect-dbt`: ```bash theme={null} pip install "prefect[dbt]" dbt-duckdb # for DuckDB pip install "prefect[dbt]" dbt-snowflake # for Snowflake pip install "prefect[dbt]" dbt-bigquery # for BigQuery ``` `prefect-dbt` depends on `dbt-core` but does **not** include a database adapter. Without one you will see an error like `Could not find adapter type !` at runtime. The `[snowflake]`, `[bigquery]`, and `[postgres]` extras for `prefect-dbt` **do** include the corresponding dbt adapter alongside Prefect Block types, but adapters without an extra (such as `dbt-duckdb`) must be installed separately. ### Install `prefect-dbt` The following command will install a version of `prefect-dbt` compatible with your installed version of `prefect`. 
If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[dbt]" ``` Upgrade to the latest versions of `prefect` and `prefect-dbt`: ```bash theme={null} pip install -U "prefect[dbt]" ``` If necessary, see [additional installation options for dbt Core with BigQuery, Snowflake, and Postgres](#additional-installation-options). ### Register newly installed block types Register the block types in the `prefect_dbt` module to make them available for use. ```bash theme={null} prefect block register -m prefect_dbt ``` ## dbt Cloud If you have an existing dbt Cloud job, use the pre-built flow `run_dbt_cloud_job` to trigger a job run and wait until the job run is finished. If some nodes fail, `run_dbt_cloud_job` can efficiently retry the unsuccessful nodes. Prior to running this flow, save your dbt Cloud credentials to a DbtCloudCredentials block and create a dbt Cloud Job block: ### Save dbt Cloud credentials to a block Blocks can be [created through code](/v3/develop/blocks) or through the UI. To create a dbt Cloud Credentials block: 1. Log into your [dbt Cloud account](https://cloud.getdbt.com/settings/profile). 2. Click **API Tokens** on the sidebar. 3. Copy a Service Token. 4. Copy the account ID from the URL: `https://cloud.getdbt.com/settings/accounts/`. 5. Create and run the following script, replacing the placeholders: ```python theme={null} from prefect_dbt.cloud import DbtCloudCredentials DbtCloudCredentials( api_key="API-KEY-PLACEHOLDER", account_id="ACCOUNT-ID-PLACEHOLDER" ).save("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") ``` ### Create a dbt Cloud job block 1. In dbt Cloud, click on **Deploy** -> **Jobs**. 2. Select a job. 3. Copy the job ID from the URL: `https://cloud.getdbt.com/deploy//projects//jobs/` 4. Create and run the following script, replacing the placeholders. 
```python theme={null} from prefect_dbt.cloud import DbtCloudCredentials, DbtCloudJob dbt_cloud_credentials = DbtCloudCredentials.load("CREDENTIALS-BLOCK-PLACEHOLDER") dbt_cloud_job = DbtCloudJob( dbt_cloud_credentials=dbt_cloud_credentials, job_id="JOB-ID-PLACEHOLDER" ).save("JOB-BLOCK-NAME-PLACEHOLDER") ``` ### Run a dbt Cloud job and wait for completion ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudJob from prefect_dbt.cloud.jobs import run_dbt_cloud_job import asyncio @flow async def run_dbt_job_flow(): result = await run_dbt_cloud_job( dbt_cloud_job=await DbtCloudJob.load("JOB-BLOCK-NAME-PLACEHOLDER"), targeted_retries=0, ) return result if __name__ == "__main__": asyncio.run(run_dbt_job_flow()) ``` ## dbt Core ### prefect-dbt 0.7.0 and later Versions 0.7.0 and later of `prefect-dbt` include the `PrefectDbtRunner` class, which provides an improved interface for running dbt Core commands with better logging, failure handling, and automatic asset lineage. The `PrefectDbtRunner` is inspired by the `DbtRunner` from dbt Core, and its `invoke` method accepts the same arguments. Refer to the [`DbtRunner` documentation](https://docs.getdbt.com/reference/programmatic-invocations) for more information on how to call `invoke`. Basic usage: ```python theme={null} from prefect import flow from prefect_dbt import PrefectDbtRunner @flow def run_dbt(): PrefectDbtRunner().invoke(["build"]) if __name__ == "__main__": run_dbt() ``` When calling `.invoke()` in a flow or task, each node in dbt's execution graph is reflected as a task in Prefect's execution graph. Logs from each node will belong to the corresponding task, and each task's state is determined by the state of that node's execution. 
```bash theme={null} 15:54:59.119 | INFO | Flow run 'imposing-partridge' - Found 8 models, 3 seeds, 18 data tests, 543 macros 15:54:59.134 | INFO | Flow run 'imposing-partridge' - 15:54:59.148 | INFO | Flow run 'imposing-partridge' - Concurrency: 1 threads (target='dev') 15:54:59.164 | INFO | Flow run 'imposing-partridge' - 15:54:59.665 | INFO | Task run 'model my_first_dbt_model' - 1 of 29 OK created sql table model main.my_first_dbt_model ..................... [OK in 0.18s] 15:54:59.671 | INFO | Task run 'model my_first_dbt_model' - Finished in state Completed() ... 15:55:02.373 | ERROR | Task run 'model product_metrics' - Runtime Error in model product_metrics (models/marts/product/product_metrics.sql) Binder Error: Values list "o" does not have a column named "product_id" LINE 47: on p.product_id = o.product_id 15:55:02.857 | ERROR | Task run 'model product_metrics' - Finished in state Failed('Task run encountered an exception Exception: Node model.demo.product_metrics finished with status error') ``` The task runs created by calling `.invoke()` run separately from dbt Core, and do not affect dbt's execution behavior. These tasks do not persist results and cannot be cached. Use [dbt's native retry functionality](https://docs.getdbt.com/reference/commands/retry) in combination with [runtime data from `prefect`](/v3/how-to-guides/workflows/access-runtime-info) to retry failed nodes. ```python theme={null} from prefect import flow from prefect.runtime.flow_run import get_run_count from prefect_dbt import PrefectDbtRunner @flow(retries=2) def run_dbt(): runner = PrefectDbtRunner() if get_run_count() == 1: runner.invoke(["build"]) else: runner.invoke(["retry"]) if __name__ == "__main__": run_dbt() ``` #### Assets Prefect Cloud maintains a graph of [assets](/v3/concepts/assets), objects produced by your workflows. Any dbt seed, source or model will appear on your asset graph in Prefect Cloud once it has been executed using the `PrefectDbtRunner`. 
The upstream dependencies of an asset materialized by `prefect-dbt` are derived from the `depends_on` field in dbt's `manifest.json`. The asset's `key` will be its corresponding dbt resource's `relation_name`. The `name` and `description` asset properties are populated by a dbt resource's name and description. The `owners` asset property is populated if there is data assigned to the `owner` key under a resource's `meta` config. ```yaml theme={null} models: - name: product_metrics description: "Product metrics and categorization" config: meta: owner: "kevin-g" ``` Asset metadata is collected from the result of the node's execution. ```json theme={null} { "node_path": "marts/product/product_metrics.sql", "node_name": "product_metrics", "unique_id": "model.demo.product_metrics", "resource_type": "model", "materialized": "table", "node_status": "error", "node_started_at": "2025-06-26T20:55:05.661126", "node_finished_at": "2025-06-26T20:55:05.733257", "meta": { "owner": "kevin-g" }, "node_relation": { "database": "dev", "schema": "main_marts", "alias": "product_metrics", "relation_name": "\"dev\".\"main_marts\".\"product_metrics\"" } } ``` Optionally, the compiled code of a dbt model can be appended to the asset description. ```python theme={null} from prefect import flow from prefect_dbt import PrefectDbtRunner @flow def run_dbt(): PrefectDbtRunner(include_compiled_code=True).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### dbt settings The `PrefectDbtSettings` class, based on Pydantic's `BaseSettings` class, automatically detects `DBT_`-prefixed environment variables that have a direct effect on the `PrefectDbtRunner` class. If no environment variables are set, dbt's defaults are used. Provide a `PrefectDbtSettings` instance to `PrefectDbtRunner` to customize dbt settings or override environment variables. 
```python theme={null} from prefect import flow from prefect_dbt import PrefectDbtRunner, PrefectDbtSettings @flow def run_dbt(): PrefectDbtRunner( settings=PrefectDbtSettings( project_dir="test", profiles_dir="examples/run_dbt" ) ).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### Logging The `PrefectDbtRunner` class maps all dbt log levels to standard Python logging levels, so filtering for log levels like `WARNING` or `ERROR` in the Prefect UI applies to dbt's logs. By default, the logging level used by dbt is Prefect's logging level, which can be configured using the `PREFECT_LOGGING_LEVEL` Prefect setting. The dbt logging level can be set independently from Prefect's by using the `DBT_LOG_LEVEL` environment variable, setting `log_level` in `PrefectDbtSettings`, or passing the `--log-level` flag or `log_level` kwarg to `.invoke()`. Only logging levels of higher severity (more restrictive) than Prefect's logging level will have an effect. ```python theme={null} from dbt_common.events.base_types import EventLevel from prefect import flow from prefect_dbt import PrefectDbtRunner, PrefectDbtSettings @flow def run_dbt(): PrefectDbtRunner( settings=PrefectDbtSettings( project_dir="test", profiles_dir="examples/run_dbt", log_level=EventLevel.ERROR, # explicitly choose a higher log level for dbt ) ).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### `profiles.yml` templating The `PrefectDbtRunner` class supports templating in your `profiles.yml` file, allowing you to reference Prefect blocks and variables that will be resolved at runtime. This enables you to store sensitive credentials securely using Prefect blocks, and configure different targets based on the Prefect workspace. For example, a Prefect variable called `target` can have a different value in development (`dev`) and production (`prod`) workspaces. 
This allows you to use the same `profiles.yml` file to automatically reference a local DuckDB instance in development and a Snowflake instance in production. ```yaml theme={null} example: outputs: dev: type: duckdb path: dev.duckdb threads: 1 prod: type: snowflake account: "{{ prefect.blocks.snowflake-credentials.warehouse-access.account }}" user: "{{ prefect.blocks.snowflake-credentials.warehouse-access.user }}" password: "{{ prefect.blocks.snowflake-credentials.warehouse-access.password }}" database: "{{ prefect.blocks.snowflake-connector.prod-connector.database }}" schema: "{{ prefect.blocks.snowflake-connector.prod-connector.schema }}" warehouse: "{{ prefect.blocks.snowflake-connector.prod-connector.warehouse }}" threads: 4 target: "{{ prefect.variables.target }}" ``` #### Failure handling By default, any dbt node execution failures cause the entire dbt run to raise an exception with a message containing detailed information about the failure. ``` Failures detected during invocation of dbt command 'build': Test not_null_my_first_dbt_model_id failed with message: "Got 1 result, configured to fail if != 0" ``` The `PrefectDbtRunner`'s `raise_on_failure` option can be set to `False` to prevent failures in dbt from causing the failure of the flow or task in which `.invoke()` is called. ```python theme={null} from prefect import flow from prefect_dbt import PrefectDbtRunner @flow def run_dbt(): PrefectDbtRunner( raise_on_failure=False # Failed tests will not fail the flow run ).invoke(["build"]) if __name__ == "__main__": run_dbt() ``` #### Native dbt configuration You can disable automatic asset lineage detection for all resources in your dbt project config, or for specific resources in their own config: ```yaml theme={null} prefect: enable_assets: False ``` ### prefect-dbt 0.6.6 and earlier `prefect-dbt` supports a couple of ways to run dbt Core commands. 
A `DbtCoreOperation` block will run the commands as shell commands, while other tasks use dbt's [Programmatic Invocation](#programmatic-invocation). Optionally, specify the `project_dir`. If `profiles_dir` is not set, the `DBT_PROFILES_DIR` environment variable will be used. If `DBT_PROFILES_DIR` is not set, the default directory `$HOME/.dbt/` will be used. #### Use an existing profile If you have an existing dbt `profiles.yml` file, specify the `profiles_dir` where the file is located: ```python theme={null} from prefect import flow from prefect_dbt.cli.commands import DbtCoreOperation @flow def trigger_dbt_flow() -> str: result = DbtCoreOperation( commands=["pwd", "dbt debug", "dbt run"], project_dir="PROJECT-DIRECTORY-PLACEHOLDER", profiles_dir="PROFILES-DIRECTORY-PLACEHOLDER" ).run() return result if __name__ == "__main__": trigger_dbt_flow() ``` If you are already using Prefect blocks such as the [Snowflake Connector block](/integrations/prefect-snowflake/index), you can use those blocks to [create a new `profiles.yml` with a `DbtCliProfile` block](#create-a-new-profile-with-blocks). ##### Use environment variables with Prefect secret blocks If you use environment variables in `profiles.yml`, set a Prefect Secret block as an environment variable: ```python theme={null} import os from prefect.blocks.system import Secret secret_block = Secret.load("DBT_PASSWORD_PLACEHOLDER") # Access the stored secret DBT_PASSWORD = secret_block.get() os.environ["DBT_PASSWORD"] = DBT_PASSWORD ``` This example `profiles.yml` file could then access that variable. ```yaml theme={null} profile: target: prod outputs: prod: type: postgres host: 127.0.0.1 user: dbt_user # IMPORTANT: Make sure to quote the entire Jinja string here password: "{{ env_var('DBT_PASSWORD') }}" ``` #### Create a new `profiles.yml` file with blocks If you don't have a `profiles.yml` file, you can use a DbtCliProfile block to create `profiles.yml`. Then, specify `profiles_dir` where `profiles.yml` will be written. 
Here's example code with placeholders: ```python theme={null} from prefect import flow from prefect_dbt.cli import DbtCliProfile, DbtCoreOperation @flow def trigger_dbt_flow(): dbt_cli_profile = DbtCliProfile.load("DBT-CLI-PROFILE-BLOCK-PLACEHOLDER") with DbtCoreOperation( commands=["dbt debug", "dbt run"], project_dir="PROJECT-DIRECTORY-PLACEHOLDER", profiles_dir="PROFILES-DIRECTORY-PLACEHOLDER", dbt_cli_profile=dbt_cli_profile, ) as dbt_operation: dbt_process = dbt_operation.trigger() # do other things before waiting for completion dbt_process.wait_for_completion() result = dbt_process.fetch_result() return result if __name__ == "__main__": trigger_dbt_flow() ``` **Supplying the `dbt_cli_profile` argument will overwrite existing `profiles.yml` files** If you already have a `profiles.yml` file in the specified `profiles_dir`, the file will be overwritten. If you do not specify a profiles directory, `profiles.yml` at `~/.dbt/` will be overwritten. Visit the [SDK reference](/integrations/prefect-dbt/api-ref/prefect_dbt-cli-configs-base) in the side navigation to see other built-in `TargetConfigs` blocks. If the desired service profile is not available, you can build one from the generic `TargetConfigs` class. #### Programmatic Invocation `prefect-dbt` has some pre-built tasks that use dbt's [programmatic invocation](https://docs.getdbt.com/reference/programmatic-invocations). 
For example: ```python theme={null} from prefect import flow from prefect_dbt.cli.commands import trigger_dbt_cli_command, dbt_build_task @flow def dbt_build_flow(): trigger_dbt_cli_command( command="dbt deps", project_dir="/Users/test/my_dbt_project_dir", ) dbt_build_task( project_dir="/Users/test/my_dbt_project_dir", create_summary_artifact=True, summary_artifact_key="dbt-build-task-summary", extra_command_args=["--select", "foo_model"] ) if __name__ == "__main__": dbt_build_flow() ``` See the [SDK reference](/integrations/prefect-dbt/api-ref/prefect_dbt-cli-commands) for other pre-built tasks. ##### Create a summary artifact These pre-built tasks can also create artifacts. These artifacts have extra information about dbt Core runs, such as messages and compiled code for nodes that fail or have errors. prefect-dbt Summary Artifact #### BigQuery CLI profile block example To create dbt Core target config and profile blocks for BigQuery: 1. Save and load a `GcpCredentials` block. 2. Determine the schema / dataset you want to use in BigQuery. 3. Create a short script, replacing the placeholders. ```python theme={null} from prefect_gcp.credentials import GcpCredentials from prefect_dbt.cli import BigQueryTargetConfigs, DbtCliProfile credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") target_configs = BigQueryTargetConfigs( schema="SCHEMA-NAME-PLACEHOLDER",  # also known as dataset credentials=credentials, ) target_configs.save("TARGET-CONFIGS-BLOCK-NAME-PLACEHOLDER") dbt_cli_profile = DbtCliProfile( name="PROFILE-NAME-PLACEHOLDER", target="TARGET-NAME-PLACEHOLDER", target_configs=target_configs, ) dbt_cli_profile.save("DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER") ``` To create a dbt Core operation block: 1. Determine the dbt commands you want to run. 2. Create a short script, replacing the placeholders. 
```python theme={null}
from prefect_dbt.cli import DbtCliProfile, DbtCoreOperation


dbt_cli_profile = DbtCliProfile.load("DBT-CLI-PROFILE-BLOCK-NAME-PLACEHOLDER")

dbt_core_operation = DbtCoreOperation(
    commands=["DBT-CLI-COMMANDS-PLACEHOLDER"],
    dbt_cli_profile=dbt_cli_profile,
    overwrite_profiles=True,
)
dbt_core_operation.save("DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER")
```

Load the saved block that holds your credentials:

```python theme={null}
from prefect_dbt.cli import DbtCoreOperation


DbtCoreOperation.load("DBT-CORE-OPERATION-BLOCK-NAME-PLACEHOLDER")
```

## Resources

For assistance using dbt, consult the [dbt documentation](https://docs.getdbt.com/docs/building-a-dbt-project/documentation).

Refer to the `prefect-dbt` [SDK reference](/integrations/prefect-dbt/api-ref/prefect_dbt-utilities) to explore all the capabilities of the `prefect-dbt` library.

### Additional installation options

Additional installation options for dbt Core with BigQuery, Snowflake, and Postgres are shown below.

#### Additional capabilities for dbt Core and Snowflake profiles

First install the main library compatible with your Prefect version:

```bash theme={null}
pip install "prefect[dbt]"
```

Then install the additional capabilities you need.

```bash theme={null}
pip install "prefect-dbt[snowflake]"
```

#### Additional capabilities for dbt Core and BigQuery profiles

```bash theme={null}
pip install "prefect-dbt[bigquery]"
```

#### Additional capabilities for dbt Core and Postgres profiles

```bash theme={null}
pip install "prefect-dbt[postgres]"
```

Or, install all of the extras.

```bash theme={null}
pip install -U "prefect-dbt[all_extras]"
```

# containers

Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-containers

# `prefect_docker.containers`

Integrations with Docker Containers.
## Functions ### `create_docker_container` ```python theme={null} create_docker_container(image: str, command: Optional[Union[str, List[str]]] = None, name: Optional[str] = None, detach: Optional[bool] = None, entrypoint: Optional[Union[str, List[str]]] = None, environment: Optional[Union[Dict[str, str], List[str]]] = None, docker_host: Optional[DockerHost] = None, **create_kwargs: Dict[str, Any]) -> Container ``` Create a container without starting it. Similar to docker create. **Args:** * `image`: The image to run. * `command`: The command(s) to run in the container. * `name`: The name for this container. * `detach`: Run container in the background. * `docker_host`: Settings for interacting with a Docker host. * `entrypoint`: The entrypoint for the container. * `environment`: Environment variables to set inside the container, as a dictionary or a list of strings in the format \["SOMEVARIABLE=xxx"]. * `**create_kwargs`: Additional keyword arguments to pass to [`client.containers.create`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.ContainerCollection.create). **Returns:** * A Docker Container object. **Examples:** Create a container with the Prefect image. ```python theme={null} from prefect import flow from prefect_docker.containers import create_docker_container @flow def create_docker_container_flow(): container = create_docker_container( image="prefecthq/prefect", command="echo 'hello world!'" ) create_docker_container_flow() ``` ### `get_docker_container_logs` ```python theme={null} get_docker_container_logs(container_id: str, docker_host: Optional[DockerHost] = None, **logs_kwargs: Dict[str, Any]) -> str ``` Get logs from this container. Similar to the docker logs command. **Args:** * `container_id`: The container ID to pull logs from. * `docker_host`: Settings for interacting with a Docker host. 
* `**logs_kwargs`: Additional keyword arguments to pass to [`client.containers.get(container_id).logs`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.logs). **Returns:** * The Container's logs. **Examples:** Gets logs from a container with an ID that starts with "c157". ```python theme={null} from prefect import flow from prefect_docker.containers import get_docker_container_logs @flow def get_docker_container_logs_flow(): logs = get_docker_container_logs(container_id="c157") return logs get_docker_container_logs_flow() ``` ### `start_docker_container` ```python theme={null} start_docker_container(container_id: str, docker_host: Optional[DockerHost] = None, **start_kwargs: Dict[str, Any]) -> Container ``` Start this container. Similar to the docker start command. **Args:** * `container_id`: The container ID to start. * `docker_host`: Settings for interacting with a Docker host. * `**start_kwargs`: Additional keyword arguments to pass to [`client.containers.get(container_id).start`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.start). **Returns:** * The Docker Container object. **Examples:** Start a container with an ID that starts with "c157". ```python theme={null} from prefect import flow from prefect_docker.containers import start_docker_container @flow def start_docker_container_flow(): container = start_docker_container(container_id="c157") return container start_docker_container_flow() ``` ### `stop_docker_container` ```python theme={null} stop_docker_container(container_id: str, docker_host: Optional[DockerHost] = None, **stop_kwargs: Dict[str, Any]) -> Container ``` Stops a container. Similar to the docker stop command. **Args:** * `container_id`: The container ID to stop. * `docker_host`: Settings for interacting with a Docker host. 
* `**stop_kwargs`: Additional keyword arguments to pass to [`client.containers.get(container_id).stop`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.stop).

**Returns:**

* The Docker Container object.

**Examples:**

Stop a container with an ID that starts with "c157".

```python theme={null}
from prefect import flow
from prefect_docker.containers import stop_docker_container


@flow
def stop_docker_container_flow():
    container = stop_docker_container(container_id="c157")
    return container

stop_docker_container_flow()
```

### `remove_docker_container`

```python theme={null}
remove_docker_container(container_id: str, docker_host: Optional[DockerHost] = None, **remove_kwargs: Dict[str, Any]) -> Container
```

Remove this container. Similar to the docker rm command.

**Args:**

* `container_id`: The container ID to remove.
* `docker_host`: Settings for interacting with a Docker host.
* `**remove_kwargs`: Additional keyword arguments to pass to [`client.containers.get(container_id).remove`](https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.remove).

**Returns:**

* The Docker Container object.

**Examples:**

Removes a container with an ID that starts with "c157".

```python theme={null}
from prefect import flow
from prefect_docker.containers import remove_docker_container


@flow
def remove_docker_container_flow():
    container = remove_docker_container(container_id="c157")
    return container

remove_docker_container_flow()
```

# credentials

Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-credentials

# `prefect_docker.credentials`

Module containing docker credentials.

## Classes

### `DockerRegistryCredentials`

Block used to manage credentials for interacting with a Docker Registry.

**Examples:**

Log in to a Docker registry.
```python theme={null} from prefect_docker import DockerHost, DockerRegistryCredentials docker_host = DockerHost() docker_registry_credentials = DockerRegistryCredentials.load("BLOCK_NAME") with docker_host.get_client() as client: docker_registry_credentials.login(client) ``` **Methods:** #### `login` ```python theme={null} login(self, client: docker.DockerClient) ``` Authenticates a given Docker client with the configured Docker registry. **Args:** * `client`: A Docker Client. # __init__ Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-deployments-__init__ # `prefect_docker.deployments` *This module is empty or contains only private/internal implementations.* # steps Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-deployments-steps # `prefect_docker.deployments.steps` Prefect deployment steps for building and pushing Docker images. These steps can be used in a `prefect.yaml` file to define the default build steps for a group of deployments, or they can be used to define the build step for a specific deployment. !!! example Build a Docker image before deploying a flow: ```yaml theme={null} build: - prefect_docker.deployments.steps.build_docker_image: id: build-image requires: prefect-docker image_name: repo-name/image-name tag: dev push: - prefect_docker.deployments.steps.push_docker_image: requires: prefect-docker image_name: "{{ build-image.image_name }}" tag: "{{ build-image.tag }}" ``` ## Functions ### `cacheable` ```python theme={null} cacheable(func: Callable[P, T]) -> Callable[P, T] ``` ### `build_docker_image` ```python theme={null} build_docker_image(image_name: str, dockerfile: str = 'Dockerfile', tag: str | None = None, additional_tags: list[str] | None = None, ignore_cache: bool = False, persist_dockerfile: bool = False, dockerfile_output_path: str = 'Dockerfile.generated', **build_kwargs: Any) -> BuildDockerImageResult ``` Builds a Docker image for a Prefect deployment. 
Can be used within a `prefect.yaml` file to build a Docker image prior to creating or updating a deployment.

**Args:**

* `image_name`: The name of the Docker image to build, including the registry and repository.
* `dockerfile`: The path to the Dockerfile used to build the image. If "auto" is passed, a temporary Dockerfile will be created to build the image.
* `tag`: The tag to apply to the built image.
* `additional_tags`: Additional tags on the image, in addition to `tag`, to apply to the built image.
* `persist_dockerfile`: If True and dockerfile="auto", the generated Dockerfile will be saved instead of deleted after the build.
* `dockerfile_output_path`: Optional path where the auto-generated Dockerfile should be saved (e.g., "Dockerfile.generated"). Only used if `persist_dockerfile` is True.
* `**build_kwargs`: Additional keyword arguments to pass to Docker when building the image. Available options can be found in the [`docker-py`](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build) documentation.

**Returns:**

* A dictionary containing the image name and tag of the built image.
**Examples:**

Build a Docker image prior to creating a deployment:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
```

Build a Docker image with multiple tags:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
      additional_tags:
        - v0.1.0
        - dac9ccccedaa55a17916eef14f95cc7bdd3c8199
```

Build a Docker image using an auto-generated Dockerfile:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
      dockerfile: auto
```

Build a Docker image for a different platform:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
      dockerfile: Dockerfile
      platform: amd64
```

Save the auto-generated Dockerfile to disk:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
      dockerfile: auto
      persist_dockerfile: true
      dockerfile_output_path: Dockerfile.generated
```

### `push_docker_image`

```python theme={null}
push_docker_image(image_name: str, tag: str | None = None, credentials: dict[str, Any] | None = None, additional_tags: list[str] | None = None, ignore_cache: bool = False) -> PushDockerImageResult
```

Push a Docker image to a remote registry.

**Args:**

* `image_name`: The name of the Docker image to push, including the registry and repository.
* `tag`: The tag of the Docker image to push.
* `credentials`: A dictionary containing the username, password, and URL for the registry to push the image to.
* `additional_tags`: Additional tags on the image, in addition to `tag`, to apply to the built image.

**Returns:**

* A dictionary containing the image name and tag of the pushed image.
**Examples:**

Build and push a Docker image to a private repository:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      id: build-image
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
      dockerfile: auto

push:
  - prefect_docker.deployments.steps.push_docker_image:
      requires: prefect-docker
      image_name: "{{ build-image.image_name }}"
      tag: "{{ build-image.tag }}"
      credentials: "{{ prefect.blocks.docker-registry-credentials.dev-registry }}"
```

Build and push a Docker image to a private repository with multiple tags:

```yaml theme={null}
build:
  - prefect_docker.deployments.steps.build_docker_image:
      id: build-image
      requires: prefect-docker
      image_name: repo-name/image-name
      tag: dev
      dockerfile: auto
      additional_tags: [ v0.1.0, dac9ccccedaa55a17916eef14f95cc7bdd3c8199 ]

push:
  - prefect_docker.deployments.steps.push_docker_image:
      requires: prefect-docker
      image_name: "{{ build-image.image_name }}"
      tag: "{{ build-image.tag }}"
      credentials: "{{ prefect.blocks.docker-registry-credentials.dev-registry }}"
      additional_tags: "{{ build-image.additional_tags }}"
```

## Classes

### `BuildDockerImageResult`

The result of a `build_docker_image` step.

**Attributes:**

* `image_name`: The name of the built image.
* `tag`: The tag of the built image.
* `image`: The name and tag of the built image.
* `image_id`: The ID of the built image.
* `additional_tags`: The additional tags on the image, in addition to `tag`.

### `PushDockerImageResult`

The result of a `push_docker_image` step.

**Attributes:**

* `image_name`: The name of the pushed image.
* `tag`: The tag of the pushed image.
* `image`: The name and tag of the pushed image.
* `additional_tags`: The additional tags on the image, in addition to `tag`.
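The `{{ build-image.image_name }}` placeholders in the push step above are resolved from the outputs of earlier steps, looked up by the step's `id`. A stdlib-only sketch of that substitution mechanism (an illustration of the idea, not Prefect's actual template engine):

```python
import re


def resolve_step_templates(value: str, step_outputs: dict) -> str:
    """Replace "{{ step-id.field }}" placeholders with prior step outputs."""
    def replace(match: "re.Match") -> str:
        step_id, field = match.group(1), match.group(2)
        return step_outputs[step_id][field]

    return re.sub(r"\{\{\s*([\w-]+)\.(\w+)\s*\}\}", replace, value)


# Outputs recorded for the step with id "build-image":
outputs = {"build-image": {"image_name": "repo-name/image-name", "tag": "dev"}}

resolve_step_templates("{{ build-image.image_name }}:{{ build-image.tag }}", outputs)
```

This is why the push step example gives the build step an explicit `id`: without it, there is no key under which later steps can reference its `image_name` and `tag` outputs.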
# __init__ Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-experimental-__init__ # `prefect_docker.experimental` *This module is empty or contains only private/internal implementations.* # decorators Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-experimental-decorators # `prefect_docker.experimental.decorators` ## Functions ### `docker` ```python theme={null} docker(work_pool: str, **job_variables: Any) -> Callable[[Flow[P, R]], Flow[P, R]] ``` Decorator that binds execution of a flow to a Docker work pool **Args:** * `work_pool`: The name of the Docker work pool to use * `**job_variables`: Additional job variables to use for infrastructure configuration # host Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-host # `prefect_docker.host` Module containing Docker host settings. ## Classes ### `DockerHost` Block used to manage settings for interacting with a Docker host. **Attributes:** * `base_url`: URL to the Docker server, e.g. `unix\:///var/run/docker.sock` or `tcp\://127.0.0.1\:1234`. If this is not set, the client will be configured from environment variables. * `version`: The version of the API to use. Set to auto to automatically detect the server's version. * `timeout`: Default timeout for API calls, in seconds. * `max_pool_size`: The maximum number of connections to save in the pool. * `client_kwargs`: Additional keyword arguments to pass to `docker.from_env()` or `DockerClient`. **Examples:** Get a Docker Host client. ```python theme={null} from prefect_docker import DockerHost docker_host = DockerHost( base_url="tcp://127.0.0.1:1234", max_pool_size=4 ) with docker_host.get_client() as client: ... # Use the client for Docker operations ``` **Methods:** #### `get_client` ```python theme={null} get_client(self) -> docker.DockerClient ``` Gets a Docker Client to communicate with a Docker host. **Returns:** * A Docker Client. 
# images Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-images # `prefect_docker.images` Integrations with Docker Images. ## Functions ### `pull_docker_image` ```python theme={null} pull_docker_image(repository: str, tag: Optional[str] = None, platform: Optional[str] = None, all_tags: bool = False, docker_host: Optional[DockerHost] = None, docker_registry_credentials: Optional[DockerRegistryCredentials] = None, **pull_kwargs: Dict[str, Any]) -> Union[Image, List[Image]] ``` Pull an image of the given name and return it. Similar to the docker pull command. If all\_tags is set, the tag parameter is ignored and all image tags will be pulled. **Args:** * `repository`: The repository to pull. * `tag`: The tag to pull; if not provided, it is set to latest. * `platform`: Platform in the format os\[/arch\[/variant]]. * `all_tags`: Pull all image tags which will return a list of Images. * `docker_host`: Settings for interacting with a Docker host; if not provided, will automatically instantiate a `DockerHost` from env. * `docker_registry_credentials`: Docker credentials used to log in to a registry before pulling the image. * `**pull_kwargs`: Additional keyword arguments to pass to `client.images.pull`. **Returns:** * The image that has been pulled, or a list of images if `all_tags` is `True`. **Examples:** Pull prefecthq/prefect image with the tag latest-python3.10. ```python theme={null} from prefect import flow from prefect_docker.images import pull_docker_image @flow def pull_docker_image_flow(): image = pull_docker_image( repository="prefecthq/prefect", tag="latest-python3.10" ) return image pull_docker_image_flow() ``` # types Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-types # `prefect_docker.types` ## Functions ### `assert_volume_str` ```python theme={null} assert_volume_str(volume: str) -> str ``` Validate a Docker volume string and raise `ValueError` if invalid. 
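`assert_volume_str` validates Docker's short volume syntax. The library's exact rules aren't reproduced here, but a stdlib sketch of the kind of check involved (`source:target` with an optional access-mode suffix; the accepted modes below are an assumption):

```python
def check_volume_str(volume: str) -> str:
    """Validate a "source:target" or "source:target:mode" volume string (sketch)."""
    parts = volume.split(":")
    if len(parts) == 3 and parts[2] not in ("ro", "rw"):
        raise ValueError(f"Invalid volume mode in {volume!r}")
    if len(parts) not in (2, 3) or not all(parts[:2]):
        raise ValueError(f"Invalid volume specification: {volume!r}")
    return volume


check_volume_str("/var/run/docker.sock:/var/run/docker.sock")
```

Note that real Docker volume parsing also has to account for named volumes and Windows drive-letter paths, which contain their own colons; this sketch ignores those cases.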
# worker

Source: https://docs.prefect.io/integrations/prefect-docker/api-ref/prefect_docker-worker

# `prefect_docker.worker`

Module containing the Docker worker used for executing flow runs as Docker containers.

To start a Docker worker, run the following command:

```bash theme={null}
prefect worker start --pool 'my-work-pool' --type docker
```

Replace `my-work-pool` with the name of the work pool you want the worker to poll for flow runs.

For more information about work pools and workers, check out the [Prefect docs](https://docs.prefect.io/latest/deploy/infrastructure-concepts).

## Classes

### `ImagePullPolicy`

Enum representing the image pull policy options for a Docker container.

### `DockerWorkerJobConfiguration`

Configuration class used by the Docker worker.

An instance of this class is passed to the Docker worker's `run` method for each flow run. It contains all the information necessary to execute the flow run as a Docker container.

**Attributes:**

* `name`: The name to give to created Docker containers.
* `command`: The command executed in created Docker containers to kick off flow run execution.
* `env`: The environment variables to set in created Docker containers.
* `labels`: The labels to set on created Docker containers.
* `image`: The image reference of a container image to use for created jobs. If not set, the latest Prefect image will be used.
* `image_pull_policy`: The image pull policy to use when pulling images.
* `networks`: Docker networks that created containers should be connected to.
* `network_mode`: The network mode for the created containers (e.g. host, bridge). If `networks` is set, this cannot be set.
* `auto_remove`: If set, containers will be deleted on completion.
* `volumes`: Docker volumes that should be mounted in created containers.
* `stream_output`: If set, the output from created containers will be streamed to local standard output.
* `mem_limit`: Memory limit of created containers.
Accepts a value with a unit identifier (e.g. `100000b`, `1000k`, `128m`, `1g`). If a value is given without a unit, bytes are assumed.

* `memswap_limit`: Total memory (memory + swap), -1 to disable swap. Should only be set if `mem_limit` is also set. If `mem_limit` is set, this defaults to allowing the container to use as much swap as memory. For example, if `mem_limit` is 300m and `memswap_limit` is not set, containers can use 600m in total of memory and swap.
* `privileged`: Give extended privileges to created containers.
* `container_create_kwargs`: Extra keyword arguments to pass to docker-py when creating a container.

**Methods:**

#### `get_extra_hosts`

```python theme={null}
get_extra_hosts(self, docker_client: DockerClient) -> Optional[dict[str, str]]
```

A host.docker.internal -> host-gateway mapping is necessary for communicating with the API on Linux machines. Docker Desktop on macOS already provides this mapping automatically.

#### `get_network_mode`

```python theme={null}
get_network_mode(self) -> Optional[str]
```

Returns the network mode to use for the container based on the configured options and the platform.

#### `prepare_for_flow_run`

```python theme={null}
prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: 'str | None' = None, worker_id: 'UUID | None' = None)
```

Prepares the flow run by setting the image, labels, and name attributes.

### `DockerWorkerResult`

Contains information about a completed Docker container.

### `DockerWorker`

Prefect worker that executes flow runs within Docker containers.

**Methods:**

#### `kill_infrastructure`

```python theme={null}
kill_infrastructure(self, infrastructure_pid: str, configuration: DockerWorkerJobConfiguration, grace_seconds: int = 30) -> None
```

Kill a Docker container.

**Args:**

* `infrastructure_pid`: The infrastructure identifier in format `docker_host_base_url:container_id`.
* `configuration`: The job configuration (not used for Docker but kept for API compatibility). * `grace_seconds`: Time to allow for graceful shutdown before force killing. **Raises:** * `InfrastructureNotFound`: If the container doesn't exist. #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: DockerWorkerJobConfiguration, task_status: Optional[anyio.abc.TaskStatus[str]] = None) -> DockerWorkerResult ``` Executes a flow run within a Docker container and waits for the flow run to complete. #### `setup` ```python theme={null} setup(self) ``` # prefect-docker Source: https://docs.prefect.io/integrations/prefect-docker/index The `prefect-docker` library is required to create deployments that will submit runs to most Prefect work pool infrastructure types. ## Getting started ### Prerequisites * [Docker installed](https://www.docker.com/) and running. ### Install `prefect-docker` The following command will install a version of `prefect-docker` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[docker]" ``` Upgrade to the latest versions of `prefect` and `prefect-docker`: ```bash theme={null} pip install -U "prefect[docker]" ``` ### Examples See the Prefect [Workers docs](/v3/how-to-guides/deployment_infra/docker) to learn how to create and run deployments that use Docker. ## Resources For assistance using Docker, consult the [Docker documentation](https://docs.docker.com/). Refer to the `prefect-docker` [SDK documentation](/integrations/prefect-docker/api-ref/prefect_docker-containers) to explore all the capabilities of the `prefect-docker` library. 
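The Docker worker reference above notes that `mem_limit` accepts values like `128m` or `1g`, with bytes assumed when no unit is given. A stdlib sketch of that parsing rule (an illustration of the documented format, not docker-py's implementation):

```python
def parse_mem_limit(value) -> int:
    """Convert a limit like "128m" or "1g" to bytes; bare numbers are bytes."""
    units = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}
    if isinstance(value, int):
        return value
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # no unit identifier: bytes are assumed


parse_mem_limit("128m")
```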
# credentials

Source: https://docs.prefect.io/integrations/prefect-email/api-ref/prefect_email-credentials

# `prefect_email.credentials`

Credential classes used to perform authenticated interactions with email services.

## Classes

### `SMTPType`

Protocols used to secure email transmissions.

### `SMTPServer`

Server used to send email.

### `EmailServerCredentials`

Block used to manage generic email server authentication. It is recommended that you use a [Google App Password](https://support.google.com/accounts/answer/185833) if you use Gmail.

**Attributes:**

* `username`: The username to use for authentication to the server. Unnecessary if SMTP login is not required.
* `password`: The password to use for authentication to the server. Unnecessary if SMTP login is not required.
* `smtp_server`: Either the hostname of the SMTP server, or one of the keys from the built-in SMTPServer Enum members, like "gmail".
* `smtp_type`: Either "SSL", "STARTTLS", or "INSECURE".
* `smtp_port`: If provided, overrides the `smtp_type`'s default port number.
* `verify`: If `False`, SSL certificates will not be verified. Defaults to `True`.

**Methods:**

#### `get_server`

```python theme={null}
get_server(self) -> SMTP
```

Gets an authenticated SMTP server.

**Returns:**

* An authenticated SMTP server.
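The attributes above say `smtp_port`, when provided, overrides the default port implied by `smtp_type`. A stdlib sketch of that resolution, using the conventional SMTP port numbers (465 for SSL, 587 for STARTTLS, 25 for unencrypted; these conventions are an assumption here, not necessarily the library's exact table):

```python
import smtplib

# Conventional default ports per protocol (assumption for illustration).
DEFAULT_PORTS = {"SSL": 465, "STARTTLS": 587, "INSECURE": 25}


def resolve_smtp(smtp_type: str, smtp_port=None):
    """Pick the smtplib class and port for a given smtp_type (sketch)."""
    smtp_type = smtp_type.upper()
    if smtp_type not in DEFAULT_PORTS:
        raise ValueError(f"Unsupported smtp_type: {smtp_type}")
    port = smtp_port if smtp_port is not None else DEFAULT_PORTS[smtp_type]
    # SSL encrypts the socket from the start; STARTTLS upgrades a plain
    # connection later, so both STARTTLS and INSECURE begin with plain SMTP.
    cls = smtplib.SMTP_SSL if smtp_type == "SSL" else smtplib.SMTP
    return cls, port


resolve_smtp("starttls")
```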
# message

Source: https://docs.prefect.io/integrations/prefect-email/api-ref/prefect_email-message

# `prefect_email.message`

Tasks for interacting with email message services.

## Functions

### `email_send_message`

```python theme={null}
email_send_message(subject: str, msg: str, email_server_credentials: 'EmailServerCredentials', msg_plain: Optional[str] = None, email_from: Optional[str] = None, email_to: Optional[Union[str, List[str]]] = None, email_to_cc: Optional[Union[str, List[str]]] = None, email_to_bcc: Optional[Union[str, List[str]]] = None, attachments: Optional[List[str]] = None, inline_images: Optional[dict[str, str]] = None)
```

Sends an email message from an authenticated email service over SMTP. Sending messages containing HTML code is supported; the default MIME type is text/html.

**Args:**

* `subject`: The subject line of the email.
* `msg`: The contents of the email, added as HTML; can be used in combination with `msg_plain`.
* `msg_plain`: The contents of the email as plain text; can be used in combination with `msg`.
* `email_from`: The email address the message is sent from.
* `email_to`: The email addresses to send the message to, separated by commas. If a list is provided, will join the items, separated by commas.
* `email_to_cc`: Additional email addresses to send the message to as cc, separated by commas. If a list is provided, will join the items, separated by commas.
* `email_to_bcc`: Additional email addresses to send the message to as bcc, separated by commas. If a list is provided, will join the items, separated by commas.
* `attachments`: Names of files that should be sent as attachments.
* `inline_images`: A dictionary where keys are content IDs (cids) and values are file paths to images to embed in the HTML body.

**Returns:**

* The MIME Multipart message of the email.

# prefect-email

Source: https://docs.prefect.io/integrations/prefect-email/index

The `prefect-email` library helps you send emails from your Prefect flows.
## Getting started ### Prerequisites * Many email services, such as Gmail, require an [App Password](https://support.google.com/accounts/answer/185833) to successfully send emails. If you encounter an error similar to `smtplib.SMTPAuthenticationError: (535, b'5.7.8 Username and Password not accepted...`, it's likely you are not using an App Password. ### Install `prefect-email` The following command will install a version of prefect-email compatible with your installed version of Prefect. If you don't already have Prefect installed, it will install the newest version of Prefect as well. ```bash theme={null} pip install "prefect[email]" ``` Upgrade to the latest versions of Prefect and prefect-email: ```bash theme={null} pip install -U "prefect[email]" ``` ### Register newly installed block types Register the block types in the prefect-email module to make them available for use. ```bash theme={null} prefect block register -m prefect_email ``` ## Save credentials to an EmailServerCredentials block Save your email credentials to a block. Replace the placeholders with your email address and password. ```python theme={null} from prefect_email import EmailServerCredentials credentials = EmailServerCredentials( username="EMAIL-ADDRESS-PLACEHOLDER", password="PASSWORD-PLACEHOLDER", # must be an application password ) credentials.save("BLOCK-NAME-PLACEHOLDER") ``` In the examples below you load a credentials block to authenticate with the email server. ## Send emails The code below shows how to send an email using the pre-built `email_send_message` [task](https://docs.prefect.io/latest/develop/write-tasks/). 
```python theme={null} from prefect import flow from prefect_email import EmailServerCredentials, email_send_message @flow def example_email_send_message_flow(email_addresses): email_server_credentials = EmailServerCredentials.load("BLOCK-NAME-PLACEHOLDER") for email_address in email_addresses: subject = email_send_message.with_options(name=f"email {email_address}").submit( email_server_credentials=email_server_credentials, subject="Example Flow Notification using Gmail", msg="This proves email_send_message works!", email_to=email_address, ) if __name__ == "__main__": example_email_send_message_flow(["EMAIL-ADDRESS-PLACEHOLDER"]) ``` ## Capture exceptions and send an email This example demonstrates how to send an email notification with the details of the exception when a flow run fails. `prefect-email` can be wrapped in an `except` statement to do just that! ```python theme={null} from prefect import flow from prefect.context import get_run_context from prefect_email import EmailServerCredentials, email_send_message def notify_exc_by_email(exc): context = get_run_context() flow_run_name = context.flow_run.name email_server_credentials = EmailServerCredentials.load("email-server-credentials") email_send_message( email_server_credentials=email_server_credentials, subject=f"Flow run {flow_run_name!r} failed", msg=f"Flow run {flow_run_name!r} failed due to {exc}.", email_to=email_server_credentials.username, ) @flow def example_flow(): try: 1 / 0 except Exception as exc: notify_exc_by_email(exc) raise if __name__ == "__main__": example_flow() ``` ## Resources Refer to the `prefect-email` [SDK documentation](/integrations/prefect-email/api-ref/prefect_email-credentials) to explore all the capabilities of the `prefect-email` library. 
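The `email_send_message` reference notes that list-valued `email_to`, `email_to_cc`, and `email_to_bcc` arguments are joined into a single comma-separated string. A stdlib sketch of that normalization (illustrative only; the exact separator used by the library is assumed here to be a comma and space):

```python
from typing import List, Optional, Union


def normalize_addresses(value: Optional[Union[str, List[str]]]) -> Optional[str]:
    """Join a list of addresses into one comma-separated header value (sketch)."""
    if isinstance(value, list):
        return ", ".join(value)
    return value  # a plain string (or None) passes through unchanged


normalize_addresses(["a@example.com", "b@example.com"])
```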
# bigquery Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-bigquery # `prefect_gcp.bigquery` Tasks for interacting with GCP BigQuery ## Functions ### `abigquery_query` ```python theme={null} abigquery_query(query: str, gcp_credentials: GcpCredentials, query_params: Optional[List[tuple]] = None, dry_run_max_bytes: Optional[int] = None, dataset: Optional[str] = None, table: Optional[str] = None, to_dataframe: bool = False, job_config: Optional[dict] = None, project: Optional[str] = None, result_transformer: Optional[Callable[[List['Row']], Any]] = None, location: str = 'US') -> Any ``` Runs a BigQuery query (async version). **Args:** * `query`: String of the query to execute. * `gcp_credentials`: Credentials to use for authentication with GCP. * `query_params`: List of 3-tuples specifying BigQuery query parameters; currently only scalar query parameters are supported. See the [Google documentation](https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python) for more details on how both the query and the query parameters should be formatted. * `dry_run_max_bytes`: If provided, the maximum number of bytes the query is allowed to process; this will be determined by executing a dry run and raising a `ValueError` if the maximum is exceeded. * `dataset`: Name of a destination dataset to write the query results to, if you don't want them returned; if provided, `table` must also be provided. * `table`: Name of a destination table to write the query results to, if you don't want them returned; if provided, `dataset` must also be provided. * `to_dataframe`: If provided, returns the results of the query as a pandas dataframe instead of a list of `bigquery.table.Row` objects. * `job_config`: Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected). 
* `project`: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `result_transformer`: Function that can be passed to transform the result of a query before returning. The function will be passed the list of rows returned by BigQuery for the given query.
* `location`: Location of the dataset that will be queried.

**Returns:**

* A list of rows, or a pandas DataFrame if `to_dataframe` is True, matching the query criteria.

### `bigquery_query`

```python theme={null}
bigquery_query(query: str, gcp_credentials: GcpCredentials, query_params: Optional[List[tuple]] = None, dry_run_max_bytes: Optional[int] = None, dataset: Optional[str] = None, table: Optional[str] = None, to_dataframe: bool = False, job_config: Optional[dict] = None, project: Optional[str] = None, result_transformer: Optional[Callable[[List['Row']], Any]] = None, location: str = 'US') -> Any
```

Runs a BigQuery query.

**Args:**

* `query`: String of the query to execute.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `query_params`: List of 3-tuples specifying BigQuery query parameters; currently only scalar query parameters are supported. See the [Google documentation](https://cloud.google.com/bigquery/docs/parameterized-queries#bigquery-query-params-python) for more details on how both the query and the query parameters should be formatted.
* `dry_run_max_bytes`: If provided, the maximum number of bytes the query is allowed to process; this will be determined by executing a dry run and raising a `ValueError` if the maximum is exceeded.
* `dataset`: Name of a destination dataset to write the query results to, if you don't want them returned; if provided, `table` must also be provided.
* `table`: Name of a destination table to write the query results to, if you don't want them returned; if provided, `dataset` must also be provided.
* `to_dataframe`: If True, returns the results of the query as a pandas DataFrame instead of a list of `bigquery.table.Row` objects.
* `job_config`: Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).
* `project`: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `result_transformer`: Function that can be passed to transform the result of a query before returning. The function will be passed the list of rows returned by BigQuery for the given query.
* `location`: Location of the dataset that will be queried.

**Returns:**

* A list of rows, or a pandas DataFrame if `to_dataframe` is True, matching the query criteria.

### `abigquery_create_table`

```python theme={null}
abigquery_create_table(dataset: str, table: str, gcp_credentials: GcpCredentials, schema: Optional[List['SchemaField']] = None, clustering_fields: List[str] = None, time_partitioning: 'TimePartitioning' = None, project: Optional[str] = None, location: str = 'US', external_config: Optional['ExternalConfig'] = None) -> str
```

Creates a table in BigQuery (async version).

**Args:**

* `dataset`: Name of the dataset in which the table will be created.
* `table`: Name of a table to create.
* `schema`: Schema to use when creating the table.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `clustering_fields`: List of fields to cluster the table by.
* `time_partitioning`: `bigquery.TimePartitioning` object specifying a partitioning of the newly created table.
* `project`: Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: The location of the dataset that will be written to.
* `external_config`: The [external data source](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigquery_table#nested_external_data_configuration).

**Returns:**

* The table name.
Example:

```python theme={null}
import asyncio

from prefect import flow
from prefect_gcp import GcpCredentials
from prefect_gcp.bigquery import abigquery_create_table
from google.cloud.bigquery import SchemaField

@flow
async def example_bigquery_create_table_flow():
    gcp_credentials = GcpCredentials(project="project")
    schema = [
        SchemaField("number", field_type="INTEGER", mode="REQUIRED"),
        SchemaField("text", field_type="STRING", mode="REQUIRED"),
        SchemaField("bool", field_type="BOOLEAN")
    ]
    result = await abigquery_create_table(
        dataset="dataset",
        table="test_table",
        schema=schema,
        gcp_credentials=gcp_credentials
    )
    return result

asyncio.run(example_bigquery_create_table_flow())
```

### `bigquery_create_table`

```python theme={null}
bigquery_create_table(dataset: str, table: str, gcp_credentials: GcpCredentials, schema: Optional[List['SchemaField']] = None, clustering_fields: List[str] = None, time_partitioning: 'TimePartitioning' = None, project: Optional[str] = None, location: str = 'US', external_config: Optional['ExternalConfig'] = None) -> str
```

Creates a table in BigQuery.

**Args:**

* `dataset`: Name of the dataset in which the table will be created.
* `table`: Name of a table to create.
* `schema`: Schema to use when creating the table.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `clustering_fields`: List of fields to cluster the table by.
* `time_partitioning`: `bigquery.TimePartitioning` object specifying a partitioning of the newly created table.
* `project`: Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: The location of the dataset that will be written to.
* `external_config`: The [external data source](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigquery_table#nested_external_data_configuration).

**Returns:**

* The table name.
Example: ```python theme={null} from prefect import flow from prefect_gcp import GcpCredentials from prefect_gcp.bigquery import bigquery_create_table from google.cloud.bigquery import SchemaField @flow def example_bigquery_create_table_flow(): gcp_credentials = GcpCredentials(project="project") schema = [ SchemaField("number", field_type="INTEGER", mode="REQUIRED"), SchemaField("text", field_type="STRING", mode="REQUIRED"), SchemaField("bool", field_type="BOOLEAN") ] result = bigquery_create_table( dataset="dataset", table="test_table", schema=schema, gcp_credentials=gcp_credentials ) return result example_bigquery_create_table_flow() ``` ### `abigquery_insert_stream` ```python theme={null} abigquery_insert_stream(dataset: str, table: str, records: List[dict], gcp_credentials: GcpCredentials, project: Optional[str] = None, location: str = 'US') -> List ``` Insert records in a Google BigQuery table via the [streaming API](https://cloud.google.com/bigquery/streaming-data-into-bigquery) (async version). **Args:** * `dataset`: Name of a dataset where the records will be written to. * `table`: Name of a table to write to. * `records`: The list of records to insert as rows into the BigQuery table; each item in the list should be a dictionary whose keys correspond to columns in the table. * `gcp_credentials`: Credentials to use for authentication with GCP. * `project`: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials. * `location`: Location of the dataset that will be written to. **Returns:** * List of inserted rows. ### `bigquery_insert_stream` ```python theme={null} bigquery_insert_stream(dataset: str, table: str, records: List[dict], gcp_credentials: GcpCredentials, project: Optional[str] = None, location: str = 'US') -> List ``` Insert records in a Google BigQuery table via the [streaming API](https://cloud.google.com/bigquery/streaming-data-into-bigquery). 
**Args:**

* `dataset`: Name of the dataset the records will be written to.
* `table`: Name of a table to write to.
* `records`: The list of records to insert as rows into the BigQuery table; each item in the list should be a dictionary whose keys correspond to columns in the table.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `project`: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: Location of the dataset that will be written to.

**Returns:**

* List of inserted rows.

### `abigquery_load_cloud_storage`

```python theme={null}
abigquery_load_cloud_storage(dataset: str, table: str, uri: str, gcp_credentials: GcpCredentials, schema: Optional[List['SchemaField']] = None, job_config: Optional[dict] = None, project: Optional[str] = None, location: str = 'US') -> 'LoadJob'
```

Loads data from Cloud Storage into BigQuery (async version).

**Args:**

* `uri`: GCS path to load data from.
* `dataset`: The ID of a destination dataset to write the records to.
* `table`: The name of a destination table to write the records to.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `schema`: The schema to use when creating the table.
* `job_config`: Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).
* `project`: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: Location of the dataset that will be written to.

**Returns:**

* The response from `load_table_from_uri`.
### `bigquery_load_cloud_storage`

```python theme={null}
bigquery_load_cloud_storage(dataset: str, table: str, uri: str, gcp_credentials: GcpCredentials, schema: Optional[List['SchemaField']] = None, job_config: Optional[dict] = None, project: Optional[str] = None, location: str = 'US') -> 'LoadJob'
```

Loads data from Cloud Storage into BigQuery.

**Args:**

* `uri`: GCS path to load data from.
* `dataset`: The ID of a destination dataset to write the records to.
* `table`: The name of a destination table to write the records to.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `schema`: The schema to use when creating the table.
* `job_config`: Dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).
* `project`: The project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: Location of the dataset that will be written to.

**Returns:**

* The response from `load_table_from_uri`.

### `abigquery_load_file`

```python theme={null}
abigquery_load_file(dataset: str, table: str, path: Union[str, Path], gcp_credentials: GcpCredentials, schema: Optional[List['SchemaField']] = None, job_config: Optional[dict] = None, rewind: bool = False, size: Optional[int] = None, project: Optional[str] = None, location: str = 'US') -> 'LoadJob'
```

Loads a file into BigQuery (async version).

**Args:**

* `dataset`: ID of the destination dataset to write the records to.
* `table`: Name of the destination table to write the records to.
* `path`: A string or path-like object of the file to be loaded.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `schema`: Schema to use when creating the table.
* `job_config`: An optional dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).
* `rewind`: If True, seek to the beginning of the file handle before reading the file.
* `size`: Number of bytes to read from the file handle. If size is None or large, resumable upload will be used. Otherwise, multipart upload will be used.
* `project`: Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: Location of the dataset that will be written to.

**Returns:**

* The response from `load_table_from_file`.

### `bigquery_load_file`

```python theme={null}
bigquery_load_file(dataset: str, table: str, path: Union[str, Path], gcp_credentials: GcpCredentials, schema: Optional[List['SchemaField']] = None, job_config: Optional[dict] = None, rewind: bool = False, size: Optional[int] = None, project: Optional[str] = None, location: str = 'US') -> 'LoadJob'
```

Loads a file into BigQuery.

**Args:**

* `dataset`: ID of the destination dataset to write the records to.
* `table`: Name of the destination table to write the records to.
* `path`: A string or path-like object of the file to be loaded.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `schema`: Schema to use when creating the table.
* `job_config`: An optional dictionary of job configuration parameters; note that the parameters provided here must be pickleable (e.g., dataset references will be rejected).
* `rewind`: If True, seek to the beginning of the file handle before reading the file.
* `size`: Number of bytes to read from the file handle. If size is None or large, resumable upload will be used. Otherwise, multipart upload will be used.
* `project`: Project to initialize the BigQuery Client with; if not provided, will default to the one inferred from your credentials.
* `location`: Location of the dataset that will be written to.

**Returns:**

* The response from `load_table_from_file`.

## Classes

### `BigQueryWarehouse`

A block for querying a database with BigQuery.

Upon instantiating, a connection to BigQuery is established and maintained for the life of the object until the close method is called.

It is recommended to use this block as a context manager, which will automatically close the connection and its cursors when the context is exited.

It is also recommended that this block is loaded and consumed within a single task or flow because if the block is passed across separate tasks and flows, the state of the block's connection and cursor could be lost.

**Attributes:**

* `gcp_credentials`: The credentials to use to authenticate.
* `fetch_size`: The number of rows to fetch at a time when calling fetch\_many. Note, this limit is applied on the client side and is not passed to the database. To limit on the server side, add the `LIMIT` clause, or the dialect's equivalent clause, like `TOP`, to the query.

**Methods:**

#### `aexecute`

```python theme={null}
aexecute(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> None
```

Executes an operation on the database (async version).

This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

Unlike the fetch methods, this method will always execute the operation upon calling.

**Args:**

* `operation`: The SQL query or other operation to be executed.
* `parameters`: The parameters for the operation.
* `**execution_options`: Additional options to pass to `connection.execute`.
**Examples:**

Execute operation with parameters:

```python theme={null}
from prefect_gcp.bigquery import BigQueryWarehouse

async with BigQueryWarehouse.load("BLOCK_NAME") as warehouse:
    operation = '''
        CREATE TABLE mydataset.trips AS (
        SELECT
            bikeid,
            start_time,
            duration_minutes
        FROM
            `bigquery-public-data.austin_bikeshare.bikeshare_trips`
        LIMIT %(limit)s
        );
    '''
    await warehouse.aexecute(operation, parameters={"limit": 5})
```

#### `aexecute_many`

```python theme={null}
aexecute_many(self, operation: str, seq_of_parameters: List[Dict[str, Any]]) -> None
```

Executes many operations on the database (async version).

This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

Unlike the fetch methods, this method will always execute the operations upon calling.

**Args:**

* `operation`: The SQL query or other operation to be executed.
* `seq_of_parameters`: The sequence of parameters for the operation.

**Examples:**

Create mytable in mydataset and insert two rows into it:

```python theme={null}
from prefect_gcp.bigquery import BigQueryWarehouse

async with BigQueryWarehouse.load("bigquery") as warehouse:
    create_operation = '''
    CREATE TABLE IF NOT EXISTS mydataset.mytable (
        col1 STRING,
        col2 INTEGER,
        col3 BOOLEAN
    )
    '''
    await warehouse.aexecute(create_operation)

    insert_operation = '''
    INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s)
    '''
    seq_of_parameters = [
        ("a", 1, True),
        ("b", 2, False),
    ]
    await warehouse.aexecute_many(
        insert_operation,
        seq_of_parameters=seq_of_parameters
    )
```

#### `afetch_all`

```python theme={null}
afetch_all(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> List['Row']
```

Fetch all results from the database (async version).
Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Additional options to pass to `connection.execute`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Execute operation with parameters, fetching all rows: ```python theme={null} from prefect_gcp.bigquery import BigQueryWarehouse async with BigQueryWarehouse.load("BLOCK_NAME") as warehouse: operation = ''' SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` WHERE corpus = %(corpus)s AND word_count >= %(min_word_count)s ORDER BY word_count DESC LIMIT 3; ''' parameters = { "corpus": "romeoandjuliet", "min_word_count": 250, } result = await warehouse.afetch_all(operation, parameters=parameters) ``` #### `afetch_many` ```python theme={null} afetch_many(self, operation: str, parameters: Optional[Dict[str, Any]] = None, size: Optional[int] = None, **execution_options: Dict[str, Any]) -> List['Row'] ``` Fetch a limited number of results from the database (async version). Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return; if None or 0, uses the value of `fetch_size` configured on the block. * `**execution_options`: Additional options to pass to `connection.execute`. 
**Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Execute operation with parameters, fetching two new rows at a time: ```python theme={null} from prefect_gcp.bigquery import BigQueryWarehouse async with BigQueryWarehouse.load("BLOCK_NAME") as warehouse: operation = ''' SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` WHERE corpus = %(corpus)s AND word_count >= %(min_word_count)s ORDER BY word_count DESC LIMIT 6; ''' parameters = { "corpus": "romeoandjuliet", "min_word_count": 250, } for _ in range(0, 3): result = await warehouse.afetch_many( operation, parameters=parameters, size=2 ) print(result) ``` #### `afetch_one` ```python theme={null} afetch_one(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> 'Row' ``` Fetch a single result from the database (async version). Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Additional options to pass to `connection.execute`. **Returns:** * A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. 
**Examples:**

Execute operation with parameters, fetching one new row at a time:

```python theme={null}
from prefect_gcp.bigquery import BigQueryWarehouse

async with BigQueryWarehouse.load("BLOCK_NAME") as warehouse:
    operation = '''
        SELECT word, word_count
        FROM `bigquery-public-data.samples.shakespeare`
        WHERE corpus = %(corpus)s
        AND word_count >= %(min_word_count)s
        ORDER BY word_count DESC
        LIMIT 3;
    '''
    parameters = {
        "corpus": "romeoandjuliet",
        "min_word_count": 250,
    }
    for _ in range(0, 3):
        result = await warehouse.afetch_one(operation, parameters=parameters)
        print(result)
```

#### `block_initialization`

```python theme={null}
block_initialization(self) -> None
```

#### `close`

```python theme={null}
close(self)
```

Closes connection and its cursors.

#### `execute`

```python theme={null}
execute(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> None
```

Executes an operation on the database.

This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

Unlike the fetch methods, this method will always execute the operation upon calling.

**Args:**

* `operation`: The SQL query or other operation to be executed.
* `parameters`: The parameters for the operation.
* `**execution_options`: Additional options to pass to `connection.execute`.

**Examples:**

Execute operation with parameters:

```python theme={null}
from prefect_gcp.bigquery import BigQueryWarehouse

with BigQueryWarehouse.load("BLOCK_NAME") as warehouse:
    operation = '''
        CREATE TABLE mydataset.trips AS (
        SELECT
            bikeid,
            start_time,
            duration_minutes
        FROM
            `bigquery-public-data.austin_bikeshare.bikeshare_trips`
        LIMIT %(limit)s
        );
    '''
    warehouse.execute(operation, parameters={"limit": 5})
```

#### `execute_many`

```python theme={null}
execute_many(self, operation: str, seq_of_parameters: List[Dict[str, Any]]) -> None
```

Executes many operations on the database.
This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operations upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. **Examples:** Create mytable in mydataset and insert two rows into it: ```python theme={null} from prefect_gcp.bigquery import BigQueryWarehouse with BigQueryWarehouse.load("bigquery") as warehouse: create_operation = ''' CREATE TABLE IF NOT EXISTS mydataset.mytable ( col1 STRING, col2 INTEGER, col3 BOOLEAN ) ''' warehouse.execute(create_operation) insert_operation = ''' INSERT INTO mydataset.mytable (col1, col2, col3) VALUES (%s, %s, %s) ''' seq_of_parameters = [ ("a", 1, True), ("b", 2, False), ] warehouse.execute_many( insert_operation, seq_of_parameters=seq_of_parameters ) ``` #### `fetch_all` ```python theme={null} fetch_all(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> List['Row'] ``` Fetch all results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Additional options to pass to `connection.execute`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. 
**Examples:** Execute operation with parameters, fetching all rows: ```python theme={null} from prefect_gcp.bigquery import BigQueryWarehouse with BigQueryWarehouse.load("BLOCK_NAME") as warehouse: operation = ''' SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` WHERE corpus = %(corpus)s AND word_count >= %(min_word_count)s ORDER BY word_count DESC LIMIT 3; ''' parameters = { "corpus": "romeoandjuliet", "min_word_count": 250, } result = warehouse.fetch_all(operation, parameters=parameters) ``` #### `fetch_many` ```python theme={null} fetch_many(self, operation: str, parameters: Optional[Dict[str, Any]] = None, size: Optional[int] = None, **execution_options: Dict[str, Any]) -> List['Row'] ``` Fetch a limited number of results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return; if None or 0, uses the value of `fetch_size` configured on the block. * `**execution_options`: Additional options to pass to `connection.execute`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. 
**Examples:** Execute operation with parameters, fetching two new rows at a time: ```python theme={null} from prefect_gcp.bigquery import BigQueryWarehouse with BigQueryWarehouse.load("BLOCK_NAME") as warehouse: operation = ''' SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` WHERE corpus = %(corpus)s AND word_count >= %(min_word_count)s ORDER BY word_count DESC LIMIT 6; ''' parameters = { "corpus": "romeoandjuliet", "min_word_count": 250, } for _ in range(0, 3): result = warehouse.fetch_many( operation, parameters=parameters, size=2 ) print(result) ``` #### `fetch_one` ```python theme={null} fetch_one(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> 'Row' ``` Fetch a single result from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Additional options to pass to `connection.execute`. **Returns:** * A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. 
**Examples:** Execute operation with parameters, fetching one new row at a time: ```python theme={null} from prefect_gcp.bigquery import BigQueryWarehouse with BigQueryWarehouse.load("BLOCK_NAME") as warehouse: operation = ''' SELECT word, word_count FROM `bigquery-public-data.samples.shakespeare` WHERE corpus = %(corpus)s AND word_count >= %(min_word_count)s ORDER BY word_count DESC LIMIT 3; ''' parameters = { "corpus": "romeoandjuliet", "min_word_count": 250, } for _ in range(0, 3): result = warehouse.fetch_one(operation, parameters=parameters) print(result) ``` #### `get_connection` ```python theme={null} get_connection(self) -> 'Connection' ``` Get the opened connection to BigQuery. #### `reset_cursors` ```python theme={null} reset_cursors(self) -> None ``` Tries to close all opened cursors. # cloud_storage Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-cloud_storage # `prefect_gcp.cloud_storage` Tasks for interacting with GCP Cloud Storage. ## Functions ### `acloud_storage_create_bucket` ```python theme={null} acloud_storage_create_bucket(bucket: str, gcp_credentials: GcpCredentials, project: Optional[str] = None, location: Optional[str] = None, **create_kwargs: Dict[str, Any]) -> str ``` Creates a bucket (async version). **Args:** * `bucket`: Name of the bucket. * `gcp_credentials`: Credentials to use for authentication with GCP. * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `location`: Location of the bucket. * `**create_kwargs`: Additional keyword arguments to pass to `client.create_bucket`. **Returns:** * The bucket name. ### `cloud_storage_create_bucket` ```python theme={null} cloud_storage_create_bucket(bucket: str, gcp_credentials: GcpCredentials, project: Optional[str] = None, location: Optional[str] = None, **create_kwargs: Dict[str, Any]) -> str ``` Creates a bucket. **Args:** * `bucket`: Name of the bucket. 
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `project`: Name of the project to use; overrides the `gcp_credentials` project if provided.
* `location`: Location of the bucket.
* `**create_kwargs`: Additional keyword arguments to pass to `client.create_bucket`.

**Returns:**

* The bucket name.

### `acloud_storage_download_blob_as_bytes`

```python theme={null}
acloud_storage_download_blob_as_bytes(bucket: str, blob: str, gcp_credentials: GcpCredentials, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **download_kwargs: Dict[str, Any]) -> bytes
```

Downloads a blob as bytes (async version).

**Args:**

* `bucket`: Name of the bucket.
* `blob`: Name of the Cloud Storage blob.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.
* `encryption_key`: An encryption key.
* `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout).
* `project`: Name of the project to use; overrides the `gcp_credentials` project if provided.
* `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_as_bytes`.

**Returns:**

* The blob's contents as bytes.

### `cloud_storage_download_blob_as_bytes`

```python theme={null}
cloud_storage_download_blob_as_bytes(bucket: str, blob: str, gcp_credentials: GcpCredentials, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **download_kwargs: Dict[str, Any]) -> bytes
```

Downloads a blob as bytes.

**Args:**

* `bucket`: Name of the bucket.
* `blob`: Name of the Cloud Storage blob.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.
* `encryption_key`: An encryption key.
* `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout).
* `project`: Name of the project to use; overrides the `gcp_credentials` project if provided.
* `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_as_bytes`.

**Returns:**

* The blob's contents as bytes.

### `acloud_storage_download_blob_to_file`

```python theme={null}
acloud_storage_download_blob_to_file(bucket: str, blob: str, path: Union[str, Path], gcp_credentials: GcpCredentials, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **download_kwargs: Dict[str, Any]) -> Union[str, Path]
```

Downloads a blob to a file path (async version).

**Args:**

* `bucket`: Name of the bucket.
* `blob`: Name of the Cloud Storage blob.
* `path`: Downloads the contents to the provided file path; if the path is a directory, automatically joins the blob name.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.
* `encryption_key`: An encryption key.
* `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout).
* `project`: Name of the project to use; overrides the `gcp_credentials` project if provided.
* `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_filename`.

**Returns:**

* The path to the blob object.
### `cloud_storage_download_blob_to_file` ```python theme={null} cloud_storage_download_blob_to_file(bucket: str, blob: str, path: Union[str, Path], gcp_credentials: GcpCredentials, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **download_kwargs: Dict[str, Any]) -> Union[str, Path] ``` Downloads a blob to a file path. **Args:** * `bucket`: Name of the bucket. * `blob`: Name of the Cloud Storage blob. * `path`: Downloads the contents to the provided file path; if the path is a directory, automatically joins the blob name. * `gcp_credentials`: Credentials to use for authentication with GCP. * `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. * `encryption_key`: An encryption key. * `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout). * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_filename`. **Returns:** * The path to the blob object. ### `acloud_storage_upload_blob_from_string` ```python theme={null} acloud_storage_upload_blob_from_string(data: Union[str, bytes], bucket: str, blob: str, gcp_credentials: GcpCredentials, content_type: Optional[str] = None, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads a blob from a string or bytes representation of data (async version). **Args:** * `data`: String or bytes representation of data to upload. * `bucket`: Name of the bucket. * `blob`: Name of the Cloud Storage blob. * `gcp_credentials`: Credentials to use for authentication with GCP. 
* `content_type`: Type of content being uploaded. * `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. * `encryption_key`: An encryption key. * `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout). * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_string`. **Returns:** * The blob name. ### `cloud_storage_upload_blob_from_string` ```python theme={null} cloud_storage_upload_blob_from_string(data: Union[str, bytes], bucket: str, blob: str, gcp_credentials: GcpCredentials, content_type: Optional[str] = None, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads a blob from a string or bytes representation of data. **Args:** * `data`: String or bytes representation of data to upload. * `bucket`: Name of the bucket. * `blob`: Name of the Cloud Storage blob. * `gcp_credentials`: Credentials to use for authentication with GCP. * `content_type`: Type of content being uploaded. * `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. * `encryption_key`: An encryption key. * `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout). * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_string`. **Returns:** * The blob name. 
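The upload tasks follow the same call pattern as the download tasks. A minimal sketch of `cloud_storage_upload_blob_from_string` inside a flow, again with illustrative bucket, blob, and credentials-block names:

```python
from prefect import flow
from prefect_gcp import GcpCredentials
from prefect_gcp.cloud_storage import cloud_storage_upload_blob_from_string


@flow
def example_upload_flow() -> str:
    # "my-creds" is a hypothetical saved GcpCredentials block name
    gcp_credentials = GcpCredentials.load("my-creds")
    # Upload a small text payload to gs://my-bucket/my_folder/notes.txt
    return cloud_storage_upload_blob_from_string(
        data="hello, world",
        bucket="my-bucket",
        blob="my_folder/notes.txt",
        gcp_credentials=gcp_credentials,
    )
```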
### `acloud_storage_upload_blob_from_file` ```python theme={null} acloud_storage_upload_blob_from_file(file: Union[str, Path, BytesIO], bucket: str, blob: str, gcp_credentials: GcpCredentials, content_type: Optional[str] = None, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads a blob from a file path or file-like object (async version). Passing a file-like object is useful when the data was downloaded from the web; it bypasses writing to disk and uploads directly to Cloud Storage. **Args:** * `file`: Path to data or file-like object to upload. * `bucket`: Name of the bucket. * `blob`: Name of the Cloud Storage blob. * `gcp_credentials`: Credentials to use for authentication with GCP. * `content_type`: Type of content being uploaded. * `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. * `encryption_key`: An encryption key. * `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout). * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_file` or `Blob.upload_from_filename`. **Returns:** * The blob name. ### `cloud_storage_upload_blob_from_file` ```python theme={null} cloud_storage_upload_blob_from_file(file: Union[str, Path, BytesIO], bucket: str, blob: str, gcp_credentials: GcpCredentials, content_type: Optional[str] = None, chunk_size: Optional[int] = None, encryption_key: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads a blob from a file path or file-like object. 
Passing a file-like object is useful when the data was downloaded from the web; it bypasses writing to disk and uploads directly to Cloud Storage. **Args:** * `file`: Path to data or file-like object to upload. * `bucket`: Name of the bucket. * `blob`: Name of the Cloud Storage blob. * `gcp_credentials`: Credentials to use for authentication with GCP. * `content_type`: Type of content being uploaded. * `chunk_size`: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. * `encryption_key`: An encryption key. * `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout). * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_file` or `Blob.upload_from_filename`. **Returns:** * The blob name. ### `cloud_storage_copy_blob` ```python theme={null} cloud_storage_copy_blob(source_bucket: str, dest_bucket: str, source_blob: str, gcp_credentials: GcpCredentials, dest_blob: Optional[str] = None, timeout: Union[float, Tuple[float, float]] = 60, project: Optional[str] = None, **copy_kwargs: Dict[str, Any]) -> str ``` Copies data from one Google Cloud Storage bucket to another, without downloading it locally. **Args:** * `source_bucket`: Source bucket name. * `dest_bucket`: Destination bucket name. * `source_blob`: Source blob name. * `gcp_credentials`: Credentials to use for authentication with GCP. * `dest_blob`: Destination blob name; if not provided, defaults to source\_blob. * `timeout`: The number of seconds the transport should wait for the server response. Can also be passed as a tuple (connect\_timeout, read\_timeout). * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. * `**copy_kwargs`: Additional keyword arguments to pass to `Bucket.copy_blob`. 
**Returns:** * Destination blob name. ## Classes ### `DataFrameSerializationFormat` An enumeration class representing the file formats and compression options available to `upload_from_dataframe`. **Attributes:** * `CSV`: Representation for 'csv' file format with no compression and its related content type and suffix. * `CSV_GZIP`: Representation for 'csv' file format with 'gzip' compression and its related content type and suffix. * `PARQUET`: Representation for 'parquet' file format with no compression and its related content type and suffix. * `PARQUET_SNAPPY`: Representation for 'parquet' file format with 'snappy' compression and its related content type and suffix. * `PARQUET_GZIP`: Representation for 'parquet' file format with 'gzip' compression and its related content type and suffix. **Methods:** #### `compression` ```python theme={null} compression(self) -> Union[str, None] ``` The compression type of the current instance. #### `content_type` ```python theme={null} content_type(self) -> str ``` The content type of the current instance. #### `fix_extension_with` ```python theme={null} fix_extension_with(self, gcs_blob_path: str) -> str ``` Fix the extension of a GCS blob. **Args:** * `gcs_blob_path`: The path to the GCS blob to be modified. **Returns:** * The modified path to the GCS blob with the new extension. #### `format` ```python theme={null} format(self) -> str ``` The file format of the current instance. #### `suffix` ```python theme={null} suffix(self) -> str ``` The suffix of the file format of the current instance. ### `GcsBucket` Block used to store data using GCP Cloud Storage Buckets. Note! `GcsBucket` in `prefect-gcp` is a unique block, separate from `GCS` in core Prefect. `GcsBucket` does not use `gcsfs` under the hood; instead it uses the `google-cloud-storage` package and offers more configuration and functionality. **Attributes:** * `bucket`: Name of the bucket. * `gcp_credentials`: The credentials to authenticate with GCP. 
* `bucket_folder`: A default path to a folder within the GCS bucket to use for reading and writing objects. **Methods:** #### `acreate_bucket` ```python theme={null} acreate_bucket(self, location: Optional[str] = None, **create_kwargs) -> 'Bucket' ``` Creates a bucket (async version). **Args:** * `location`: The location of the bucket. * `**create_kwargs`: Additional keyword arguments to pass to the `create_bucket` method. **Returns:** * The bucket object. **Examples:** Create a bucket. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket(bucket="my-bucket") await gcs_bucket.acreate_bucket() ``` #### `adownload_folder_to_path` ```python theme={null} adownload_folder_to_path(self, from_folder: str, to_folder: Optional[Union[str, Path]] = None, **download_kwargs: Dict[str, Any]) -> Path ``` Downloads objects *within* a folder (excluding the folder itself) from the object storage service to a folder (async version). **Args:** * `from_folder`: The path to the folder to download from; this gets prefixed with the bucket\_folder. * `to_folder`: The path to download the folder to. If not provided, will default to the current directory. * `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_filename`. **Returns:** * The absolute path that the folder was downloaded to. **Examples:** Download my\_folder to a local folder named my\_folder. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.adownload_folder_to_path("my_folder", "my_folder") ``` #### `adownload_object_to_file_object` ```python theme={null} adownload_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Dict[str, Any]) -> BinaryIO ``` Downloads an object from the object storage service to a file-like object (async version), which can be a BytesIO object or a BufferedWriter. 
**Args:** * `from_path`: The path to the blob to download from; this gets prefixed with the bucket\_folder. * `to_file_object`: The file-like object to download the blob to. * `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_file`. **Returns:** * The file-like object that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to a BytesIO object. ```python theme={null} from io import BytesIO from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with BytesIO() as buf: await gcs_bucket.adownload_object_to_file_object("my_folder/notes.txt", buf) ``` Download my\_folder/notes.txt object to a BufferedWriter. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with open("notes.txt", "wb") as f: await gcs_bucket.adownload_object_to_file_object("my_folder/notes.txt", f) ``` #### `adownload_object_to_path` ```python theme={null} adownload_object_to_path(self, from_path: str, to_path: Optional[Union[str, Path]] = None, **download_kwargs: Dict[str, Any]) -> Path ``` Downloads an object from the object storage service to a path (async version). **Args:** * `from_path`: The path to the blob to download; this gets prefixed with the bucket\_folder. * `to_path`: The path to download the blob to. If not provided, the blob's name will be used. * `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_filename`. **Returns:** * The absolute path that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to notes.txt. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.adownload_object_to_path("my_folder/notes.txt", "notes.txt") ``` #### `aget_bucket` ```python theme={null} aget_bucket(self) -> 'Bucket' ``` Returns the bucket object (async version). **Returns:** * The bucket object. 
**Examples:** Get the bucket object. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.aget_bucket() ``` #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> List[Union[str, Path]] ``` Copies a folder from the configured GCS bucket to a local directory (async version). Defaults to copying the entire contents of the block's bucket\_folder to the current working directory. **Args:** * `from_path`: Path in GCS bucket to download from. Defaults to the block's configured bucket\_folder. * `local_path`: Local path to download GCS bucket contents to. Defaults to the current working directory. **Returns:** * A list of downloaded file paths. #### `alist_blobs` ```python theme={null} alist_blobs(self, folder: str = '') -> List['Blob'] ``` Lists all blobs in the bucket that are in a folder (async version). Folders are not included in the output. **Args:** * `folder`: The folder to list blobs from. **Returns:** * A list of Blob objects. **Examples:** Get all blobs from a folder named "prefect". ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.alist_blobs("prefect") ``` #### `alist_folders` ```python theme={null} alist_folders(self, folder: str = '') -> List[str] ``` Lists all folders and subfolders in the bucket (async version). **Args:** * `folder`: List all folders and subfolders inside given folder. **Returns:** * A list of folders. **Examples:** Get all folders from a bucket named "my-bucket". 
```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.alist_folders() ``` Get all folders from a folder called years. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.alist_folders("years") ``` #### `aput_directory` ```python theme={null} aput_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Uploads a directory from a given local path to the configured GCS bucket in a given folder (async version). Defaults to uploading the entire contents of the current working directory to the block's bucket\_folder. **Args:** * `local_path`: Path to local directory to upload from. * `to_path`: Path in GCS bucket to upload to. Defaults to block's configured bucket\_folder. * `ignore_file`: Path to file containing gitignore style expressions for filepaths to ignore. **Returns:** * The number of files uploaded. #### `aread_path` ```python theme={null} aread_path(self, path: str) -> bytes ``` Read the specified path from GCS and return its contents (async version). Provide the entire path to the key in GCS. **Args:** * `path`: Entire path to (and including) the key. **Returns:** * A bytes or string representation of the blob object. #### `aupload_from_dataframe` ```python theme={null} aupload_from_dataframe(self, df: 'DataFrame', to_path: str, serialization_format: Union[str, DataFrameSerializationFormat] = DataFrameSerializationFormat.CSV_GZIP, **upload_kwargs: Dict[str, Any]) -> str ``` Upload a Pandas DataFrame to Google Cloud Storage in various formats (async version). This function uploads the data in a Pandas DataFrame to Google Cloud Storage in a specified format, such as .csv, .csv.gz, .parquet, .parquet.snappy, and .parquet.gz. **Args:** * `df`: The Pandas DataFrame to be uploaded. * `to_path`: The destination path for the uploaded DataFrame. 
* `serialization_format`: The format to serialize the DataFrame into. When passed as a `str`, the valid options are: 'csv', 'csv\_gzip', 'parquet', 'parquet\_snappy', 'parquet\_gzip'. Defaults to `DataFrameSerializationFormat.CSV_GZIP`. * `**upload_kwargs`: Additional keyword arguments to pass to the underlying `upload_from_dataframe` method. **Returns:** * The path that the object was uploaded to. #### `aupload_from_file_object` ```python theme={null} aupload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs) -> str ``` Uploads an object to the object storage service from a file-like object (async version), which can be a BytesIO object or a BufferedReader. **Args:** * `from_file_object`: The file-like object to upload from. * `to_path`: The path to upload the object to; this gets prefixed with the bucket\_folder. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_file`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload the contents of notes.txt to my\_folder/notes.txt from an open file object. ```python theme={null} from io import BytesIO from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with open("notes.txt", "rb") as f: await gcs_bucket.aupload_from_file_object(f, "my_folder/notes.txt") ``` Upload a BufferedReader object to my\_folder/notes.txt. ```python theme={null} from io import BufferedReader from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with open("notes.txt", "rb") as f: await gcs_bucket.aupload_from_file_object( BufferedReader(f), "my_folder/notes.txt" ) ``` #### `aupload_from_folder` ```python theme={null} aupload_from_folder(self, from_folder: Union[str, Path], to_folder: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads files *within* a folder (excluding the folder itself) to the object storage service folder (async version). **Args:** * `from_folder`: The path to the folder to upload from. 
* `to_folder`: The path to upload the folder to. If not provided, will default to bucket\_folder or the base directory of the bucket. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_filename`. **Returns:** * The path that the folder was uploaded to. **Examples:** Upload local folder my\_folder to the bucket's folder my\_folder. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.aupload_from_folder("my_folder") ``` #### `aupload_from_path` ```python theme={null} aupload_from_path(self, from_path: Union[str, Path], to_path: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads an object from a path to the object storage service (async version). **Args:** * `from_path`: The path to the file to upload from. * `to_path`: The path to upload the file to. If not provided, will use the file name of from\_path; this gets prefixed with the bucket\_folder. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_filename`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload notes.txt to my\_folder/notes.txt. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") await gcs_bucket.aupload_from_path("notes.txt", "my_folder/notes.txt") ``` #### `awrite_path` ```python theme={null} awrite_path(self, path: str, content: bytes) -> str ``` Writes to a GCS bucket (async version). **Args:** * `path`: The key name. Each object in your bucket has a unique key (or key name). * `content`: The content to upload to the GCS bucket. **Returns:** * The path that the contents were written to. #### `basepath` ```python theme={null} basepath(self) -> str ``` Read-only property that mirrors the bucket folder. Used for deployment. 
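`awrite_path` and `aread_path` above have no examples of their own; a minimal round-trip sketch in the style of the other examples, assuming a saved `GcsBucket` block named "my-bucket":

```python
from prefect_gcp.cloud_storage import GcsBucket

gcs_bucket = GcsBucket.load("my-bucket")
# Write bytes to my_folder/notes.txt, then read them back
await gcs_bucket.awrite_path("my_folder/notes.txt", b"hello")
contents = await gcs_bucket.aread_path("my_folder/notes.txt")
```

Note that `path` is relative to the block's bucket\_folder, like the other read/write methods on this block.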
#### `create_bucket` ```python theme={null} create_bucket(self, location: Optional[str] = None, **create_kwargs) -> 'Bucket' ``` Creates a bucket. **Args:** * `location`: The location of the bucket. * `**create_kwargs`: Additional keyword arguments to pass to the `create_bucket` method. **Returns:** * The bucket object. **Examples:** Create a bucket. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket(bucket="my-bucket") gcs_bucket.create_bucket() ``` #### `download_folder_to_path` ```python theme={null} download_folder_to_path(self, from_folder: str, to_folder: Optional[Union[str, Path]] = None, **download_kwargs: Dict[str, Any]) -> Path ``` Downloads objects *within* a folder (excluding the folder itself) from the object storage service to a folder. **Args:** * `from_folder`: The path to the folder to download from; this gets prefixed with the bucket\_folder. * `to_folder`: The path to download the folder to. If not provided, will default to the current directory. * `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_filename`. **Returns:** * The absolute path that the folder was downloaded to. **Examples:** Download my\_folder to a local folder named my\_folder. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.download_folder_to_path("my_folder", "my_folder") ``` #### `download_object_to_file_object` ```python theme={null} download_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Dict[str, Any]) -> BinaryIO ``` Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter. **Args:** * `from_path`: The path to the blob to download from; this gets prefixed with the bucket\_folder. * `to_file_object`: The file-like object to download the blob to. 
* `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_file`. **Returns:** * The file-like object that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to a BytesIO object. ```python theme={null} from io import BytesIO from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with BytesIO() as buf: gcs_bucket.download_object_to_file_object("my_folder/notes.txt", buf) ``` Download my\_folder/notes.txt object to a BufferedWriter. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with open("notes.txt", "wb") as f: gcs_bucket.download_object_to_file_object("my_folder/notes.txt", f) ``` #### `download_object_to_path` ```python theme={null} download_object_to_path(self, from_path: str, to_path: Optional[Union[str, Path]] = None, **download_kwargs: Dict[str, Any]) -> Path ``` Downloads an object from the object storage service to a path. **Args:** * `from_path`: The path to the blob to download; this gets prefixed with the bucket\_folder. * `to_path`: The path to download the blob to. If not provided, the blob's name will be used. * `**download_kwargs`: Additional keyword arguments to pass to `Blob.download_to_filename`. **Returns:** * The absolute path that the object was downloaded to. **Examples:** Download my\_folder/notes.txt object to notes.txt. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.download_object_to_path("my_folder/notes.txt", "notes.txt") ``` #### `get_bucket` ```python theme={null} get_bucket(self) -> 'Bucket' ``` Returns the bucket object. **Returns:** * The bucket object. **Examples:** Get the bucket object. 
```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.get_bucket() ``` #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> List[Union[str, Path]] ``` Copies a folder from the configured GCS bucket to a local directory. Defaults to copying the entire contents of the block's bucket\_folder to the current working directory. **Args:** * `from_path`: Path in GCS bucket to download from. Defaults to the block's configured bucket\_folder. * `local_path`: Local path to download GCS bucket contents to. Defaults to the current working directory. **Returns:** * A list of downloaded file paths. #### `list_blobs` ```python theme={null} list_blobs(self, folder: str = '') -> List['Blob'] ``` Lists all blobs in the bucket that are in a folder. Folders are not included in the output. **Args:** * `folder`: The folder to list blobs from. **Returns:** * A list of Blob objects. **Examples:** Get all blobs from a folder named "prefect". ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.list_blobs("prefect") ``` #### `list_folders` ```python theme={null} list_folders(self, folder: str = '') -> List[str] ``` Lists all folders and subfolders in the bucket. **Args:** * `folder`: List all folders and subfolders inside given folder. **Returns:** * A list of folders. **Examples:** Get all folders from a bucket named "my-bucket". 
```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.list_folders() ``` Get all folders from a folder called years. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.list_folders("years") ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Uploads a directory from a given local path to the configured GCS bucket in a given folder. Defaults to uploading the entire contents of the current working directory to the block's bucket\_folder. **Args:** * `local_path`: Path to local directory to upload from. * `to_path`: Path in GCS bucket to upload to. Defaults to block's configured bucket\_folder. * `ignore_file`: Path to file containing gitignore style expressions for filepaths to ignore. **Returns:** * The number of files uploaded. #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` Read the specified path from GCS and return its contents. Provide the entire path to the key in GCS. **Args:** * `path`: Entire path to (and including) the key. **Returns:** * A bytes or string representation of the blob object. #### `upload_from_dataframe` ```python theme={null} upload_from_dataframe(self, df: 'DataFrame', to_path: str, serialization_format: Union[str, DataFrameSerializationFormat] = DataFrameSerializationFormat.CSV_GZIP, **upload_kwargs: Dict[str, Any]) -> str ``` Upload a Pandas DataFrame to Google Cloud Storage in various formats. This function uploads the data in a Pandas DataFrame to Google Cloud Storage in a specified format, such as .csv, .csv.gz, .parquet, .parquet.snappy, and .parquet.gz. **Args:** * `df`: The Pandas DataFrame to be uploaded. * `to_path`: The destination path for the uploaded DataFrame. * `serialization_format`: The format to serialize the DataFrame into. 
When passed as a `str`, the valid options are: 'csv', 'csv\_gzip', 'parquet', 'parquet\_snappy', 'parquet\_gzip'. Defaults to `DataFrameSerializationFormat.CSV_GZIP`. * `**upload_kwargs`: Additional keyword arguments to pass to the underlying `upload_from_dataframe` method. **Returns:** * The path that the object was uploaded to. #### `upload_from_file_object` ```python theme={null} upload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs) -> str ``` Uploads an object to the object storage service from a file-like object, which can be a BytesIO object or a BufferedReader. **Args:** * `from_file_object`: The file-like object to upload from. * `to_path`: The path to upload the object to; this gets prefixed with the bucket\_folder. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_file`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload the contents of notes.txt to my\_folder/notes.txt from an open file object. ```python theme={null} from io import BytesIO from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with open("notes.txt", "rb") as f: gcs_bucket.upload_from_file_object(f, "my_folder/notes.txt") ``` Upload a BufferedReader object to my\_folder/notes.txt. ```python theme={null} from io import BufferedReader from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") with open("notes.txt", "rb") as f: gcs_bucket.upload_from_file_object( BufferedReader(f), "my_folder/notes.txt" ) ``` #### `upload_from_folder` ```python theme={null} upload_from_folder(self, from_folder: Union[str, Path], to_folder: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads files *within* a folder (excluding the folder itself) to the object storage service folder. **Args:** * `from_folder`: The path to the folder to upload from. * `to_folder`: The path to upload the folder to. 
If not provided, will default to bucket\_folder or the base directory of the bucket. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_filename`. **Returns:** * The path that the folder was uploaded to. **Examples:** Upload local folder my\_folder to the bucket's folder my\_folder. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.upload_from_folder("my_folder") ``` #### `upload_from_path` ```python theme={null} upload_from_path(self, from_path: Union[str, Path], to_path: Optional[str] = None, **upload_kwargs: Dict[str, Any]) -> str ``` Uploads an object from a path to the object storage service. **Args:** * `from_path`: The path to the file to upload from. * `to_path`: The path to upload the file to. If not provided, will use the file name of from\_path; this gets prefixed with the bucket\_folder. * `**upload_kwargs`: Additional keyword arguments to pass to `Blob.upload_from_filename`. **Returns:** * The path that the object was uploaded to. **Examples:** Upload notes.txt to my\_folder/notes.txt. ```python theme={null} from prefect_gcp.cloud_storage import GcsBucket gcs_bucket = GcsBucket.load("my-bucket") gcs_bucket.upload_from_path("notes.txt", "my_folder/notes.txt") ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> str ``` Writes to a GCS bucket. **Args:** * `path`: The key name. Each object in your bucket has a unique key (or key name). * `content`: The content to upload to the GCS bucket. **Returns:** * The path that the contents were written to. # credentials Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-credentials # `prefect_gcp.credentials` Module handling GCP credentials. ## Classes ### `ClientType` ### `GcpCredentials` Block used to manage authentication with GCP. Google authentication is handled via the `google.oauth2` module or through the CLI. 
Specify either `service_account_file` or `service_account_info`; if neither is specified, the client will try to detect the credentials following Google's [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials). See Google's [Authentication documentation](https://cloud.google.com/docs/authentication#service-accounts) for details on inference and recommended authentication patterns.

**Attributes:**

* `service_account_file`: Path to the service account JSON keyfile.
* `service_account_info`: The contents of the keyfile as a dict.

**Methods:**

#### `block_initialization`

```python theme={null}
block_initialization(self)
```

#### `get_access_token`

```python theme={null}
get_access_token(self)
```

See: [https://stackoverflow.com/a/69107745](https://stackoverflow.com/a/69107745)

Also: [https://www.jhanley.com/google-cloud-creating-oauth-access-tokens-for-rest-api-calls/](https://www.jhanley.com/google-cloud-creating-oauth-access-tokens-for-rest-api-calls/)

#### `get_bigquery_client`

```python theme={null}
get_bigquery_client(self, project: Optional[str] = None, location: Optional[str] = None) -> 'BigQueryClient'
```

Gets an authenticated BigQuery client.

**Args:**

* `project`: Name of the project to use; overrides the base class's project if provided.
* `location`: Location to use.

**Returns:**

* An authenticated BigQuery client.

**Examples:**

Gets a GCP BigQuery client from a path.

```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_file = "~/.secrets/prefect-service-account.json"
    client = GcpCredentials(
        service_account_file=service_account_file
    ).get_bigquery_client()

example_get_client_flow()
```

Gets a GCP BigQuery client from a dictionary.
```python theme={null} from prefect import flow from prefect_gcp.credentials import GcpCredentials @flow() def example_get_client_flow(): service_account_info = { "type": "service_account", "project_id": "project_id", "private_key_id": "private_key_id", "private_key": "private_key", "client_email": "client_email", "client_id": "client_id", "auth_uri": "auth_uri", "token_uri": "token_uri", "auth_provider_x509_cert_url": "auth_provider_x509_cert_url", "client_x509_cert_url": "client_x509_cert_url" } client = GcpCredentials( service_account_info=service_account_info ).get_bigquery_client() example_get_client_flow() ``` #### `get_client` ```python theme={null} get_client(self, client_type: Union[str, ClientType], **get_client_kwargs: Dict[str, Any]) -> Any ``` Helper method to dynamically get a client type. **Args:** * `client_type`: The name of the client to get. * `**get_client_kwargs`: Additional keyword arguments to pass to the `get_*_client` method. **Returns:** * An authenticated client. **Raises:** * `ValueError`: if the client is not supported. #### `get_cloud_storage_client` ```python theme={null} get_cloud_storage_client(self, project: Optional[str] = None) -> 'StorageClient' ``` Gets an authenticated Cloud Storage client. **Args:** * `project`: Name of the project to use; overrides the base class's project if provided. **Returns:** * An authenticated Cloud Storage client. **Examples:** Gets a GCP Cloud Storage client from a path. ```python theme={null} from prefect import flow from prefect_gcp.credentials import GcpCredentials @flow() def example_get_client_flow(): service_account_file = "~/.secrets/prefect-service-account.json" client = GcpCredentials( service_account_file=service_account_file ).get_cloud_storage_client() example_get_client_flow() ``` Gets a GCP Cloud Storage client from a dictionary. 
```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_info = {
        "type": "service_account",
        "project_id": "project_id",
        "private_key_id": "private_key_id",
        "private_key": "private_key",
        "client_email": "client_email",
        "client_id": "client_id",
        "auth_uri": "auth_uri",
        "token_uri": "token_uri",
        "auth_provider_x509_cert_url": "auth_provider_x509_cert_url",
        "client_x509_cert_url": "client_x509_cert_url"
    }
    client = GcpCredentials(
        service_account_info=service_account_info
    ).get_cloud_storage_client()

example_get_client_flow()
```

#### `get_credentials_from_service_account`

```python theme={null}
get_credentials_from_service_account(self) -> Credentials
```

Helper method to serialize credentials by using either service\_account\_file or service\_account\_info.

#### `get_job_service_async_client`

```python theme={null}
get_job_service_async_client(self, client_options: Union[Dict[str, Any], ClientOptions] = None) -> 'JobServiceAsyncClient'
```

Gets an authenticated Job Service async client for Vertex AI.

**Returns:**

* An authenticated Job Service async client.

**Examples:**

Gets a GCP Job Service async client from a path.

```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_file = "~/.secrets/prefect-service-account.json"
    client = GcpCredentials(
        service_account_file=service_account_file
    ).get_job_service_async_client()

example_get_client_flow()
```

Gets a GCP Job Service async client from a dictionary.
```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_info = {
        "type": "service_account",
        "project_id": "project_id",
        "private_key_id": "private_key_id",
        "private_key": "private_key",
        "client_email": "client_email",
        "client_id": "client_id",
        "auth_uri": "auth_uri",
        "token_uri": "token_uri",
        "auth_provider_x509_cert_url": "auth_provider_x509_cert_url",
        "client_x509_cert_url": "client_x509_cert_url"
    }
    client = GcpCredentials(
        service_account_info=service_account_info
    ).get_job_service_async_client()

example_get_client_flow()
```

#### `get_job_service_client`

```python theme={null}
get_job_service_client(self, client_options: Union[Dict[str, Any], ClientOptions] = None) -> 'JobServiceClient'
```

Gets an authenticated Job Service client for Vertex AI.

**Returns:**

* An authenticated Job Service client.

**Examples:**

Gets a GCP Job Service client from a path.

```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_file = "~/.secrets/prefect-service-account.json"
    client = GcpCredentials(
        service_account_file=service_account_file
    ).get_job_service_client()

example_get_client_flow()
```

Gets a GCP Job Service client from a dictionary.
```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_info = {
        "type": "service_account",
        "project_id": "project_id",
        "private_key_id": "private_key_id",
        "private_key": "private_key",
        "client_email": "client_email",
        "client_id": "client_id",
        "auth_uri": "auth_uri",
        "token_uri": "token_uri",
        "auth_provider_x509_cert_url": "auth_provider_x509_cert_url",
        "client_x509_cert_url": "client_x509_cert_url"
    }
    client = GcpCredentials(
        service_account_info=service_account_info
    ).get_job_service_client()

example_get_client_flow()
```

#### `get_secret_manager_client`

```python theme={null}
get_secret_manager_client(self) -> 'SecretManagerServiceClient'
```

Gets an authenticated Secret Manager Service client.

**Returns:**

* An authenticated Secret Manager Service client.

**Examples:**

Gets a GCP Secret Manager client from a path.

```python theme={null}
from prefect import flow
from prefect_gcp.credentials import GcpCredentials

@flow()
def example_get_client_flow():
    service_account_file = "~/.secrets/prefect-service-account.json"
    client = GcpCredentials(
        service_account_file=service_account_file
    ).get_secret_manager_client()

example_get_client_flow()
```

Gets a GCP Secret Manager client from a dictionary.
```python theme={null} from prefect import flow from prefect_gcp.credentials import GcpCredentials @flow() def example_get_client_flow(): service_account_info = { "type": "service_account", "project_id": "project_id", "private_key_id": "private_key_id", "private_key": "private_key", "client_email": "client_email", "client_id": "client_id", "auth_uri": "auth_uri", "token_uri": "token_uri", "auth_provider_x509_cert_url": "auth_provider_x509_cert_url", "client_x509_cert_url": "client_x509_cert_url" } client = GcpCredentials( service_account_info=service_account_info ).get_secret_manager_client() example_get_client_flow() ``` # steps Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-deployments-steps # `prefect_gcp.deployments.steps` Prefect deployment steps for code storage in and retrieval from Google Cloud Storage. ## Functions ### `push_to_gcs` ```python theme={null} push_to_gcs(bucket: str, folder: str, project: Optional[str] = None, credentials: Optional[Dict] = None, ignore_file = '.prefectignore') -> PushToGcsOutput ``` Pushes the contents of the current working directory to a GCS bucket, excluding files and folders specified in the ignore\_file. **Args:** * `bucket`: The name of the GCS bucket where files will be uploaded. * `folder`: The folder in the GCS bucket where files will be uploaded. * `project`: The GCP project the bucket belongs to. If not provided, the project will be inferred from the credentials or the local environment. * `credentials`: A dictionary containing the service account information and project used for authentication. If not provided, the application default credentials will be used. * `ignore_file`: The name of the file containing ignore patterns. **Returns:** * A dictionary containing the bucket and folder where files were uploaded. 
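The `ignore_file` works like a `.gitignore` for the upload: files matching its patterns are skipped. A rough sketch of that filtering is below; the `filter_ignored` helper is hypothetical and supports only simple glob patterns, not the full ignore syntax the step uses.

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath


def filter_ignored(paths, patterns):
    """Drop any path matching an ignore pattern (illustrative only).

    The real step relies on Prefect's ignore-file handling, which
    supports .gitignore-style syntax; fnmatch covers simple globs.
    """
    def ignored(path):
        # Match the full path, and also each path component, so a
        # pattern like ".git" excludes everything under that folder.
        parts = PurePosixPath(path).parts
        return any(
            fnmatch(path, pattern) or any(fnmatch(part, pattern) for part in parts)
            for pattern in patterns
        )
    return [p for p in paths if not ignored(p)]


files = ["flows/etl.py", "data/cache.tmp", ".git/config", "README.md"]
kept = filter_ignored(files, ["*.tmp", ".git"])
```

Here `kept` retains only the files that match no ignore pattern, mirroring what ends up in the bucket.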
**Examples:**

Push to a GCS bucket:

```yaml theme={null}
build:
  - prefect_gcp.deployments.steps.push_to_gcs:
      requires: prefect-gcp
      bucket: my-bucket
      folder: my-project
```

Push to a GCS bucket using credentials stored in a block:

```yaml theme={null}
build:
  - prefect_gcp.deployments.steps.push_to_gcs:
      requires: prefect-gcp
      bucket: my-bucket
      folder: my-folder
      credentials: "{{ prefect.blocks.gcp-credentials.dev-credentials }}"
```

Push to a GCS bucket using credentials stored in a service account file:

```yaml theme={null}
build:
  - prefect_gcp.deployments.steps.push_to_gcs:
      requires: prefect-gcp
      bucket: my-bucket
      folder: my-folder
      credentials:
        project: my-project
        service_account_file: /path/to/service_account.json
```

### `pull_from_gcs`

```python theme={null}
pull_from_gcs(bucket: str, folder: str, project: Optional[str] = None, credentials: Optional[Dict] = None) -> PullFromGcsOutput
```

Pulls the contents of a project from a GCS bucket to the current working directory.

**Args:**

* `bucket`: The name of the GCS bucket where files are stored.
* `folder`: The folder in the GCS bucket where files are stored.
* `project`: The GCP project the bucket belongs to. If not provided, the project will be inferred from the credentials or the local environment.
* `credentials`: A dictionary containing the service account information and project used for authentication. If not provided, the application default credentials will be used.

**Returns:**

* A dictionary containing the bucket, folder, and local directory where files were downloaded.
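Each pulled object key is written to the working directory at a path relative to `folder`. A minimal sketch of that key-to-path mapping is below; the `local_destination` helper is illustrative, not part of `prefect_gcp`.

```python
from pathlib import Path, PurePosixPath


def local_destination(blob_name, folder, target_dir="."):
    """Map a GCS object key to the local path it is written to.

    Illustrative only: the actual step also handles edge cases such
    as trailing slashes and empty folders, which this sketch omits.
    """
    # GCS keys always use forward slashes, so parse them as POSIX
    # paths, then rebuild a native local path under target_dir.
    relative = PurePosixPath(blob_name).relative_to(folder)
    return Path(target_dir).joinpath(*relative.parts)


dest = local_destination("my-folder/flows/etl.py", "my-folder")
```

So an object at `my-folder/flows/etl.py` lands at `flows/etl.py` relative to the current working directory.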
**Examples:**

Pull from GCS using the default environment credentials:

```yaml theme={null}
pull:
  - prefect_gcp.deployments.steps.pull_from_gcs:
      requires: prefect-gcp
      bucket: my-bucket
      folder: my-folder
```

Pull from GCS using credentials stored in a block:

```yaml theme={null}
pull:
  - prefect_gcp.deployments.steps.pull_from_gcs:
      requires: prefect-gcp
      bucket: my-bucket
      folder: my-folder
      credentials: "{{ prefect.blocks.gcp-credentials.dev-credentials }}"
```

Pull from a GCS bucket using credentials stored in a service account file:

```yaml theme={null}
pull:
  - prefect_gcp.deployments.steps.pull_from_gcs:
      requires: prefect-gcp
      bucket: my-bucket
      folder: my-folder
      credentials:
        project: my-project
        service_account_file: /path/to/service_account.json
```

## Classes

### `PushToGcsOutput`

The output of the `push_to_gcs` step.

### `PullFromGcsOutput`

The output of the `pull_from_gcs` step.

# execute

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-experimental-bundles-execute

# `prefect_gcp.experimental.bundles.execute`

## Functions

### `execute_bundle_from_gcs`

```python theme={null}
execute_bundle_from_gcs(bucket: str, key: str, gcp_credentials_block_name: Optional[str] = None) -> None
```

# upload

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-experimental-bundles-upload

# `prefect_gcp.experimental.bundles.upload`

## Functions

### `upload_bundle_to_gcs`

```python theme={null}
upload_bundle_to_gcs(local_filepath: Path, bucket: str, key: str, gcp_credentials_block_name: str | None = None) -> UploadBundleToGcsOutput
```

Uploads a bundle file to a GCS bucket.

**Args:**

* `local_filepath`: The path to the bundle file to upload.
* `bucket`: The name of the GCS bucket to upload the bundle to.
* `key`: The key (path) to upload the bundle to in the GCS bucket.
* `gcp_credentials_block_name`: The name of the GCP credentials block to use.

**Returns:**

* A dictionary containing the bucket and key of the uploaded bundle.
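The documented return value pairs the bucket with the uploaded key. A hedged sketch of that shape is below; `describe_bundle_upload` and its `uri` field are illustrative assumptions (the documented fields are just the bucket and key), with the `gs://` URI shown only because that is the standard GCS addressing scheme.

```python
def describe_bundle_upload(bucket, key):
    # Illustrative: mirrors the documented return value (bucket + key)
    # and adds a gs:// URI for convenience. Field names are assumptions,
    # not the step's exact output schema.
    return {"bucket": bucket, "key": key, "uri": f"gs://{bucket}/{key}"}


result = describe_bundle_upload("my-bucket", "bundles/flow-bundle.json")
```

The bucket/key pair is what `execute_bundle_from_gcs` needs to locate and run the bundle later.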
## Classes

### `UploadBundleToGcsOutput`

The output of the `upload_bundle_to_gcs` step.

# decorators

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-experimental-decorators

# `prefect_gcp.experimental.decorators`

## Functions

### `cloud_run`

```python theme={null}
cloud_run(work_pool: str, **job_variables: Any) -> Callable[[Flow[P, R]], InfrastructureBoundFlow[P, R]]
```

Decorator that binds execution of a flow to a Cloud Run V2 work pool.

**Args:**

* `work_pool`: The name of the Cloud Run V2 work pool to use.
* `**job_variables`: Additional job variables to use for infrastructure configuration.

### `vertex_ai`

```python theme={null}
vertex_ai(work_pool: str, **job_variables: Any) -> Callable[[Flow[P, R]], InfrastructureBoundFlow[P, R]]
```

Decorator that binds execution of a flow to a Vertex AI work pool.

**Args:**

* `work_pool`: The name of the Vertex AI work pool to use.
* `**job_variables`: Additional job variables to use for infrastructure configuration.

# cloud_run_v2

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-models-cloud_run_v2

# `prefect_gcp.models.cloud_run_v2`

## Classes

### `SecretKeySelector`

SecretKeySelector is a data model for specifying a GCP secret to inject into a Cloud Run V2 Job as an environment variable. Follows the Cloud Run V2 REST API; docs: [https://cloud.google.com/run/docs/reference/rest/v2/Container#SecretKeySelector](https://cloud.google.com/run/docs/reference/rest/v2/Container#SecretKeySelector)

### `JobV2`

JobV2 is a data model for a job that will be run on Cloud Run with the V2 API.

**Methods:**

#### `create`

```python theme={null}
create(cr_client: Resource, project: str, location: str, job_id: str, body: Dict) -> Dict
```

Create a job on Cloud Run with the V2 API.

**Args:**

* `cr_client`: The base client needed for interacting with GCP Cloud Run V2 API.
* `project`: The GCP project ID.
* `location`: The GCP region.
* `job_id`: The ID of the job to create.
* `body`: The job body. **Returns:** * The response from the Cloud Run V2 API. #### `delete` ```python theme={null} delete(cr_client: Resource, project: str, location: str, job_name: str) -> Dict ``` Delete a job on Cloud Run with the V2 API. **Args:** * `cr_client`: The base client needed for interacting with GCP Cloud Run V2 API. * `project`: The GCP project ID. * `location`: The GCP region. * `job_name`: The name of the job to delete. **Returns:** * The response from the Cloud Run V2 API. #### `get` ```python theme={null} get(cls, cr_client: Resource, project: str, location: str, job_name: str) ``` Get a job from Cloud Run with the V2 API. **Args:** * `cr_client`: The base client needed for interacting with GCP Cloud Run V2 API. * `project`: The GCP project ID. * `location`: The GCP region. * `job_name`: The name of the job to get. #### `get_ready_condition` ```python theme={null} get_ready_condition(self) -> Dict ``` Get the ready condition for the job. **Returns:** * The ready condition for the job. #### `is_ready` ```python theme={null} is_ready(self) -> bool ``` Check if the job is ready to run. **Returns:** * Whether the job is ready to run. #### `run` ```python theme={null} run(cr_client: Resource, project: str, location: str, job_name: str) ``` Run a job on Cloud Run with the V2 API. **Args:** * `cr_client`: The base client needed for interacting with GCP Cloud Run V2 API. * `project`: The GCP project ID. * `location`: The GCP region. * `job_name`: The name of the job to run. ### `ExecutionV2` ExecutionV2 is a data model for an execution of a job that will be run on Cloud Run API v2. **Methods:** #### `condition_after_completion` ```python theme={null} condition_after_completion(self) -> Dict ``` Return the condition after completion. **Returns:** * The condition after completion. #### `get` ```python theme={null} get(cls, cr_client: Resource, execution_id: str) ``` Get an execution from Cloud Run with the V2 API. 
**Args:**

* `cr_client`: The base client needed for interacting with GCP Cloud Run V2 API.
* `execution_id`: The name of the execution to get, in the form `projects/{project}/locations/{location}/jobs/{job}/executions/{execution}`.

#### `is_running`

```python theme={null}
is_running(self) -> bool
```

Return whether the execution is running.

**Returns:**

* Whether the execution is running.

#### `succeeded`

```python theme={null}
succeeded(self)
```

Whether or not the Execution completed in a successful state.

# secret_manager

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-secret_manager

# `prefect_gcp.secret_manager`

## Functions

### `acreate_secret`

```python theme={null}
acreate_secret(secret_name: str, gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str
```

Creates a secret in Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret to create.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the created secret.

### `create_secret`

```python theme={null}
create_secret(secret_name: str, gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str
```

Creates a secret in Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret to create.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the created secret.
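Each of the functions in this module returns a Secret Manager resource path. Those paths follow GCP's fixed naming scheme, sketched below; `secret_path` is an illustrative helper, not part of `prefect_gcp`.

```python
def secret_path(project, secret_name, version=None):
    """Build a Secret Manager resource name.

    Secret-level operations use projects/{project}/secrets/{secret};
    version-level operations (read, delete_secret_version) append
    /versions/{version}, where version is a number or "latest".
    """
    path = f"projects/{project}/secrets/{secret_name}"
    if version is not None:
        path = f"{path}/versions/{version}"
    return path


created = secret_path("my-project", "db-password")
latest = secret_path("my-project", "db-password", "latest")
```

For example, `create_secret("db-password", creds, project="my-project")` would return a path shaped like `created` above.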
### `aupdate_secret`

```python theme={null}
aupdate_secret(secret_name: str, secret_value: Union[str, bytes], gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str
```

Updates a secret in Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret to update.
* `secret_value`: Desired value of the secret. Can be either `str` or `bytes`.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the updated secret.

### `update_secret`

```python theme={null}
update_secret(secret_name: str, secret_value: Union[str, bytes], gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str
```

Updates a secret in Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret to update.
* `secret_value`: Desired value of the secret. Can be either `str` or `bytes`.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the updated secret.

### `aread_secret`

```python theme={null}
aread_secret(secret_name: str, gcp_credentials: 'GcpCredentials', version_id: Union[str, int] = 'latest', timeout: float = 60, project: Optional[str] = None) -> str
```

Reads the value of a given secret from Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret to retrieve.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `version_id`: Version number of the secret to use, or "latest".
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided. **Returns:** * Contents of the specified secret. ### `read_secret` ```python theme={null} read_secret(secret_name: str, gcp_credentials: 'GcpCredentials', version_id: Union[str, int] = 'latest', timeout: float = 60, project: Optional[str] = None) -> str ``` Reads the value of a given secret from Google Cloud Platform's Secret Manager. **Args:** * `secret_name`: Name of the secret to retrieve. * `gcp_credentials`: Credentials to use for authentication with GCP. * `version_id`: Version number of the secret to use, or "latest". * `timeout`: The number of seconds the transport should wait for the server response. * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. **Returns:** * Contents of the specified secret. ### `adelete_secret` ```python theme={null} adelete_secret(secret_name: str, gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str ``` Deletes the specified secret from Google Cloud Platform's Secret Manager. **Args:** * `secret_name`: Name of the secret to delete. * `gcp_credentials`: Credentials to use for authentication with GCP. * `timeout`: The number of seconds the transport should wait for the server response. * `project`: Name of the project to use; overrides the gcp\_credentials project if provided. **Returns:** * The path of the deleted secret. ### `delete_secret` ```python theme={null} delete_secret(secret_name: str, gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str ``` Deletes the specified secret from Google Cloud Platform's Secret Manager. **Args:** * `secret_name`: Name of the secret to delete. * `gcp_credentials`: Credentials to use for authentication with GCP. * `timeout`: The number of seconds the transport should wait for the server response. 
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the deleted secret.

### `adelete_secret_version`

```python theme={null}
adelete_secret_version(secret_name: str, version_id: int, gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str
```

Deletes a version of a given secret from Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret whose version will be deleted.
* `version_id`: Version number of the secret to delete; "latest" can NOT be used.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the deleted secret version.

### `delete_secret_version`

```python theme={null}
delete_secret_version(secret_name: str, version_id: int, gcp_credentials: 'GcpCredentials', timeout: float = 60, project: Optional[str] = None) -> str
```

Deletes a version of a given secret from Google Cloud Platform's Secret Manager.

**Args:**

* `secret_name`: Name of the secret whose version will be deleted.
* `version_id`: Version number of the secret to delete; "latest" can NOT be used.
* `gcp_credentials`: Credentials to use for authentication with GCP.
* `timeout`: The number of seconds the transport should wait for the server response.
* `project`: Name of the project to use; overrides the gcp\_credentials project if provided.

**Returns:**

* The path of the deleted secret version.

## Classes

### `GcpSecret`

Manages a secret in Google Cloud Platform's Secret Manager.

**Attributes:**

* `gcp_credentials`: Credentials to use for authentication with GCP.
* `secret_name`: Name of the secret to manage.
* `secret_version`: Version number of the secret to use, or "latest".
**Methods:** #### `adelete_secret` ```python theme={null} adelete_secret(self) -> str ``` Deletes the secret from the secret storage service (async version). **Returns:** * The path that the secret was deleted from. #### `aread_secret` ```python theme={null} aread_secret(self) -> bytes ``` Reads the secret data from the secret storage service (async version). **Returns:** * The secret data as bytes. #### `awrite_secret` ```python theme={null} awrite_secret(self, secret_data: bytes) -> str ``` Writes the secret data to the secret storage service (async version); if it doesn't exist it will be created. **Args:** * `secret_data`: The secret to write. **Returns:** * The path that the secret was written to. #### `delete_secret` ```python theme={null} delete_secret(self) -> str ``` Deletes the secret from the secret storage service. **Returns:** * The path that the secret was deleted from. #### `read_secret` ```python theme={null} read_secret(self) -> bytes ``` Reads the secret data from the secret storage service. **Returns:** * The secret data as bytes. #### `write_secret` ```python theme={null} write_secret(self, secret_data: bytes) -> str ``` Writes the secret data to the secret storage service; if it doesn't exist it will be created. **Args:** * `secret_data`: The secret to write. **Returns:** * The path that the secret was written to. # utilities Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-utilities # `prefect_gcp.utilities` ## Functions ### `slugify_name` ```python theme={null} slugify_name(name: str, max_length: int = 30) -> Optional[str] ``` Slugify text for use as a name. Keeps only alphanumeric characters and dashes, and caps the length of the slug at 30 chars. The 30 character length allows room to add a uuid for generating a unique name for the job while keeping the total length of a name below 63 characters, which is the limit for Cloud Run job names. 
**Args:**

* `name`: The name of the job

**Returns:**

* The slugified job name or None if the slugified name is empty

## Classes

### `Job`

Utility class to call GCP `jobs` API and interact with the returned objects.

**Methods:**

#### `create`

```python theme={null}
create(client: Resource, namespace: str, body: dict)
```

Make a create request to the GCP jobs API.

#### `delete`

```python theme={null}
delete(client: Resource, namespace: str, job_name: str)
```

Make a delete request to the GCP jobs API.

#### `get`

```python theme={null}
get(cls, client: Resource, namespace: str, job_name: str)
```

Make a get request to the GCP jobs API and return a Job instance.

#### `has_execution_in_progress`

```python theme={null}
has_execution_in_progress(self) -> bool
```

See if the job has a run in progress.

#### `is_ready`

```python theme={null}
is_ready(self) -> bool
```

Whether a job is finished registering and ready to be executed.

#### `run`

```python theme={null}
run(client: Resource, namespace: str, job_name: str)
```

Make a run request to the GCP jobs API.

### `Execution`

Utility class to call GCP `executions` API and interact with the returned objects.

**Methods:**

#### `condition_after_completion`

```python theme={null}
condition_after_completion(self)
```

Returns the Execution condition if the Execution has completed.

#### `get`

```python theme={null}
get(cls, client: Resource, namespace: str, execution_name: str)
```

Make a get request to the GCP executions API and return an Execution instance.

#### `is_running`

```python theme={null}
is_running(self) -> bool
```

Returns True if the Execution is not completed.

#### `succeeded`

```python theme={null}
succeeded(self)
```

Whether or not the Execution completed in a successful state.

# cloud_run

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-workers-cloud_run

# `prefect_gcp.workers.cloud_run`

Module containing the Cloud Run worker used for executing flow runs as Cloud Run jobs.
Get started by creating a Cloud Run work pool:

```bash theme={null}
prefect work-pool create 'my-cloud-run-pool' --type cloud-run
```

Then start a Cloud Run worker with the following command:

```bash theme={null}
prefect worker start --pool 'my-cloud-run-pool'
```

## Configuration

Read more about configuring work pools [here](https://docs.prefect.io/3.0/deploy/infrastructure-concepts/work-pools).

## Advanced Configuration

!!! example "Using a custom Cloud Run job template"

Below is the default job body template used by the Cloud Run Worker:

```json theme={null}
{
  "apiVersion": "run.googleapis.com/v1",
  "kind": "Job",
  "metadata": {
    "name": "{{ name }}",
    "annotations": {
      "run.googleapis.com/launch-stage": "BETA"
    }
  },
  "spec": {
    "template": {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "image": "{{ image }}",
                "args": "{{ args }}",
                "resources": {
                  "limits": {
                    "cpu": "{{ cpu }}",
                    "memory": "{{ memory }}"
                  },
                  "requests": {
                    "cpu": "{{ cpu }}",
                    "memory": "{{ memory }}"
                  }
                }
              }
            ],
            "timeoutSeconds": "{{ timeout }}",
            "serviceAccountName": "{{ service_account_name }}"
          }
        }
      },
      "metadata": {
        "annotations": {
          "run.googleapis.com/vpc-access-connector": "{{ vpc_connector_name }}"
        }
      }
    }
  },
  "keep_job": "{{ keep_job }}"
}
```

Each value enclosed in `{{ }}` is a placeholder that will be replaced with a value at runtime on a per-deployment basis. The values that can be used as placeholders are defined by the `variables` schema defined in the base job template.

The default job body template and available variables can be customized on a work pool by work pool basis.
By editing the default job body template you can:

* Add additional placeholders to the default job template
* Remove placeholders from the default job template
* Pass values to Cloud Run that are not defined in the `variables` schema

### Adding additional placeholders

For example, to allow for extra customization with a new annotation not described in the default job template, you can add the following:

```json theme={null}
{
  "apiVersion": "run.googleapis.com/v1",
  "kind": "Job",
  "metadata": {
    "name": "{{ name }}",
    "annotations": {
      "run.googleapis.com/my-custom-annotation": "{{ my_custom_annotation }}",
      "run.googleapis.com/launch-stage": "BETA"
    },
    ...
  },
  ...
}
```

`my_custom_annotation` can now be used as a placeholder in the job template and set on a per-deployment basis.

```yaml theme={null}
# prefect.yaml
deployments:
  ...
  - name: my-deployment
    ...
    work_pool:
      name: my-cloud-run-pool
      job_variables: {"my_custom_annotation": "my-custom-value"}
```

Additionally, fields can be set to prevent configuration at the deployment level. For example, to configure the `vpc_connector_name` field, the placeholder can be removed and replaced with an actual value. Now all deployments that point to this work pool will use the same `vpc_connector_name` value.

```json theme={null}
{
  "apiVersion": "run.googleapis.com/v1",
  "kind": "Job",
  "spec": {
    "template": {
      "metadata": {
        "annotations": {
          "run.googleapis.com/vpc-access-connector": "my-vpc-connector"
        }
      },
      ...
    },
    ...
  }
}
```

## Classes

### `CloudRunWorkerJobConfiguration`

Configuration class used by the Cloud Run Worker to create a Cloud Run Job.

An instance of this class is passed to the Cloud Run worker's `run` method for each flow run. It contains all information necessary to execute the flow run as a Cloud Run Job.

**Attributes:**

* `region`: The region where the Cloud Run Job resides.
* `credentials`: The GCP Credentials used to connect to Cloud Run.
* `job_body`: The job body used to create the Cloud Run Job.
* `timeout`: Max allowed duration the job may be active before Cloud Run will actively try to mark it failed and kill associated containers (maximum of 3600 seconds, 1 hour).
* `keep_job`: Whether to keep the Cloud Run Job after it completes instead of deleting it.
* `prefect_api_key_secret`: A GCP secret containing a Prefect API Key.
* `prefect_api_auth_string_secret`: A GCP secret containing a Prefect API authorization string.

**Methods:**

#### `job_name`

```python theme={null}
job_name(self) -> str
```

Property for accessing the name from the job metadata.

#### `prepare_for_flow_run`

```python theme={null}
prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: Optional['DeploymentResponse'] = None, flow: Optional['Flow'] = None, work_pool: Optional['WorkPool'] = None, worker_name: Optional[str] = None, worker_id: Optional['UUID'] = None)
```

Prepares the job configuration for a flow run.

Ensures that necessary values are present in the job body and that the job body is valid.

**Args:**

* `flow_run`: The flow run to prepare the job configuration for
* `deployment`: The deployment associated with the flow run used for preparation.
* `flow`: The flow associated with the flow run used for preparation.

#### `project`

```python theme={null}
project(self) -> str
```

Property for accessing the project from the credentials.

### `CloudRunWorkerVariables`

Default variables for the Cloud Run worker.

The schema for this class is used to populate the `variables` section of the default base job template.

### `CloudRunWorkerResult`

Contains information about the final state of a completed process.

### `CloudRunWorker`

Prefect worker that executes flow runs within Cloud Run Jobs.

**Methods:**

#### `kill_infrastructure`

```python theme={null}
kill_infrastructure(self, infrastructure_pid: str, configuration: CloudRunWorkerJobConfiguration, grace_seconds: int = 30) -> None
```

Kill a Cloud Run Job by deleting it.

**Args:**

* `infrastructure_pid`: The job name.
* `configuration`: The job configuration used to connect to GCP. * `grace_seconds`: Not used for Cloud Run (GCP handles graceful shutdown). **Raises:** * `InfrastructureNotFound`: If the job doesn't exist. #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: CloudRunWorkerJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None) -> CloudRunWorkerResult ``` Executes a flow run within a Cloud Run Job and waits for the flow run to complete. **Args:** * `flow_run`: The flow run to execute * `configuration`: The configuration to use when executing the flow run. * `task_status`: The task status object for the current flow run. If provided, the task will be marked as started. **Returns:** * A result object containing information about the final state of the flow run # cloud_run_v2 Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-workers-cloud_run_v2 # `prefect_gcp.workers.cloud_run_v2` ## Classes ### `CloudRunWorkerJobV2Configuration` The configuration for the Cloud Run worker V2. The schema for this class is used to populate the `job_body` section of the default base job template. **Methods:** #### `job_name` ```python theme={null} job_name(self) -> str ``` Returns the name of the job. **Returns:** * The name of the job. #### `prepare_for_flow_run` ```python theme={null} prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: Optional['DeploymentResponse'] = None, flow: Optional['Flow'] = None, work_pool: Optional['WorkPool'] = None, worker_name: Optional[str] = None, worker_id: Optional['UUID'] = None) ``` Prepares the job configuration for a flow run. Ensures that necessary values are present in the job body and that the job body is valid. **Args:** * `flow_run`: The flow run to prepare the job configuration for * `deployment`: The deployment associated with the flow run used for preparation. * `flow`: The flow associated with the flow run used for preparation. 
* `work_pool`: The work pool associated with the flow run used for preparation.
* `worker_name`: The worker name associated with the flow run used for preparation.

#### `project`

```python theme={null}
project(self) -> str
```

Returns the GCP project associated with the credentials.

**Returns:**

* The GCP project associated with the credentials.

### `CloudRunWorkerV2Variables`

Default variables for the v2 Cloud Run worker.

The schema for this class is used to populate the `variables` section of the default base job template.

### `CloudRunWorkerV2Result`

The result of a Cloud Run worker V2 job.

### `CloudRunWorkerV2`

The Cloud Run worker V2.

**Methods:**

#### `kill_infrastructure`

```python theme={null}
kill_infrastructure(self, infrastructure_pid: str, configuration: CloudRunWorkerJobV2Configuration, grace_seconds: int = 30) -> None
```

Kill a Cloud Run V2 Job by deleting it.

**Args:**

* `infrastructure_pid`: The job name.
* `configuration`: The job configuration used to connect to GCP.
* `grace_seconds`: Not used for Cloud Run V2 (GCP handles graceful shutdown).

**Raises:**

* `InfrastructureNotFound`: If the job doesn't exist.

#### `run`

```python theme={null}
run(self, flow_run: 'FlowRun', configuration: CloudRunWorkerJobV2Configuration, task_status: Optional[TaskStatus] = None) -> CloudRunWorkerV2Result
```

Runs the flow run on Cloud Run and waits for it to complete.

**Args:**

* `flow_run`: The flow run to run.
* `configuration`: The configuration for the job.
* `task_status`: The task status to update.

**Returns:**

* The result of the job.

# vertex

Source: https://docs.prefect.io/integrations/prefect-gcp/api-ref/prefect_gcp-workers-vertex

# `prefect_gcp.workers.vertex`

Module containing the custom worker used for executing flow runs as Vertex AI Custom Jobs.
Get started by creating a Vertex AI work pool:

```bash theme={null}
prefect work-pool create 'my-vertex-pool' --type vertex-ai
```

Then start a Vertex AI worker with the following command:

```bash theme={null}
prefect worker start --pool 'my-vertex-pool'
```

## Configuration

Read more about configuring work pools [here](https://docs.prefect.io/3.0/deploy/infrastructure-concepts/work-pools).

## Classes

### `VertexAIWorkerVariables`

Default variables for the Vertex AI worker.

The schema for this class is used to populate the `variables` section of the default base job template.

### `VertexAIWorkerJobConfiguration`

Configuration class used by the Vertex AI Worker to create a Job.

An instance of this class is passed to the Vertex AI Worker's `run` method for each flow run. It contains all information necessary to execute the flow run as a Vertex AI Job.

**Attributes:**

* `region`: The region where the Vertex AI Job resides.
* `credentials`: The GCP Credentials used to connect to Vertex AI.
* `job_spec`: The Vertex AI Job spec used to create the Job.
* `job_watch_poll_interval`: The interval between GCP API calls to check Job state.

**Methods:**

#### `job_name`

```python theme={null}
job_name(self) -> str
```

The name can be up to 128 characters long and can consist of any UTF-8 characters.
Reference: [https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google\_cloud\_aiplatform\_CustomJob\_display\_name](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.CustomJob#google_cloud_aiplatform_CustomJob_display_name)

#### `prepare_for_flow_run`

```python theme={null}
prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: Optional['DeploymentResponse'] = None, flow: Optional['Flow'] = None, work_pool: Optional['WorkPool'] = None, worker_name: Optional[str] = None, worker_id: Optional['UUID'] = None)
```

#### `project`

```python theme={null}
project(self) -> str
```

Property for accessing the project from the credentials.

### `VertexAIWorkerResult`

Contains information about the final state of a completed process.

### `VertexAIWorker`

Prefect worker that executes flow runs within Vertex AI Jobs.

**Methods:**

#### `kill_infrastructure`

```python theme={null}
kill_infrastructure(self, infrastructure_pid: str, configuration: VertexAIWorkerJobConfiguration, grace_seconds: int = 30) -> None
```

Kill a Vertex AI Custom Job by cancelling it.

**Args:**

* `infrastructure_pid`: The full job name (e.g., "projects/123/locations/us-central1/customJobs/456").
* `configuration`: The job configuration used to connect to GCP.
* `grace_seconds`: Not used for Vertex AI (GCP handles graceful shutdown).

**Raises:**

* `InfrastructureNotFound`: If the job doesn't exist.

#### `run`

```python theme={null}
run(self, flow_run: 'FlowRun', configuration: VertexAIWorkerJobConfiguration, task_status: Optional[anyio.abc.TaskStatus] = None) -> VertexAIWorkerResult
```

Executes a flow run within a Vertex AI Job and waits for the flow run to complete.

**Args:**

* `flow_run`: The flow run to execute
* `configuration`: The configuration to use when executing the flow run.
* `task_status`: The task status object for the current flow run. If provided, the task will be marked as started.
**Returns:** * A result object containing information about the final state of the flow run # Google Cloud Run Worker Guide Source: https://docs.prefect.io/integrations/prefect-gcp/gcp-worker-guide ## Why use Google Cloud Run for flow run execution? Google Cloud Run is a fully managed compute platform that automatically scales your containerized applications. 1. Serverless architecture: Cloud Run follows a serverless architecture, which means you don't need to manage any underlying infrastructure. Google Cloud Run automatically handles the scaling and availability of your flow run infrastructure, allowing you to focus on developing and deploying your code. 2. Scalability: Cloud Run can automatically scale your pipeline to handle varying workloads and traffic. It can quickly respond to increased demand and scale back down during low activity periods, ensuring efficient resource utilization. 3. Integration with Google Cloud services: Google Cloud Run easily integrates with other Google Cloud services, such as Google Cloud Storage, Google Cloud Pub/Sub, and Google Cloud Build. This interoperability enables you to build end-to-end data pipelines that use a variety of services. 4. Portability: Since Cloud Run uses container images, you can develop your pipelines locally using Docker and then deploy them on Google Cloud Run without significant modifications. This portability allows you to run the same pipeline in different environments. ## Google Cloud Run guide After completing this guide, you will have: 1. Created a Google Cloud Service Account 2. Created a Prefect Work Pool 3. Deployed a Prefect Worker as a Cloud Run Service 4. Deployed a Flow 5. Executed the Flow as a Google Cloud Run Job ### Prerequisites Before starting this guide, make sure you have: * A [Google Cloud Platform (GCP) account](https://cloud.google.com/gcp). * A project on your GCP account where you have the necessary permissions to create Cloud Run Services and Service Accounts. 
* The `gcloud` CLI installed on your local machine. You can follow Google Cloud's [installation guide](https://cloud.google.com/sdk/docs/install). If you're using macOS (or a Linux system), you can also use [Homebrew](https://formulae.brew.sh/cask/google-cloud-sdk) for installation.
* [Docker](https://www.docker.com/get-started/) installed on your local machine.
* A Prefect server instance. You can sign up for a forever free [Prefect Cloud Account](https://app.prefect.cloud/) or, alternatively, self-host a Prefect server.

### Step 1. Create a Google Cloud service account

First, open a terminal or command prompt on your local machine where `gcloud` is installed. If you haven't already authenticated with `gcloud`, run the following command and follow the instructions to log in to your GCP account.

```bash theme={null}
gcloud auth login
```

Next, you'll set the project where you'd like to create the service account. Use the following command, replacing `<PROJECT-ID>` with your GCP project's ID.

```bash theme={null}
gcloud config set project <PROJECT-ID>
```

For example, if your project's ID is `prefect-project`, the command will look like this:

```bash theme={null}
gcloud config set project prefect-project
```

Now you're ready to make the service account. To do so, you'll need to run this command:

```bash theme={null}
gcloud iam service-accounts create <SERVICE-ACCOUNT-NAME> --display-name="<DISPLAY-NAME>"
```

Here's an example of the command above with the service account name and display name already provided. An additional option to describe the service account has also been added:

```bash theme={null}
gcloud iam service-accounts create prefect-service-account \
    --description="service account to use for the prefect worker" \
    --display-name="prefect-service-account"
```

The last step of this process is to make sure the service account has the proper permissions to execute flow runs as Cloud Run jobs.
Run the following commands to grant the necessary permissions, replacing `<PROJECT-ID>` and `<SERVICE-ACCOUNT-NAME>` with your own values:

```bash theme={null}
gcloud projects add-iam-policy-binding <PROJECT-ID> \
    --member="serviceAccount:<SERVICE-ACCOUNT-NAME>@<PROJECT-ID>.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountUser"
```

```bash theme={null}
gcloud projects add-iam-policy-binding <PROJECT-ID> \
    --member="serviceAccount:<SERVICE-ACCOUNT-NAME>@<PROJECT-ID>.iam.gserviceaccount.com" \
    --role="roles/run.admin"
```

### Step 2. Create a Cloud Run work pool

Let's walk through the process of creating a Cloud Run work pool.

#### Fill out the work pool base job template

You can create a new work pool using the Prefect UI or CLI. The following command creates a work pool of type `cloud-run` via the CLI (you'll want to replace `<WORK-POOL-NAME>` with the name of your work pool):

```bash theme={null}
prefect work-pool create <WORK-POOL-NAME> --type cloud-run
```

Once the work pool is created, find the work pool in the UI and edit it. There are many ways to customize the base job template for the work pool. Modifying the template influences the infrastructure configuration that the worker provisions for flow runs submitted to the work pool. For this guide, we are going to modify just a few of the available fields: specify the region for the Cloud Run job, and save the name of the service account created in the first step of this guide.

Your work pool is now ready to receive scheduled flow runs!

### Step 3. Deploy a Cloud Run worker

Now you can launch a Cloud Run service to host the Cloud Run worker. This worker will poll the work pool that you created in the previous step.

Navigate back to your terminal and run the following commands to set your Prefect API key and URL as environment variables. Be sure to replace `<ACCOUNT-ID>` and `<WORKSPACE-ID>` with your Prefect account and workspace IDs (both will be available in the URL of the UI when previewing the workspace dashboard). You'll want to replace `<API-KEY>` with an active API key as well.
```bash theme={null}
export PREFECT_API_URL='https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>'
export PREFECT_API_KEY='<API-KEY>'
```

Once those variables are set, run the following shell command to deploy your worker as a service. Don't forget to replace `<SERVICE-ACCOUNT-NAME>` with the name of the service account you created in the first step of this guide, and replace `<WORK-POOL-NAME>` with the name of the work pool you created in the second step.

```bash theme={null}
gcloud run deploy prefect-worker --image=prefecthq/prefect-gcp:latest \
    --set-env-vars PREFECT_API_URL=$PREFECT_API_URL,PREFECT_API_KEY=$PREFECT_API_KEY \
    --service-account <SERVICE-ACCOUNT-NAME> \
    --no-cpu-throttling \
    --min-instances 1 \
    --startup-probe httpGet.port=8080,httpGet.path=/health,initialDelaySeconds=100,periodSeconds=20,timeoutSeconds=20 \
    --args "prefect","worker","start","--install-policy","never","--with-healthcheck","-p","<WORK-POOL-NAME>","-t","cloud-run"
```

This example uses `prefecthq/prefect-gcp:latest`, which includes both `prefect` and `prefect-gcp` pre-installed, and sets `--install-policy never` to avoid runtime package installation. For production deployments, consider pinning to a specific version tag (e.g., `prefecthq/prefect-gcp:0.6.17-python3.12-prefect3.6.19`).

After running this command, you'll be prompted to specify a region. Choose the same region that you selected when creating the Cloud Run work pool in the second step of this guide. The next prompt will ask if you'd like to allow unauthenticated invocations to your worker. For this guide, you can select "No".

After a few seconds, you'll be able to see your new `prefect-worker` service by navigating to the Cloud Run page of your Google Cloud console. Additionally, you should be able to see a record of this worker in the Prefect UI on the work pool's page by navigating to the `Worker` tab. Let's not leave our worker hanging: it's time to give it a job.

### Step 4. Deploy a flow

Let's prepare a flow to run as a Cloud Run job.
In this section of the guide, we'll "bake" our code into a Docker image and push that image to Google Artifact Registry.

### Create a registry

Let's create a Docker repository in your Google Artifact Registry to host your custom image. If you already have a registry, and are authenticated to it, skip ahead to the *Write a flow* section.

The following command creates a repository using the gcloud CLI. You'll want to replace `<REPOSITORY-NAME>` with your own value:

```bash theme={null}
gcloud artifacts repositories create <REPOSITORY-NAME> \
    --repository-format=docker --location=us
```

Now you can authenticate to Artifact Registry:

```bash theme={null}
gcloud auth configure-docker us-docker.pkg.dev
```

### Write a flow

First, create a new directory. This will serve as the root of your project's repository. Within the directory, create a sub-directory called `flows`. Navigate to the `flows` subdirectory and create a new file for your flow. Feel free to write your own flow, but here's a ready-made one for your convenience:

```python theme={null}
import httpx
from prefect import flow, task
from prefect.artifacts import create_markdown_artifact


@task
def mark_it_down(temp):
    markdown_report = f"""# Weather Report
## Recent weather
| Time | Temperature |
| :-------- | ----------: |
| Now | {temp} |
| In 1 hour | {temp + 2} |
"""
    create_markdown_artifact(
        key="weather-report",
        markdown=markdown_report,
        description="Very scientific weather report",
    )


@flow
def fetch_weather(lat: float, lon: float):
    base_url = "https://api.open-meteo.com/v1/forecast/"
    weather = httpx.get(
        base_url,
        params=dict(latitude=lat, longitude=lon, hourly="temperature_2m"),
    )
    most_recent_temp = float(weather.json()["hourly"]["temperature_2m"][0])
    mark_it_down(most_recent_temp)


if __name__ == "__main__":
    fetch_weather(38.9, -77.0)
```

In the remainder of this guide, this script will be referred to as `weather_flow.py`, but you can name yours whatever you'd like.
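Before containerizing, you can sanity-check the flow's core logic without Prefect or network access. This sketch is illustrative only: the fake payload below is an assumed sample mimicking the shape of the Open-Meteo JSON response, and it exercises the same temperature extraction used in `fetch_weather` and the same report template used in `mark_it_down`.

```python theme={null}
# Fake payload shaped like the Open-Meteo API response (assumed sample values)
payload = {"hourly": {"temperature_2m": [21.5, 22.1, 23.0]}}

# Same extraction as in fetch_weather
most_recent_temp = float(payload["hourly"]["temperature_2m"][0])

# Same report template as in mark_it_down
markdown_report = f"""# Weather Report
## Recent weather
| Time | Temperature |
| :-------- | ----------: |
| Now | {most_recent_temp} |
| In 1 hour | {most_recent_temp + 2} |
"""
print(markdown_report)
```

If the table renders as expected here, any surprises after deployment are more likely infrastructure-related than flow-logic bugs.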
#### Creating a `prefect.yaml` file

Now we're ready to make a `prefect.yaml` file, which will be responsible for managing the deployments of this repository.

**Navigate back to the root of your directory**, and run the following command to create a `prefect.yaml` file using Prefect's docker deployment recipe.

```bash theme={null}
prefect init --recipe docker
```

You'll receive a prompt to put in values for the image name and tag. Since we will be pushing the image to Google Artifact Registry, the name of your image should be prefixed with the path to the Docker repository you created within the registry. For example: `us-docker.pkg.dev/<PROJECT-ID>/<REPOSITORY-NAME>/<IMAGE-NAME>`. You'll want to replace `<PROJECT-ID>` with the ID of your project in GCP. This should match the ID of the project you used in the first step of this guide. Here is an example of what this could look like:

```bash theme={null}
image_name: us-docker.pkg.dev/prefect-project/my-artifact-registry/gcp-weather-image
tag: latest
```

At this point, there will be a new `prefect.yaml` file available at the root of your project. The contents will look similar to the example below; however, we've added in a combination of YAML templating options and Prefect deployment actions to build out a simple CI/CD process. Feel free to copy the contents and paste them in your `prefect.yaml`:

```yaml theme={null}
# Welcome to your prefect.yaml file! You can use this file for storing and managing
# configuration for deploying your flows. We recommend committing this file to source
# control along with your flow code.
# Generic metadata about this project
name: <WORKING-DIRECTORY>
prefect-version: 3.0.0

# build section allows you to manage and build docker image
build:
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: <PATH-TO-ARTIFACT-REGISTRY>/gcp-weather-image
    tag: latest
    dockerfile: auto
    platform: linux/amd64

# push section allows you to manage if and how this project is uploaded to remote locations
push:
- prefect_docker.deployments.steps.push_docker_image:
    requires: prefect-docker>=0.3.1
    image_name: '{{ build_image.image_name }}'
    tag: '{{ build_image.tag }}'

# pull section allows you to provide instructions for cloning this project in remote locations
pull:
- prefect.deployments.steps.set_working_directory:
    directory: /opt/prefect/

# the deployments section allows you to provide configuration for deploying flows
deployments:
- name: gcp-weather-deploy
  version: null
  tags: []
  description: null
  schedule: {}
  flow_name: null
  entrypoint: flows/weather_flow.py:fetch_weather
  parameters:
    lat: 14.5994
    lon: 28.6731
  work_pool:
    name: my-cloud-run-pool
    work_queue_name: default
    job_variables:
      image: '{{ build_image.image }}'
```

After copying the example above, don't forget to replace `<WORKING-DIRECTORY>` with the name of the directory where your flow folder and `prefect.yaml` live. You'll also need to replace `<PATH-TO-ARTIFACT-REGISTRY>` with the path to the Docker repository in your Google Artifact Registry.

To get a better understanding of the different components of the `prefect.yaml` file above and what they do, feel free to read this next section. Otherwise, you can skip ahead to *Flow Deployment*.

In the `build` section of the `prefect.yaml` the following step is executed at deployment build time:

1. `prefect_docker.deployments.steps.build_docker_image`: builds a Docker image automatically which uses the name and tag chosen previously.
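Values such as `'{{ build_image.image_name }}'` in the `push` and `deployments` sections refer back to the outputs of the build step through its `id`. The lookup can be pictured roughly as follows; this is an illustration of the templating idea only, not Prefect's implementation, and the recorded outputs are assumed sample values:

```python theme={null}
import re

# Hypothetical outputs recorded for the step with id "build_image"
step_outputs = {
    "build_image": {
        "image_name": "us-docker.pkg.dev/prefect-project/my-artifact-registry/gcp-weather-image",
        "tag": "latest",
    }
}


def resolve(value: str) -> str:
    """Substitute {{ step_id.field }} references with recorded step outputs."""
    def lookup(match: re.Match) -> str:
        step_id, field = match.group(1).split(".")
        return str(step_outputs[step_id][field])

    return re.sub(r"\{\{\s*(\w+\.\w+)\s*\}\}", lookup, value)


print(resolve("{{ build_image.image_name }}:{{ build_image.tag }}"))
```

This is why the `id: build_image` key matters: it is the handle later sections use to reference the built image.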
If you are using an ARM-based chip (such as an M1 or M2 Mac), you'll want to ensure that you add `platform: linux/amd64` to your `build_docker_image` step to ensure that your Docker image uses the AMD64 architecture. For example:

```yaml theme={null}
- prefect_docker.deployments.steps.build_docker_image:
    id: build_image
    requires: prefect-docker>=0.3.1
    image_name: us-docker.pkg.dev/prefect-project/my-docker-repository/gcp-weather-image
    tag: latest
    dockerfile: auto
    platform: linux/amd64
```

The `push` section sends the Docker image to the Docker repository in your Google Artifact Registry, so that it can be easily accessed by the worker for flow run execution.

The `pull` section sets the working directory for the process prior to importing your flow.

In the `deployments` section of the `prefect.yaml` file above, you'll see that there is a deployment declaration named `gcp-weather-deploy`. Within the declaration, the entrypoint for the flow is specified along with some default parameters which will be passed to the flow at runtime. Last but not least, the name of the work pool that we created in step 2 of this guide is specified.

#### Flow deployment

Once you're happy with the specifications in the `prefect.yaml` file, run the following command in the terminal to deploy your flow:

```bash theme={null}
prefect deploy --name gcp-weather-deploy
```

Once the flow is deployed to Prefect Cloud or your local Prefect Server, it's time to queue up a flow run!

### Step 5. Flow execution

Find your deployment in the UI, and hit the *Quick Run* button. You have now successfully submitted a flow run to your Cloud Run worker! If you used the flow script provided in this guide, check the *Artifacts* tab for the flow run once it completes. You'll have a nice little weather report waiting for you there. Hope your day is a sunny one!

### Recap and next steps

Congratulations on completing this guide! Looking back on our journey, you have:

1. Created a Google Cloud service account
2.
Created a Cloud Run work pool
3. Deployed a Cloud Run worker
4. Deployed a flow
5. Executed a flow

For next steps, take a look at some of the other [work pools](/v3/how-to-guides/deployment_infra/serverless) Prefect has to offer. The world is your oyster 🦪✨.

# prefect-gcp

Source: https://docs.prefect.io/integrations/prefect-gcp/index

`prefect-gcp` helps you leverage the capabilities of Google Cloud Platform (GCP) in your workflows. For example, you can run flows on Vertex AI or Cloud Run, read and write data to BigQuery and Cloud Storage, and retrieve secrets with Secret Manager.

## Getting started

### Prerequisites

* A [GCP account](https://cloud.google.com/) and the necessary permissions to access desired services.

### Install `prefect-gcp`

Install `prefect-gcp` as an extra of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well.

```bash pip theme={null}
pip install -U "prefect[gcp]"
```

```bash uv theme={null}
uv pip install -U "prefect[gcp]"
```

If using BigQuery, Cloud Storage, Secret Manager, or Vertex AI, see [additional installation options](#install-extras).
#### Install extras

To install `prefect-gcp` with all additional capabilities, run the install command above and then run the following command:

```bash pip theme={null}
pip install -U "prefect-gcp[all_extras]"
```

```bash uv theme={null}
uv pip install -U "prefect-gcp[all_extras]"
```

Or, install extras individually:

```bash pip theme={null}
# Use Cloud Storage
pip install -U "prefect-gcp[cloud_storage]"

# Use BigQuery
pip install -U "prefect-gcp[bigquery]"

# Use Secret Manager
pip install -U "prefect-gcp[secret_manager]"

# Use Vertex AI
pip install -U "prefect-gcp[aiplatform]"
```

```bash uv theme={null}
# Use Cloud Storage
uv pip install -U "prefect-gcp[cloud_storage]"

# Use BigQuery
uv pip install -U "prefect-gcp[bigquery]"

# Use Secret Manager
uv pip install -U "prefect-gcp[secret_manager]"

# Use Vertex AI
uv pip install -U "prefect-gcp[aiplatform]"
```

### Register newly installed block types

Register the block types in the module to make them available for use.

```bash theme={null}
prefect block register -m prefect_gcp
```

## Blocks setup

### Credentials

Authenticate with a service account to use `prefect-gcp` services.

1. Refer to the [GCP service account documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#creating) to create and download a service account key file.
2. Copy the JSON contents.
3. Use the Python code below, replacing the placeholders with your information.
```python theme={null} from prefect_gcp import GcpCredentials # replace this PLACEHOLDER dict with your own service account info service_account_info = { "type": "service_account", "project_id": "PROJECT_ID", "private_key_id": "KEY_ID", "private_key": "-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n", "client_email": "SERVICE_ACCOUNT_EMAIL", "client_id": "CLIENT_ID", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://accounts.google.com/o/oauth2/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL" } GcpCredentials( service_account_info=service_account_info ).save("CREDENTIALS-BLOCK-NAME") ``` This credential block can be used to create other `prefect_gcp` blocks. **`service_account_info` vs `service_account_file`** The advantage of using `service_account_info`, instead of `service_account_file`, is that it is accessible across containers. If `service_account_file` is used, the provided path *must be available* in the container executing the flow. ### BigQuery Read data from and write to Google BigQuery within your Prefect flows. Be sure to [install](#install-extras) `prefect-gcp` with the BigQuery extra. ```python theme={null} from prefect_gcp.bigquery import GcpCredentials, BigQueryWarehouse gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME") bigquery_block = BigQueryWarehouse( gcp_credentials = gcp_credentials, fetch_size = 1 # Optional: specify a default number of rows to fetch when calling fetch_many ) bigquery_block.save("BIGQUERY-BLOCK-NAME") ``` ### Secret Manager Manage secrets in Google Cloud Platform's Secret Manager. 
```python theme={null}
from prefect_gcp import GcpCredentials, GcpSecret

gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME")

gcp_secret = GcpSecret(
    secret_name="your-secret-name",
    secret_version="latest",
    gcp_credentials=gcp_credentials
)

gcp_secret.save("SECRET-BLOCK-NAME")
```

### Cloud Storage

Create a block to interact with a GCS bucket.

```python theme={null}
from prefect_gcp import GcpCredentials, GcsBucket

gcs_bucket = GcsBucket(
    bucket="BUCKET-NAME",
    gcp_credentials=GcpCredentials.load("CREDENTIALS-BLOCK-NAME")
)

gcs_bucket.save("GCS-BLOCK-NAME")
```

## Run flows on Google Cloud Run or Vertex AI

Run flows on [Google Cloud Run](https://cloud.google.com/run) or [Vertex AI](https://cloud.google.com/vertex-ai) to dynamically scale your infrastructure.

Prefect Cloud offers [Google Cloud Run push work pools](/v3/how-to-guides/deployment_infra/serverless). Push work pools submit runs directly to Google Cloud Run, instead of requiring a worker to actively poll for flow runs to execute.

See the [Google Cloud Run Worker Guide](/integrations/prefect-gcp/gcp-worker-guide) for a walkthrough of using Google Cloud Run in a hybrid work pool.
## Examples

### Interact with BigQuery

This code creates a new dataset in BigQuery, defines a table, inserts rows, and fetches data from the table:

```python theme={null}
from prefect import flow
from prefect_gcp.bigquery import GcpCredentials, BigQueryWarehouse


@flow
def bigquery_flow():
    all_rows = []
    gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME")

    client = gcp_credentials.get_bigquery_client()
    client.create_dataset("test_example", exists_ok=True)

    with BigQueryWarehouse(gcp_credentials=gcp_credentials) as warehouse:
        warehouse.execute(
            "CREATE TABLE IF NOT EXISTS test_example.customers (name STRING, address STRING);"
        )
        warehouse.execute_many(
            "INSERT INTO test_example.customers (name, address) VALUES (%(name)s, %(address)s);",
            seq_of_parameters=[
                {"name": "Marvin", "address": "Highway 42"},
                {"name": "Ford", "address": "Highway 42"},
                {"name": "Unknown", "address": "Highway 42"},
            ],
        )
        while True:
            # Repeated fetch* calls using the same operation will
            # skip re-executing and instead return the next set of results
            new_rows = warehouse.fetch_many("SELECT * FROM test_example.customers", size=2)
            if len(new_rows) == 0:
                break
            all_rows.extend(new_rows)

    return all_rows


if __name__ == "__main__":
    bigquery_flow()
```

### Use Prefect with Google Cloud Storage

Interact with Google Cloud Storage. The code below uses `prefect_gcp` to upload a file to a Google Cloud Storage bucket and download the same file under a different filename.
```python theme={null}
from pathlib import Path

from prefect import flow
from prefect_gcp import GcpCredentials, GcsBucket


@flow
def cloud_storage_flow():
    # create a dummy file to upload
    file_path = Path("test-example.txt")
    file_path.write_text("Hello, Prefect!")

    gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME")
    gcs_bucket = GcsBucket(
        bucket="BUCKET-NAME",
        gcp_credentials=gcp_credentials
    )

    gcs_bucket_path = gcs_bucket.upload_from_path(file_path)
    downloaded_file_path = gcs_bucket.download_object_to_path(
        gcs_bucket_path, "downloaded-test-example.txt"
    )
    return downloaded_file_path.read_text()


if __name__ == "__main__":
    cloud_storage_flow()
```

**Upload and download directories**

`GcsBucket` supports uploading and downloading entire directories.

### Save secrets with Google Secret Manager

Read and write secrets with Google Secret Manager. Be sure to [install](#install-extras) `prefect-gcp` with the Secret Manager extra.

The code below writes a secret to the Secret Manager, reads the secret data, and deletes the secret.

```python theme={null}
from prefect import flow
from prefect_gcp import GcpCredentials, GcpSecret


@flow
def secret_manager_flow():
    gcp_credentials = GcpCredentials.load("CREDENTIALS-BLOCK-NAME")
    gcp_secret = GcpSecret(secret_name="test-example", gcp_credentials=gcp_credentials)
    gcp_secret.write_secret(secret_data=b"Hello, Prefect!")
    secret_data = gcp_secret.read_secret()
    gcp_secret.delete_secret()
    return secret_data


if __name__ == "__main__":
    secret_manager_flow()
```

## Resources

For assistance using GCP, consult the [Google Cloud documentation](https://cloud.google.com/docs).

GCP can also authenticate without storing credentials in a block. See [Access third-party secrets](/v3/develop/secrets) for an example that uses AWS Secrets Manager and Snowflake.

Refer to the `prefect-gcp` [SDK documentation](/integrations/prefect-gcp/api-ref/prefect_gcp-credentials) to explore all of the capabilities of the `prefect-gcp` library.
# credentials

Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-credentials

# `prefect_github.credentials`

Credential classes used to perform authenticated interactions with GitHub.

## Classes

### `GitHubCredentials`

Block used to manage GitHub authentication.

**Attributes:**

* `token`: the token to authenticate into GitHub.

**Examples:**

Load stored GitHub credentials:

```python theme={null}
from prefect_github import GitHubCredentials

github_credentials_block = GitHubCredentials.load("BLOCK_NAME")
```

**Methods:**

#### `format_git_credentials`

```python theme={null}
format_git_credentials(self, url: str) -> str
```

Format and return the full git URL with GitHub credentials embedded. GitHub uses a plain token format without any prefix.

**Args:**

* `url`: Repository URL (e.g., `https://github.com/org/repo.git`)

**Returns:**

* Complete URL with credentials embedded

**Raises:**

* `ValueError`: If the token is not configured

#### `get_client`

```python theme={null}
get_client(self) -> HTTPEndpoint
```

Gets an authenticated GitHub GraphQL HTTPEndpoint client.

**Returns:**

* An authenticated GitHub GraphQL HTTPEndpoint client.

# exceptions

Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-exceptions

# `prefect_github.exceptions`

Custom errors for Prefect GitHub.

## Classes

### `InvalidRepositoryURLError`

# graphql

Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-graphql

# `prefect_github.graphql`

A module containing generic GraphQL tasks.

## Functions

### `aexecute_graphql`

```python theme={null}
aexecute_graphql(op: Union[Operation, str], github_credentials: GitHubCredentials, error_key: str = 'errors', **vars) -> Dict[str, Any]
```

Async version of `execute_graphql`. See `execute_graphql` for full documentation.
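These GraphQL tasks share an `error_key` argument naming the response key that signals a failed request. A hypothetical sketch of that convention in plain Python (illustrative only, not the library's actual implementation):

```python
from typing import Any, Dict


def check_for_errors(response: Dict[str, Any], error_key: str = "errors") -> Dict[str, Any]:
    # Raise if the configured error key is present and non-empty;
    # otherwise pass the response through unchanged.
    if response.get(error_key):
        raise RuntimeError(f"GraphQL request failed: {response[error_key]}")
    return response


check_for_errors({"data": {"repository": {"id": "R_1"}}})  # passes through
```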
### `execute_graphql`

```python theme={null}
execute_graphql(op: Union[Operation, str], github_credentials: GitHubCredentials, error_key: str = 'errors', **vars) -> Dict[str, Any]
```

Generic function for executing GraphQL operations.

**Args:**

* `op`: The operation, either as a valid GraphQL string or an `sgqlc.Operation`.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `error_key`: The key name to look for in the response that indicates an error has occurred with the request.

**Returns:**

* A dict of the returned fields.

**Examples:**

Query the last three issues from the Prefect repository using a string query:

```python theme={null}
from prefect import flow
from prefect_github import GitHubCredentials
from prefect_github.graphql import execute_graphql


@flow()
def example_execute_graphql_flow():
    op = '''
    query GitHubRepoIssues($owner: String!, $name: String!) {
        repository(owner: $owner, name: $name) {
            issues(last: 3) {
                nodes {
                    number
                    title
                }
            }
        }
    }
    '''
    token = "ghp_..."
    github_credentials = GitHubCredentials(token=token)
    params = dict(owner="PrefectHQ", name="Prefect")
    result = execute_graphql(op, github_credentials, **params)
    return result

example_execute_graphql_flow()
```

Query the first three issues from the Prefect repository using an `sgqlc.Operation`:

```python theme={null}
from prefect import flow
from sgqlc.operation import Operation

from prefect_github import GitHubCredentials
from prefect_github.schemas import graphql_schema
from prefect_github.graphql import execute_graphql


@flow()
def example_execute_graphql_flow():
    op = Operation(graphql_schema.Query)
    op_settings = op.repository(
        owner="PrefectHQ", name="Prefect"
    ).issues(
        first=3
    ).nodes()
    op_settings.__fields__("id", "title")
    token = "ghp_..."
    github_credentials = GitHubCredentials(token=token)
    result = execute_graphql(
        op,
        github_credentials,
    )
    return result

example_execute_graphql_flow()
```

# mutations

Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-mutations

# `prefect_github.mutations`

A module containing GitHub mutation tasks.

## Functions

### `add_comment_subject`

```python theme={null}
add_comment_subject(subject_id: str, body: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Adds a comment to an Issue or Pull Request.

**Args:**

* `subject_id`: The Node ID of the subject to modify.
* `body`: The contents of the comment.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json.

**Returns:**

* A dict of the returned fields.

### `create_pull_request`

```python theme={null}
create_pull_request(repository_id: str, base_ref_name: str, head_ref_name: str, title: str, github_credentials: GitHubCredentials, body: Optional[str] = None, maintainer_can_modify: Optional[bool] = None, draft: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Create a new pull request.

**Args:**

* `repository_id`: The Node ID of the repository.
* `base_ref_name`: The name of the branch you want your changes pulled into. This should be an existing branch on the current repository. You cannot update the base branch on a pull request to point to another repository.
* `head_ref_name`: The name of the branch where your changes are implemented. For cross-repository pull requests in the same network, namespace `head_ref_name` with a user like this: `username:branch`.
* `title`: The title of the pull request.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `body`: The contents of the pull request.
* `maintainer_can_modify`: Indicates whether maintainers can modify the pull request. * `draft`: Indicates whether this pull request should be a draft. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `close_pull_request` ```python theme={null} close_pull_request(pull_request_id: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Close a pull request. **Args:** * `pull_request_id`: ID of the pull request to be closed. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `create_issue` ```python theme={null} create_issue(repository_id: str, title: str, assignee_ids: Iterable[str], label_ids: Iterable[str], project_ids: Iterable[str], github_credentials: GitHubCredentials, body: Optional[str] = None, milestone_id: Optional[str] = None, issue_template: Optional[str] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Creates a new issue. **Args:** * `repository_id`: The Node ID of the repository. * `title`: The title for the issue. * `assignee_ids`: The Node ID for the user assignee for this issue. * `label_ids`: An array of Node IDs of labels for this issue. * `project_ids`: An array of Node IDs for projects associated with this issue. * `github_credentials`: Credentials to use for authentication with GitHub. * `body`: The body for the issue description. * `milestone_id`: The Node ID of the milestone for this issue. * `issue_template`: The name of an issue template in the repository, assigns labels and assignees from the template to the issue. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. 
**Returns:** * A dict of the returned fields. ### `close_issue` ```python theme={null} close_issue(issue_id: str, github_credentials: GitHubCredentials, state_reason: graphql_schema.IssueClosedStateReason = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Close an issue. **Args:** * `issue_id`: ID of the issue to be closed. * `github_credentials`: Credentials to use for authentication with GitHub. * `state_reason`: The reason the issue is to be closed. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `add_star_starrable` ```python theme={null} add_star_starrable(starrable_id: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Adds a star to a Starrable. **Args:** * `starrable_id`: The Starrable ID to star. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `remove_star_starrable` ```python theme={null} remove_star_starrable(starrable_id: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Removes a star from a Starrable. **Args:** * `starrable_id`: The Starrable ID to unstar. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `add_reaction_subject` ```python theme={null} add_reaction_subject(subject_id: str, content: graphql_schema.ReactionContent, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Adds a reaction to a subject. 
**Args:** * `subject_id`: The Node ID of the subject to modify. * `content`: The name of the emoji to react with. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `add_reaction` ```python theme={null} add_reaction(subject_id: str, content: graphql_schema.ReactionContent, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Adds a reaction to a subject. **Args:** * `subject_id`: The Node ID of the subject to modify. * `content`: The name of the emoji to react with. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `remove_reaction_subject` ```python theme={null} remove_reaction_subject(subject_id: str, content: graphql_schema.ReactionContent, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Removes a reaction from a subject. **Args:** * `subject_id`: The Node ID of the subject to modify. * `content`: The name of the emoji reaction to remove. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `remove_reaction` ```python theme={null} remove_reaction(subject_id: str, content: graphql_schema.ReactionContent, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Removes a reaction from a subject. **Args:** * `subject_id`: The Node ID of the subject to modify. * `content`: The name of the emoji reaction to remove. 
* `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `request_reviews` ```python theme={null} request_reviews(pull_request_id: str, user_ids: Iterable[str], team_ids: Iterable[str], github_credentials: GitHubCredentials, union: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Set review requests on a pull request. **Args:** * `pull_request_id`: The Node ID of the pull request to modify. * `user_ids`: The Node IDs of the user to request. * `team_ids`: The Node IDs of the team to request. * `github_credentials`: Credentials to use for authentication with GitHub. * `union`: Add users to the set rather than replace. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. ### `request_reviews_pull_request` ```python theme={null} request_reviews_pull_request(pull_request_id: str, user_ids: Iterable[str], team_ids: Iterable[str], github_credentials: GitHubCredentials, union: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Set review requests on a pull request. **Args:** * `pull_request_id`: The Node ID of the pull request to modify. * `user_ids`: The Node IDs of the user to request. * `team_ids`: The Node IDs of the team to request. * `github_credentials`: Credentials to use for authentication with GitHub. * `union`: Add users to the set rather than replace. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. 
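The `union` flag on the review-request tasks above controls whether requested reviewers are added to or replace the existing set. A minimal sketch of those semantics with plain Python sets (illustrative only, not the library's implementation):

```python
from typing import Set


def apply_review_request(current: Set[str], requested: Set[str], union: bool = False) -> Set[str]:
    # union=True adds to the existing reviewer set; the default replaces it.
    return current | requested if union else set(requested)


existing = {"alice", "bob"}
assert apply_review_request(existing, {"carol"}) == {"carol"}
assert apply_review_request(existing, {"carol"}, union=True) == {"alice", "bob", "carol"}
```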
### `add_pull_request_review` ```python theme={null} add_pull_request_review(pull_request_id: str, github_credentials: GitHubCredentials, commit_oid: Optional[datetime] = None, body: Optional[str] = None, event: graphql_schema.PullRequestReviewEvent = None, comments: Iterable[graphql_schema.DraftPullRequestReviewComment] = None, threads: Iterable[graphql_schema.DraftPullRequestReviewThread] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Adds a review to a Pull Request. **Args:** * `pull_request_id`: The Node ID of the pull request to modify. * `github_credentials`: Credentials to use for authentication with GitHub. * `commit_oid`: The commit OID the review pertains to. * `body`: The contents of the review body comment. * `event`: The event to perform on the pull request review. * `comments`: The review line comments. * `threads`: The review line comment threads. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/mutation/\*.json. **Returns:** * A dict of the returned fields. # organization Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-organization # `prefect_github.organization` This is a module containing: GitHub query\_organization\* tasks ## Functions ### `query_organization` ```python theme={null} query_organization(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The query root of GitHub's GraphQL interface. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_organization_team` ```python theme={null} query_organization_team(login: str, slug: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find an organization's team by its slug. **Args:** * `login`: The organization's login. * `slug`: The name or slug of the team to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_teams` ```python theme={null} query_organization_teams(login: str, user_logins: Iterable[str], github_credentials: GitHubCredentials, privacy: graphql_schema.TeamPrivacy = None, role: graphql_schema.TeamRole = None, query: Optional[str] = None, order_by: graphql_schema.TeamOrder = None, ldap_mapped: Optional[bool] = None, root_teams_only: bool = False, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of teams in this organization. **Args:** * `login`: The organization's login. * `user_logins`: User logins to filter by. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters teams according to privacy. * `role`: If non-null, filters teams according to whether the viewer is an admin or member on team. * `query`: If non-null, filters teams with query on team name and team slug. * `order_by`: Ordering options for teams returned from the connection. * `ldap_mapped`: If true, filters teams that are mapped to an LDAP Group (Enterprise only). * `root_teams_only`: If true, restrict to only root teams. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. 
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_project` ```python theme={null} query_organization_project(login: str, number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find project by number. **Args:** * `login`: The organization's login. * `number`: The project number to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_domains` ```python theme={null} query_organization_domains(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, is_verified: Optional[bool] = None, is_approved: Optional[bool] = None, order_by: graphql_schema.VerifiableDomainOrder = {'field': 'DOMAIN', 'direction': 'ASC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of domains owned by the organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `is_verified`: Filter by if the domain is verified. * `is_approved`: Filter by if the domain is approved. * `order_by`: Ordering options for verifiable domains returned. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_packages` ```python theme={null} query_organization_packages(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, names: Optional[Iterable[str]] = None, repository_id: Optional[str] = None, package_type: graphql_schema.PackageType = None, order_by: graphql_schema.PackageOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of packages under the owner. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `names`: Find packages by their names. * `repository_id`: Find packages in a repository by ID. * `package_type`: Filter registry package by type. * `order_by`: Ordering of the returned packages. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_projects` ```python theme={null} query_organization_projects(login: str, states: Iterable[graphql_schema.ProjectState], github_credentials: GitHubCredentials, order_by: graphql_schema.ProjectOrder = None, search: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of projects under the owner. 
**Args:** * `login`: The organization's login. * `states`: A list of states to filter the projects by. * `github_credentials`: Credentials to use for authentication with GitHub. * `order_by`: Ordering options for projects returned from the connection. * `search`: Query to search projects by, currently only searching by name. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsors` ```python theme={null} query_organization_sponsors(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, tier_id: Optional[str] = None, order_by: graphql_schema.SponsorOrder = {'field': 'RELEVANCE', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of sponsors for this user or organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `tier_id`: If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see. * `order_by`: Ordering options for sponsors returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. 
**Returns:** * A dict of the returned fields. ### `query_organization_audit_log` ```python theme={null} query_organization_audit_log(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, query: Optional[str] = None, order_by: graphql_schema.AuditLogOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Audit log entries of the organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `query`: The query string to filter audit entries. * `order_by`: Ordering options for the returned audit log entries. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_project_v2` ```python theme={null} query_organization_project_v2(login: str, number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find a project by number. **Args:** * `login`: The organization's login. * `number`: The project number. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
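The list queries in this module share the `after`/`before`/`first`/`last` arguments, which follow GraphQL connection-style cursor pagination. A self-contained sketch of walking such a connection page by page, using fake in-memory data and integer cursors (the real API returns opaque cursors in `pageInfo`):

```python
from typing import Any, Dict, List, Optional

FAKE_NODES = [{"login": f"team-{i}"} for i in range(5)]  # stand-in for API data


def fetch_page(first: int, after: Optional[str] = None) -> Dict[str, Any]:
    # Simulate one page of a GraphQL connection.
    start = int(after) if after is not None else 0
    nodes = FAKE_NODES[start:start + first]
    end = start + len(nodes)
    return {
        "nodes": nodes,
        "pageInfo": {"endCursor": str(end), "hasNextPage": end < len(FAKE_NODES)},
    }


def fetch_all(first: int = 2) -> List[Dict[str, Any]]:
    # Follow endCursor until hasNextPage is False.
    nodes: List[Dict[str, Any]] = []
    cursor: Optional[str] = None
    while True:
        page = fetch_page(first, after=cursor)
        nodes.extend(page["nodes"])
        if not page["pageInfo"]["hasNextPage"]:
            return nodes
        cursor = page["pageInfo"]["endCursor"]
```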
### `query_organization_projects_v2`

```python theme={null}
query_organization_projects_v2(login: str, github_credentials: GitHubCredentials, query: Optional[str] = None, order_by: graphql_schema.ProjectV2Order = {'field': 'NUMBER', 'direction': 'DESC'}, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of projects under the owner.

**Args:**

* `login`: The organization's login.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `query`: A project to search for under the owner.
* `order_by`: How to order the returned projects.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_organization_repository`

```python theme={null}
query_organization_repository(login: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Find Repository.

**Args:**

* `login`: The organization's login.
* `name`: Name of Repository to find.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
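`follow_renames` determines whether a repository referenced by an old name still resolves. A hypothetical sketch of that lookup behavior (GitHub's actual rename handling is server-side; the names and data here are made up):

```python
from typing import Any, Dict

RENAMES = {"old-repo": "new-repo"}   # hypothetical old-name -> new-name map
REPOS = {"new-repo": {"id": "R_1"}}  # hypothetical repository store


def find_repository(name: str, follow_renames: bool = True) -> Dict[str, Any]:
    # Resolve directly, or via the rename table when follow_renames is enabled.
    if name in REPOS:
        return REPOS[name]
    if follow_renames and name in RENAMES:
        return REPOS[RENAMES[name]]
    raise LookupError(f"Could not resolve repository {name!r}")


find_repository("old-repo")  # resolves via the rename table
```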
### `query_organization_sponsoring` ```python theme={null} query_organization_sponsoring(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorOrder = {'field': 'RELEVANCE', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of users and organizations this entity is sponsoring. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for the users and organizations returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_project_next` ```python theme={null} query_organization_project_next(login: str, number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find a project by project (beta) number. **Args:** * `login`: The organization's login. * `number`: The project (beta) number. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
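Several queries accept an `order_by` argument shaped like `{'field': ..., 'direction': ...}`. A small illustration of applying that shape to a local list (the real ordering happens server-side; the data here is made up):

```python
from typing import Any, Dict, List


def apply_order_by(rows: List[Dict[str, Any]], order_by: Dict[str, str]) -> List[Dict[str, Any]]:
    # Sort by the named field, descending when direction is 'DESC'.
    field = order_by["field"].lower()
    return sorted(rows, key=lambda row: row[field], reverse=order_by["direction"] == "DESC")


sponsors = [{"relevance": 1}, {"relevance": 3}, {"relevance": 2}]
apply_order_by(sponsors, {"field": "RELEVANCE", "direction": "DESC"})
# [{'relevance': 3}, {'relevance': 2}, {'relevance': 1}]
```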
### `query_organization_pinned_items`

```python theme={null}
query_organization_pinned_items(login: str, types: Iterable[graphql_schema.PinnableItemType], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of repositories and gists this profile owner has pinned to their profile.

**Args:**

* `login`: The organization's login.
* `types`: Filter the types of pinned items that are returned.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_organization_projects_next`

```python theme={null}
query_organization_projects_next(login: str, github_credentials: GitHubCredentials, query: Optional[str] = None, sort_by: graphql_schema.ProjectNextOrderField = 'TITLE', after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of projects (beta) under the owner.

**Args:**

* `login`: The organization's login.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `query`: A project (beta) to search for under the owner.
* `sort_by`: How to order the returned projects (beta).
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_repositories` ```python theme={null} query_organization_repositories(login: str, github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None, owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, is_fork: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories that the user owns. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `affiliations`: Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. * `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. 
* `last`: Returns the last *n* elements from the list. * `is_fork`: If non-null, filters repositories according to whether they are forks of another repository. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_item_showcase` ```python theme={null} query_organization_item_showcase(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_pinnable_items` ```python theme={null} query_organization_pinnable_items(login: str, types: Iterable[graphql_schema.PinnableItemType], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories and gists this profile owner can pin to their profile. **Args:** * `login`: The organization's login. * `types`: Filter the types of pinnable items that are returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_recent_projects` ```python theme={null} query_organization_recent_projects(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Recent projects that this user has modified in the context of the owner. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_member_statuses` ```python theme={null} query_organization_member_statuses(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.UserStatusOrder = {'field': 'UPDATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Get the status messages members of this entity have set that are either public or visible only to the organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. 
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for user statuses returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_pending_members` ```python theme={null} query_organization_pending_members(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users who have been invited to join this organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsors_listing` ```python theme={null} query_organization_sponsors_listing(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The GitHub Sponsors listing for this user or organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
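Every connection-style task above accepts the same `after`/`first` cursor arguments. As an illustrative sketch (not part of prefect-github), a generic helper can walk all pages of such a connection; the `nodes` and `page_info` keys below are an assumption based on the snake\_case rendering of GitHub's GraphQL connection fields, and `fetch` stands in for any of these tasks with `login` and `github_credentials` already bound, for example via `functools.partial`.

```python theme={null}
from typing import Any, Callable, Dict, List, Optional


def collect_all(
    fetch: Callable[..., Dict[str, Any]], page_size: int = 100
) -> List[Dict[str, Any]]:
    """Follow `first`/`after` cursors until the connection reports no next page.

    `fetch` is assumed to return a dict shaped like a GraphQL connection,
    with snake_case `nodes` and `page_info` keys (an assumption, not a
    documented guarantee of these tasks).
    """
    nodes: List[Dict[str, Any]] = []
    after: Optional[str] = None
    while True:
        page = fetch(first=page_size, after=after)
        nodes.extend(page.get("nodes", []))
        info = page.get("page_info", {})
        if not info.get("has_next_page"):
            return nodes
        # Resume the next request from the cursor where this page ended.
        after = info["end_cursor"]
```

For example, `functools.partial(query_organization_pending_members, login="my-org", github_credentials=creds)` could serve as `fetch` (the `"my-org"` login and `creds` block are illustrative).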
### `query_organization_members_with_role` ```python theme={null} query_organization_members_with_role(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users who are members of this organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_enterprise_owners` ```python theme={null} query_organization_enterprise_owners(login: str, github_credentials: GitHubCredentials, query: Optional[str] = None, organization_role: graphql_schema.RoleInOrganization = None, order_by: graphql_schema.OrgEnterpriseOwnerOrder = {'field': 'LOGIN', 'direction': 'ASC'}, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of owners of the organization's enterprise account. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `query`: The search string to look for. * `organization_role`: The organization role to filter by. * `order_by`: Ordering options for enterprise owners returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. 
* `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsors_activities` ```python theme={null} query_organization_sponsors_activities(login: str, actions: Iterable[graphql_schema.SponsorsActivityAction], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, period: graphql_schema.SponsorsActivityPeriod = 'MONTH', order_by: graphql_schema.SponsorsActivityOrder = {'field': 'TIMESTAMP', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Events involving this sponsorable, such as new sponsorships. **Args:** * `login`: The organization's login. * `actions`: Filter activities to only the specified actions. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `period`: Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred. * `order_by`: Ordering options for activity returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
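Note that `order_by` arguments such as the `{'field': 'TIMESTAMP', 'direction': 'DESC'}` default above are plain dicts with `field` and `direction` keys. A tiny hypothetical helper (not part of the library) keeps them consistent:

```python theme={null}
from typing import Dict


def order(field: str, direction: str = "DESC") -> Dict[str, str]:
    """Build the plain-dict `order_by` argument these tasks accept,
    e.g. order("TIMESTAMP", "ASC") -> {'field': 'TIMESTAMP', 'direction': 'ASC'}.
    """
    if direction not in ("ASC", "DESC"):
        raise ValueError("direction must be 'ASC' or 'DESC'")
    return {"field": field, "direction": direction}
```

Passing `order_by=order("TIMESTAMP", "ASC")` to `query_organization_sponsors_activities`, for instance, would request the oldest activity first.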
### `query_organization_interaction_ability` ```python theme={null} query_organization_interaction_ability(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The interaction ability settings for this organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_ip_allow_list_entries` ```python theme={null} query_organization_ip_allow_list_entries(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.IpAllowListEntryOrder = {'field': 'ALLOW_LIST_VALUE', 'direction': 'ASC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The IP addresses that are allowed to access resources owned by the organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for IP allow list entries returned. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_organization_repository_migrations` ```python theme={null} query_organization_repository_migrations(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, state: graphql_schema.MigrationState = None, repository_name: Optional[str] = None, order_by: graphql_schema.RepositoryMigrationOrder = {'field': 'CREATED_AT', 'direction': 'ASC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of all repository migrations for this organization. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `state`: Filter repository migrations by state. * `repository_name`: Filter repository migrations by repository name. * `order_by`: Ordering options for repository migrations returned. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_saml_identity_provider` ```python theme={null} query_organization_saml_identity_provider(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The Organization's SAML identity provider. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_organization_repository_discussions` ```python theme={null} query_organization_repository_discussions(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.DiscussionOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, repository_id: Optional[str] = None, answered: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Discussions this user has started. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for discussions returned from the connection. * `repository_id`: Filter discussions to only those in a specific repository. * `answered`: Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsorships_as_sponsor` ```python theme={null} query_organization_sponsorships_as_sponsor(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorshipOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` This object's sponsorships as the sponsor. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. 
* `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsorship_newsletters` ```python theme={null} query_organization_sponsorship_newsletters(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorshipNewsletterOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of sponsorship updates sent from this sponsorable to sponsors. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for sponsorship updates returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_organization_sponsorships_as_maintainer` ```python theme={null} query_organization_sponsorships_as_maintainer(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, include_private: bool = False, order_by: graphql_schema.SponsorshipOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` This object's sponsorships as the maintainer. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `include_private`: Whether or not to include private sponsorships in the result set. * `order_by`: Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_repository_discussion_comments` ```python theme={null} query_organization_repository_discussion_comments(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, repository_id: Optional[str] = None, only_answers: bool = False, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Discussion comments this user has authored. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. 
* `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `repository_id`: Filter discussion comments to only those in a specific repository. * `only_answers`: Filter discussion comments to only those that were marked as the answer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsorship_for_viewer_as_sponsor` ```python theme={null} query_organization_sponsorship_for_viewer_as_sponsor(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_organization_sponsorship_for_viewer_as_sponsorable` ```python theme={null} query_organization_sponsorship_for_viewer_as_sponsorable(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active. **Args:** * `login`: The organization's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
# repository Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-repository # `prefect_github.repository` This module contains the GitHub query\_repository\* tasks and the GitHub storage block. ## Functions ### `query_repository` ```python theme={null} query_repository(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The query root of GitHub's GraphQL interface. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_ref` ```python theme={null} query_repository_ref(owner: str, name: str, qualified_name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Fetch a given ref from the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `qualified_name`: The ref to retrieve. Fully qualified matches are checked in order (`refs/heads/master`) before falling back to short name matches (`master`). * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
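As `query_repository_ref` notes, fully qualified names (`refs/heads/master`) are matched before short names (`master`). When a lookup should be unambiguous, the qualified name can be built up front; this helper and its `kind` parameter are illustrative assumptions, not part of prefect-github:

```python theme={null}
def qualify_ref(name: str, kind: str = "heads") -> str:
    """Turn a short ref name into the fully qualified `qualified_name`
    form, e.g. 'main' -> 'refs/heads/main'. Already-qualified names
    pass through unchanged. `kind` is typically 'heads' or 'tags'.
    """
    if name.startswith("refs/"):
        return name
    return f"refs/{kind}/{name}"
```

A call like `query_repository_ref(owner="octocat", name="hello-world", qualified_name=qualify_ref("main"), github_credentials=creds)` would then always resolve the branch rather than a similarly named tag (the owner, repository, and `creds` block here are illustrative).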
### `query_repository_refs` ```python theme={null} query_repository_refs(owner: str, name: str, ref_prefix: str, github_credentials: GitHubCredentials, follow_renames: bool = True, query: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, direction: graphql_schema.OrderDirection = None, order_by: graphql_schema.RefOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Fetch a list of refs from the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `ref_prefix`: A ref name prefix like `refs/heads/`, `refs/tags/`, etc. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `query`: Filters refs with query on name. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `direction`: DEPRECATED: use orderBy. The ordering direction. * `order_by`: Ordering options for refs returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_owner` ```python theme={null} query_repository_owner(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The User owner of the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. 
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_forks` ```python theme={null} query_repository_forks(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None, owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of direct forked repositories. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `affiliations`: Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. * `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `after`: Returns the elements in the list that come after the specified cursor. 
* `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_issue` ```python theme={null} query_repository_issue(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Returns a single issue from the current repository by number. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `number`: The number for the issue to be returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_label` ```python theme={null} query_repository_label(owner: str, name: str, label_name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Returns a single label by name. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `label_name`: Label name. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_repository_issues` ```python theme={null} query_repository_issues(owner: str, name: str, labels: Iterable[str], states: Iterable[graphql_schema.IssueState], github_credentials: GitHubCredentials, follow_renames: bool = True, order_by: graphql_schema.IssueOrder = None, filter_by: graphql_schema.IssueFilters = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of issues that have been opened in the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `labels`: A list of label names to filter the issues by. * `states`: A list of states to filter the issues by. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `order_by`: Ordering options for issues returned from the connection. * `filter_by`: Filtering options for issues returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
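`return_fields` entries are snake\_case, while GitHub's GraphQL schema spells field names in camelCase. Assuming the mapping is a mechanical case conversion (an assumption, not a documented guarantee of the library), field names taken from a raw GraphQL query can be converted like so:

```python theme={null}
import re


def to_snake(field: str) -> str:
    """Convert a GraphQL camelCase field name to the snake_case form
    used in `return_fields`, e.g. 'createdAt' -> 'created_at'.
    """
    # Insert an underscore before each interior capital, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", field).lower()
```

For instance, `return_fields=[to_snake(f) for f in ("title", "createdAt")]` would request `title` and `created_at` from `query_repository_issues`.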
### `query_repository_labels`

```python theme={null}
query_repository_labels(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, order_by: graphql_schema.LabelOrder = {'field': 'CREATED_AT', 'direction': 'ASC'}, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, query: Optional[str] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of labels associated with the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `order_by`: Ordering options for labels returned from the connection.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `query`: If provided, searches labels by name and description.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_object`

```python theme={null}
query_repository_object(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, oid: Optional[datetime] = None, expression: Optional[str] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A Git object in the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `oid`: The Git object ID.
* `expression`: A Git revision expression suitable for rev-parse.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_project`

```python theme={null}
query_repository_project(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Find project by number.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `number`: The project number to find.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_release`

```python theme={null}
query_repository_release(owner: str, name: str, tag_name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Look up a single release given various criteria.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `tag_name`: The name of the Tag the Release was created from.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
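Every task here accepts `return_fields` to subset which fields come back, named in snake\_case. The real tasks fold this subset into the GraphQL query itself so the server returns only what was requested; the sketch below shows the client-side equivalent of that filtering, with hypothetical field names, to make the contract concrete.

```python theme={null}
from typing import Any, Dict, Iterable, Optional


def subset_fields(
    result: Dict[str, Any],
    return_fields: Optional[Iterable[str]] = None,
) -> Dict[str, Any]:
    """Keep only the requested snake_case keys; None means keep everything."""
    if return_fields is None:
        return result
    wanted = set(return_fields)
    return {key: value for key, value in result.items() if key in wanted}


# Hypothetical release payload, trimmed to two fields:
release = {"tag_name": "v1.0.0", "name": "First release", "is_draft": False}
print(subset_fields(release, ["tag_name", "is_draft"]))
# {'tag_name': 'v1.0.0', 'is_draft': False}
```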
### `query_repository_projects`

```python theme={null}
query_repository_projects(owner: str, name: str, states: Iterable[graphql_schema.ProjectState], github_credentials: GitHubCredentials, follow_renames: bool = True, order_by: graphql_schema.ProjectOrder = None, search: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of projects under the owner.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `states`: A list of states to filter the projects by.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `order_by`: Ordering options for projects returned from the connection.
* `search`: Query to search projects by, currently only searching by name.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
### `query_repository_packages`

```python theme={null}
query_repository_packages(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, names: Optional[Iterable[str]] = None, repository_id: Optional[str] = None, package_type: graphql_schema.PackageType = None, order_by: graphql_schema.PackageOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of packages under the owner.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `names`: Find packages by their names.
* `repository_id`: Find packages in a repository by ID.
* `package_type`: Filter registry package by type.
* `order_by`: Ordering of the returned packages.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_releases`

```python theme={null}
query_repository_releases(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.ReleaseOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

List of releases which are dependent on this repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `order_by`: Order for connection.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_watchers`

```python theme={null}
query_repository_watchers(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of users watching the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
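Several of these tasks take an `order_by` mapping such as `{'field': 'CREATED_AT', 'direction': 'ASC'}` (the defaults shown for `query_repository_labels` and `query_repository_packages` above). A small sketch of a validated constructor for that shape; the helper name is hypothetical and not part of the module, and the set of valid `field` values depends on the specific `graphql_schema` order type:

```python theme={null}
from typing import Dict

VALID_DIRECTIONS = {"ASC", "DESC"}


def make_order_by(field: str, direction: str = "ASC") -> Dict[str, str]:
    """Build the {'field': ..., 'direction': ...} mapping these tasks expect."""
    direction = direction.upper()
    if direction not in VALID_DIRECTIONS:
        raise ValueError(f"direction must be one of {sorted(VALID_DIRECTIONS)}")
    return {"field": field, "direction": direction}


print(make_order_by("CREATED_AT", "desc"))
# {'field': 'CREATED_AT', 'direction': 'DESC'}
```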
### `query_repository_languages`

```python theme={null}
query_repository_languages(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.LanguageOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list containing a breakdown of the language composition of the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `order_by`: Order for connection.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_milestone`

```python theme={null}
query_repository_milestone(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Returns a single milestone from the current repository by number.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `number`: The number for the milestone to be returned.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_project_v2`

```python theme={null}
query_repository_project_v2(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Finds and returns the Project according to the provided Project number.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `number`: The Project number.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_stargazers`

```python theme={null}
query_repository_stargazers(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.StarOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of users who have starred this starrable.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `order_by`: Order for connection.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_deploy_keys`

```python theme={null}
query_repository_deploy_keys(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of deploy keys that are on this repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_discussion`

```python theme={null}
query_repository_discussion(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Returns a single discussion from the current repository by number.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `number`: The number for the discussion to be returned.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_milestones`

```python theme={null}
query_repository_milestones(owner: str, name: str, states: Iterable[graphql_schema.MilestoneState], github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.MilestoneOrder = None, query: Optional[str] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of milestones associated with the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `states`: Filter by the state of the milestones.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `order_by`: Ordering options for milestones.
* `query`: Filters milestones with a query on the title.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
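Because most of these parameters default to `None`, a generated GraphQL client typically drops unset arguments before building the request variables, so the server applies its own defaults rather than receiving explicit nulls. A minimal sketch of that pattern; the helper name is hypothetical and not part of this module:

```python theme={null}
from typing import Any, Dict


def build_variables(**kwargs: Any) -> Dict[str, Any]:
    """Drop None-valued arguments so they are omitted from the GraphQL request."""
    return {key: value for key, value in kwargs.items() if value is not None}


variables = build_variables(
    owner="PrefectHQ", name="prefect", first=10, after=None, query=None
)
print(variables)
# {'owner': 'PrefectHQ', 'name': 'prefect', 'first': 10}
```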
### `query_repository_projects_v2`

```python theme={null}
query_repository_projects_v2(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, query: Optional[str] = None, order_by: graphql_schema.ProjectV2Order = {'field': 'NUMBER', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

List of projects linked to this repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `query`: A project to search for linked to the repo.
* `order_by`: How to order the returned projects.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_submodules`

```python theme={null}
query_repository_submodules(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Returns a list of all submodules in this repository parsed from the .gitmodules file as of the default branch's HEAD commit.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_license_info`

```python theme={null}
query_repository_license_info(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

The license associated with the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_deployments`

```python theme={null}
query_repository_deployments(owner: str, name: str, environments: Iterable[str], github_credentials: GitHubCredentials, follow_renames: bool = True, order_by: graphql_schema.DeploymentOrder = {'field': 'CREATED_AT', 'direction': 'ASC'}, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Deployments associated with the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `environments`: Environments to list deployments for.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `order_by`: Ordering options for deployments returned from the connection.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_discussions`

```python theme={null}
query_repository_discussions(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, category_id: Optional[str] = None, order_by: graphql_schema.DiscussionOrder = {'field': 'UPDATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of discussions that have been opened in the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `category_id`: Only include discussions that belong to the category with this ID.
* `order_by`: Ordering options for discussions returned from the connection.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_environment`

```python theme={null}
query_repository_environment(owner: str, name: str, environment_name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Returns a single active environment from the current repository by name.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `environment_name`: The name of the environment to be returned.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_project_next`

```python theme={null}
query_repository_project_next(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Finds and returns the Project (beta) according to the provided Project (beta) number.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `number`: The ProjectNext number.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.
**Returns:**

* A dict of the returned fields.

### `query_repository_pull_request`

```python theme={null}
query_repository_pull_request(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Returns a single pull request from the current repository by number.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `number`: The number for the pull request to be returned.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_contact_links`

```python theme={null}
query_repository_contact_links(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Returns a list of contact links associated with the repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
### `query_repository_environments`

```python theme={null}
query_repository_environments(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of environments that are in this repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_repository_funding_links`

```python theme={null}
query_repository_funding_links(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

The funding links for this repository.

**Args:**

* `owner`: The login field of a user or organization.
* `name`: The name of the repository.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
### `query_repository_pinned_issues` ```python theme={null} query_repository_pinned_issues(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of pinned issues for this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_projects_next` ```python theme={null} query_repository_projects_next(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, query: Optional[str] = None, sort_by: graphql_schema.ProjectNextOrderField = 'TITLE', return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of projects (beta) linked to this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. 
* `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `query`: A project (beta) to search for linked to the repo. * `sort_by`: How to order the returned project (beta) objects. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_pull_requests` ```python theme={null} query_repository_pull_requests(owner: str, name: str, states: Iterable[graphql_schema.PullRequestState], labels: Iterable[str], github_credentials: GitHubCredentials, follow_renames: bool = True, head_ref_name: Optional[str] = None, base_ref_name: Optional[str] = None, order_by: graphql_schema.IssueOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of pull requests that have been opened in the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `states`: A list of states to filter the pull requests by. * `labels`: A list of label names to filter the pull requests by. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `head_ref_name`: The head ref name to filter the pull requests by. * `base_ref_name`: The base ref name to filter the pull requests by. * `order_by`: Ordering options for pull requests returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. 
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_code_of_conduct` ```python theme={null} query_repository_code_of_conduct(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Returns the code of conduct for this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_collaborators` ```python theme={null} query_repository_collaborators(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, affiliation: graphql_schema.CollaboratorAffiliation = None, query: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of collaborators associated with the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `affiliation`: Collaborators affiliation level with a repository. * `query`: Filters users with query on user name and login. 
* `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_latest_release` ```python theme={null} query_repository_latest_release(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Get the latest release for the repository if one exists. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_recent_projects` ```python theme={null} query_repository_recent_projects(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Recent projects that have been modified in the context of the owner. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. 
* `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_commit_comments` ```python theme={null} query_repository_commit_comments(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of commit comments associated with the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_issue_templates` ```python theme={null} query_repository_issue_templates(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Returns a list of issue templates associated with the repository. **Args:** * `owner`: The login field of a user or organization. 
* `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_assignable_users` ```python theme={null} query_repository_assignable_users(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, query: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users that can be assigned to issues in this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `query`: Filters users with query on user name and login. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_primary_language` ```python theme={null} query_repository_primary_language(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The primary language of the repository's code. 
**Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_default_branch_ref` ```python theme={null} query_repository_default_branch_ref(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The Ref associated with the repository's default branch. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_mentionable_users` ```python theme={null} query_repository_mentionable_users(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, query: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of Users that can be mentioned in the context of the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. 
If disabled, a repository referenced by its old name will return an error. * `query`: Filters users with query on user name and login. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_repository_topics` ```python theme={null} query_repository_repository_topics(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of applied repository-topic associations for this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
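The connection-style tasks above all share GitHub's Relay-style pagination arguments (`after`, `before`, `first`, `last`). A minimal sketch of how a caller might walk every page by threading `after` cursors; `fetch_page` and its canned pages are hypothetical stand-ins for a real query task, not part of prefect-github:

```python
from typing import Any, Dict, List, Optional

# Canned pages standing in for GraphQL connection responses (illustrative only).
PAGES = {
    None: {"nodes": ["topic-a", "topic-b"], "end_cursor": "c2", "has_next_page": True},
    "c2": {"nodes": ["topic-c"], "end_cursor": "c3", "has_next_page": False},
}

def fetch_page(after: Optional[str] = None, first: int = 2) -> Dict[str, Any]:
    """Pretend connection: returns up to `first` nodes after cursor `after`."""
    return PAGES[after]

def fetch_all() -> List[str]:
    """Collect all nodes by following `after` cursors until the last page."""
    nodes: List[str] = []
    cursor: Optional[str] = None
    while True:
        page = fetch_page(after=cursor)
        nodes.extend(page["nodes"])
        if not page["has_next_page"]:
            return nodes
        cursor = page["end_cursor"]

print(fetch_all())  # ['topic-a', 'topic-b', 'topic-c']
```

Real GraphQL responses nest the cursor and flag under `pageInfo` (`endCursor`, `hasNextPage`); the flat shape here is simplified for illustration.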
### `query_repository_pinned_discussions` ```python theme={null} query_repository_pinned_discussions(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of discussions that have been pinned in this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_discussion_category` ```python theme={null} query_repository_discussion_category(owner: str, name: str, slug: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A discussion category by slug. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `slug`: The slug of the discussion category to be returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. 
**Returns:** * A dict of the returned fields. ### `query_repository_interaction_ability` ```python theme={null} query_repository_interaction_ability(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The interaction ability settings for this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_issue_or_pull_request` ```python theme={null} query_repository_issue_or_pull_request(owner: str, name: str, number: int, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Returns a single issue-like object from the current repository by number. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `number`: The number for the issue to be returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
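Every task above takes `return_fields` as snake\_case names, while GitHub's GraphQL schema itself uses camelCase field names. A small converter of the kind that mapping implies (an illustrative sketch, not a prefect-github helper):

```python
import re

def to_snake(field: str) -> str:
    """Convert a camelCase GraphQL field name to the snake_case form
    used by `return_fields` (assumed convention, for illustration)."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", field).lower()

print(to_snake("createdAt"))    # created_at
print(to_snake("baseRefName"))  # base_ref_name
```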
### `query_repository_vulnerability_alerts` ```python theme={null} query_repository_vulnerability_alerts(owner: str, name: str, states: Iterable[graphql_schema.RepositoryVulnerabilityAlertState], dependency_scopes: Iterable[graphql_schema.RepositoryVulnerabilityAlertDependencyScope], github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of vulnerability alerts that are on this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `states`: Filter by the state of the alert. * `dependency_scopes`: Filter by the scope of the alert's dependency. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_discussion_categories` ```python theme={null} query_repository_discussion_categories(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, filter_by_assignable: bool = False, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of discussion categories that are available in the repository. 
**Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `filter_by_assignable`: Filter by categories that are assignable by the viewer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_pull_request_templates` ```python theme={null} query_repository_pull_request_templates(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Returns a list of pull request templates associated with the repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
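Enum-valued filters such as `states` and `dependency_scopes` on `query_repository_vulnerability_alerts` narrow results before they are returned. Conceptually the filtering behaves like this sketch, where the alert records are made up for the example:

```python
from typing import Dict, Iterable, List

# Fabricated alert records for illustration; real alerts come from GitHub.
ALERTS = [
    {"id": 1, "state": "OPEN", "scope": "RUNTIME"},
    {"id": 2, "state": "FIXED", "scope": "RUNTIME"},
    {"id": 3, "state": "OPEN", "scope": "DEVELOPMENT"},
]

def filter_alerts(states: Iterable[str], scopes: Iterable[str]) -> List[Dict]:
    """Keep only alerts whose state and dependency scope are both selected."""
    wanted_states, wanted_scopes = set(states), set(scopes)
    return [
        a for a in ALERTS
        if a["state"] in wanted_states and a["scope"] in wanted_scopes
    ]

print(filter_alerts(["OPEN"], ["RUNTIME"]))
# [{'id': 1, 'state': 'OPEN', 'scope': 'RUNTIME'}]
```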
### `query_repository_branch_protection_rules` ```python theme={null} query_repository_branch_protection_rules(owner: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of branch protection rules for this repository. **Args:** * `owner`: The login field of a user or organization. * `name`: The name of the repository. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ## Classes ### `GitHubRepository` Interact with files stored on GitHub repositories. **Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones a GitHub project specified in `from_path` to the provided `local_path`; defaults to cloning the repository reference configured on the Block to the present working directory. Async version. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. 
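The `from_path`/`local_path` behavior of `aget_directory` can be sketched as a plain directory copy; here a temporary directory stands in for the cloned repository (the real method clones the repository first, and the file names below are made up):

```python
import shutil
import tempfile
from pathlib import Path

def copy_subdirectory(repo_root: Path, from_path: str, local_path: Path) -> Path:
    """Copy `from_path` (a subdirectory of the cloned repo) into `local_path`."""
    shutil.copytree(repo_root / from_path, local_path, dirs_exist_ok=True)
    return local_path

# Demo with a fake "clone" containing one subdirectory.
repo = Path(tempfile.mkdtemp())
(repo / "flows").mkdir()
(repo / "flows" / "etl.py").write_text("print('flow')")

dest = copy_subdirectory(repo, "flows", Path(tempfile.mkdtemp()) / "project")
print(sorted(p.name for p in dest.iterdir()))  # ['etl.py']
```

When `from_path` is omitted, the block copies the whole repository reference instead of a single subdirectory.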
#### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones a GitHub project specified in `from_path` to the provided `local_path`; defaults to cloning the repository reference configured on the Block to the present working directory. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. # repository_owner Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-repository_owner # `prefect_github.repository_owner` This is a module containing: GitHub query\_repository\_owner\* tasks ## Functions ### `query_repository_owner` ```python theme={null} query_repository_owner(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The query root of GitHub's GraphQL interface. **Args:** * `login`: The username to lookup the owner by. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_owner_repository` ```python theme={null} query_repository_owner_repository(login: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find Repository. **Args:** * `login`: The username to lookup the owner by. * `name`: Name of Repository to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_repository_owner_repositories` ```python theme={null} query_repository_owner_repositories(login: str, github_credentials: GitHubCredentials, privacy: Optional[graphql_schema.RepositoryPrivacy] = None, order_by: Optional[graphql_schema.RepositoryOrder] = None, affiliations: Optional[Iterable[graphql_schema.RepositoryAffiliation]] = None, owner_affiliations: Optional[Iterable[graphql_schema.RepositoryAffiliation]] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, is_fork: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories that the user owns. **Args:** * `login`: The username to lookup the owner by. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `affiliations`: Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. * `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. 
* `is_fork`: If non-null, filters repositories according to whether they are forks of another repository. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. # __init__ Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-schemas-__init__ # `prefect_github.schemas` *This module is empty or contains only private/internal implementations.* # graphql_schema Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-schemas-graphql_schema # `prefect_github.schemas.graphql_schema` ## Classes ### `ActorType` See source code for more info. ### `AuditLogOrderField` See source code for more info. ### `Base64String` See source code for more info. ### `CheckAnnotationLevel` See source code for more info. ### `CheckConclusionState` See source code for more info. ### `CheckRunState` See source code for more info. ### `CheckRunType` See source code for more info. ### `CheckStatusState` See source code for more info. ### `CollaboratorAffiliation` See source code for more info. ### `CommentAuthorAssociation` See source code for more info. ### `CommentCannotUpdateReason` See source code for more info. ### `CommitContributionOrderField` See source code for more info. ### `ContributionLevel` See source code for more info. ### `DefaultRepositoryPermissionField` See source code for more info. ### `DependencyGraphEcosystem` See source code for more info. ### `DeploymentOrderField` See source code for more info. ### `DeploymentProtectionRuleType` See source code for more info. ### `DeploymentReviewState` See source code for more info. ### `DeploymentState` See source code for more info. ### `DeploymentStatusState` See source code for more info. ### `DiffSide` See source code for more info. ### `DiscussionOrderField` See source code for more info. ### `DiscussionPollOptionOrderField` See source code for more info. 
### `DismissReason` See source code for more info. ### `EnterpriseAdministratorInvitationOrderField` See source code for more info. ### `EnterpriseAdministratorRole` See source code for more info. ### `EnterpriseAllowPrivateRepositoryForkingPolicyValue` See source code for more info. ### `EnterpriseDefaultRepositoryPermissionSettingValue` See source code for more info. ### `EnterpriseEnabledDisabledSettingValue` See source code for more info. ### `EnterpriseEnabledSettingValue` See source code for more info. ### `EnterpriseMemberOrderField` See source code for more info. ### `EnterpriseMembersCanCreateRepositoriesSettingValue` See source code for more info. ### `EnterpriseMembersCanMakePurchasesSettingValue` See source code for more info. ### `EnterpriseServerInstallationOrderField` See source code for more info. ### `EnterpriseServerUserAccountEmailOrderField` See source code for more info. ### `EnterpriseServerUserAccountOrderField` See source code for more info. ### `EnterpriseServerUserAccountsUploadOrderField` See source code for more info. ### `EnterpriseServerUserAccountsUploadSyncState` See source code for more info. ### `EnterpriseUserAccountMembershipRole` See source code for more info. ### `EnterpriseUserDeployment` See source code for more info. ### `FileViewedState` See source code for more info. ### `FundingPlatform` See source code for more info. ### `GistOrderField` See source code for more info. ### `GistPrivacy` See source code for more info. ### `GitObjectID` See source code for more info. ### `GitSSHRemote` See source code for more info. ### `GitSignatureState` See source code for more info. ### `GitTimestamp` See source code for more info. ### `HTML` See source code for more info. ### `IdentityProviderConfigurationState` See source code for more info. ### `IpAllowListEnabledSettingValue` See source code for more info. ### `IpAllowListEntryOrderField` See source code for more info. 
### `IpAllowListForInstalledAppsEnabledSettingValue` See source code for more info. ### `IssueClosedStateReason` See source code for more info. ### `IssueCommentOrderField` See source code for more info. ### `IssueOrderField` See source code for more info. ### `IssueState` See source code for more info. ### `IssueStateReason` See source code for more info. ### `IssueTimelineItemsItemType` See source code for more info. ### `LabelOrderField` See source code for more info. ### `LanguageOrderField` See source code for more info. ### `LockReason` See source code for more info. ### `MergeCommitMessage` See source code for more info. ### `MergeCommitTitle` See source code for more info. ### `MergeableState` See source code for more info. ### `MigrationSourceType` See source code for more info. ### `MigrationState` See source code for more info. ### `MilestoneOrderField` See source code for more info. ### `MilestoneState` See source code for more info. ### `NotificationRestrictionSettingValue` See source code for more info. ### `OIDCProviderType` See source code for more info. ### `OauthApplicationCreateAuditEntryState` See source code for more info. ### `OperationType` See source code for more info. ### `OrderDirection` See source code for more info. ### `OrgAddMemberAuditEntryPermission` See source code for more info. ### `OrgCreateAuditEntryBillingPlan` See source code for more info. ### `OrgEnterpriseOwnerOrderField` See source code for more info. ### `OrgRemoveBillingManagerAuditEntryReason` See source code for more info. ### `OrgRemoveMemberAuditEntryMembershipType` See source code for more info. ### `OrgRemoveMemberAuditEntryReason` See source code for more info. ### `OrgRemoveOutsideCollaboratorAuditEntryMembershipType` See source code for more info. ### `OrgRemoveOutsideCollaboratorAuditEntryReason` See source code for more info. ### `OrgUpdateDefaultRepositoryPermissionAuditEntryPermission` See source code for more info. 
### `OrgUpdateMemberAuditEntryPermission` See source code for more info. ### `OrgUpdateMemberRepositoryCreationPermissionAuditEntryVisibility` See source code for more info. ### `OrganizationInvitationRole` See source code for more info. ### `OrganizationInvitationType` See source code for more info. ### `OrganizationMemberRole` See source code for more info. ### `OrganizationMembersCanCreateRepositoriesSettingValue` See source code for more info. ### `OrganizationOrderField` See source code for more info. ### `PackageFileOrderField` See source code for more info. ### `PackageOrderField` See source code for more info. ### `PackageType` See source code for more info. ### `PackageVersionOrderField` See source code for more info. ### `PatchStatus` See source code for more info. ### `PinnableItemType` See source code for more info. ### `PinnedDiscussionGradient` See source code for more info. ### `PinnedDiscussionPattern` See source code for more info. ### `PreciseDateTime` See source code for more info. ### `ProjectCardArchivedState` See source code for more info. ### `ProjectCardState` See source code for more info. ### `ProjectColumnPurpose` See source code for more info. ### `ProjectItemType` See source code for more info. ### `ProjectNextFieldType` See source code for more info. ### `ProjectNextOrderField` See source code for more info. ### `ProjectOrderField` See source code for more info. ### `ProjectState` See source code for more info. ### `ProjectTemplate` See source code for more info. ### `ProjectV2FieldOrderField` See source code for more info. ### `ProjectV2FieldType` See source code for more info. ### `ProjectV2ItemFieldValueOrderField` See source code for more info. ### `ProjectV2ItemOrderField` See source code for more info. ### `ProjectV2ItemType` See source code for more info. ### `ProjectV2OrderField` See source code for more info. ### `ProjectV2ViewLayout` See source code for more info. ### `ProjectV2ViewOrderField` See source code for more info. 
### `ProjectViewLayout` See source code for more info. ### `PullRequestMergeMethod` See source code for more info. ### `PullRequestOrderField` See source code for more info. ### `PullRequestReviewCommentState` See source code for more info. ### `PullRequestReviewDecision` See source code for more info. ### `PullRequestReviewEvent` See source code for more info. ### `PullRequestReviewState` See source code for more info. ### `PullRequestState` See source code for more info. ### `PullRequestTimelineItemsItemType` See source code for more info. ### `PullRequestUpdateState` See source code for more info. ### `ReactionContent` See source code for more info. ### `ReactionOrderField` See source code for more info. ### `RefOrderField` See source code for more info. ### `ReleaseOrderField` See source code for more info. ### `RepoAccessAuditEntryVisibility` See source code for more info. ### `RepoAddMemberAuditEntryVisibility` See source code for more info. ### `RepoArchivedAuditEntryVisibility` See source code for more info. ### `RepoChangeMergeSettingAuditEntryMergeType` See source code for more info. ### `RepoCreateAuditEntryVisibility` See source code for more info. ### `RepoDestroyAuditEntryVisibility` See source code for more info. ### `RepoRemoveMemberAuditEntryVisibility` See source code for more info. ### `ReportedContentClassifiers` See source code for more info. ### `RepositoryAffiliation` See source code for more info. ### `RepositoryContributionType` See source code for more info. ### `RepositoryInteractionLimit` See source code for more info. ### `RepositoryInteractionLimitExpiry` See source code for more info. ### `RepositoryInteractionLimitOrigin` See source code for more info. ### `RepositoryInvitationOrderField` See source code for more info. ### `RepositoryLockReason` See source code for more info. ### `RepositoryMigrationOrderDirection` See source code for more info. ### `RepositoryMigrationOrderField` See source code for more info. 
### `RepositoryOrderField`
See source code for more info.
### `RepositoryPermission`
See source code for more info.
### `RepositoryPrivacy`
See source code for more info.
### `RepositoryVisibility`
See source code for more info.
### `RepositoryVulnerabilityAlertDependencyScope`
See source code for more info.
### `RepositoryVulnerabilityAlertState`
See source code for more info.
### `RequestableCheckStatusState`
See source code for more info.
### `RoleInOrganization`
See source code for more info.
### `SamlDigestAlgorithm`
See source code for more info.
### `SamlSignatureAlgorithm`
See source code for more info.
### `SavedReplyOrderField`
See source code for more info.
### `SearchType`
See source code for more info.
### `SecurityAdvisoryClassification`
See source code for more info.
### `SecurityAdvisoryEcosystem`
See source code for more info.
### `SecurityAdvisoryIdentifierType`
See source code for more info.
### `SecurityAdvisoryOrderField`
See source code for more info.
### `SecurityAdvisorySeverity`
See source code for more info.
### `SecurityVulnerabilityOrderField`
See source code for more info.
### `SponsorOrderField`
See source code for more info.
### `SponsorableOrderField`
See source code for more info.
### `SponsorsActivityAction`
See source code for more info.
### `SponsorsActivityOrderField`
See source code for more info.
### `SponsorsActivityPeriod`
See source code for more info.
### `SponsorsGoalKind`
See source code for more info.
### `SponsorsTierOrderField`
See source code for more info.
### `SponsorshipNewsletterOrderField`
See source code for more info.
### `SponsorshipOrderField`
See source code for more info.
### `SponsorshipPrivacy`
See source code for more info.
### `SquashMergeCommitMessage`
See source code for more info.
### `SquashMergeCommitTitle`
See source code for more info.
### `StarOrderField`
See source code for more info.
### `StatusState`
See source code for more info.
### `SubscriptionState`
See source code for more info.
### `TeamDiscussionCommentOrderField`
See source code for more info.
### `TeamDiscussionOrderField`
See source code for more info.
### `TeamMemberOrderField`
See source code for more info.
### `TeamMemberRole`
See source code for more info.
### `TeamMembershipType`
See source code for more info.
### `TeamOrderField`
See source code for more info.
### `TeamPrivacy`
See source code for more info.
### `TeamRepositoryOrderField`
See source code for more info.
### `TeamRole`
See source code for more info.
### `TopicSuggestionDeclineReason`
See source code for more info.
### `TrackedIssueStates`
See source code for more info.
### `URI`
See source code for more info.
### `UserBlockDuration`
See source code for more info.
### `UserStatusOrderField`
See source code for more info.
### `VerifiableDomainOrderField`
See source code for more info.
### `WorkflowRunOrderField`
See source code for more info.
### `X509Certificate`
See source code for more info.
### `AbortQueuedMigrationsInput`
See source code for more info.
### `AcceptEnterpriseAdministratorInvitationInput`
See source code for more info.
### `AcceptTopicSuggestionInput`
See source code for more info.
### `AddAssigneesToAssignableInput`
See source code for more info.
### `AddCommentInput`
See source code for more info.
### `AddDiscussionCommentInput`
See source code for more info.
### `AddDiscussionPollVoteInput`
See source code for more info.
### `AddEnterpriseSupportEntitlementInput`
See source code for more info.
### `AddLabelsToLabelableInput`
See source code for more info.
### `AddProjectCardInput`
See source code for more info.
### `AddProjectColumnInput`
See source code for more info.
### `AddProjectDraftIssueInput`
See source code for more info.
### `AddProjectNextItemInput`
See source code for more info.
### `AddProjectV2DraftIssueInput`
See source code for more info.
### `AddProjectV2ItemByIdInput`
See source code for more info.
### `AddPullRequestReviewCommentInput`
See source code for more info.
### `AddPullRequestReviewInput`
See source code for more info.
### `AddPullRequestReviewThreadInput`
See source code for more info.
### `AddReactionInput`
See source code for more info.
### `AddStarInput`
See source code for more info.
### `AddUpvoteInput`
See source code for more info.
### `AddVerifiableDomainInput`
See source code for more info.
### `ApproveDeploymentsInput`
See source code for more info.
### `ApproveVerifiableDomainInput`
See source code for more info.
### `ArchiveRepositoryInput`
See source code for more info.
### `AuditLogOrder`
See source code for more info.
### `CancelEnterpriseAdminInvitationInput`
See source code for more info.
### `CancelSponsorshipInput`
See source code for more info.
### `ChangeUserStatusInput`
See source code for more info.
### `CheckAnnotationData`
See source code for more info.
### `CheckAnnotationRange`
See source code for more info.
### `CheckRunAction`
See source code for more info.
### `CheckRunFilter`
See source code for more info.
### `CheckRunOutput`
See source code for more info.
### `CheckRunOutputImage`
See source code for more info.
### `CheckSuiteAutoTriggerPreference`
See source code for more info.
### `CheckSuiteFilter`
See source code for more info.
### `ClearLabelsFromLabelableInput`
See source code for more info.
### `ClearProjectV2ItemFieldValueInput`
See source code for more info.
### `CloneProjectInput`
See source code for more info.
### `CloneTemplateRepositoryInput`
See source code for more info.
### `CloseIssueInput`
See source code for more info.
### `ClosePullRequestInput`
See source code for more info.
### `CommitAuthor`
See source code for more info.
### `CommitContributionOrder`
See source code for more info.
### `CommitMessage`
See source code for more info.
### `CommittableBranch`
See source code for more info.
### `ContributionOrder`
See source code for more info.
### `ConvertProjectCardNoteToIssueInput`
See source code for more info.
### `ConvertPullRequestToDraftInput`
See source code for more info.
### `CreateBranchProtectionRuleInput`
See source code for more info.
### `CreateCheckRunInput`
See source code for more info.
### `CreateCheckSuiteInput`
See source code for more info.
### `CreateCommitOnBranchInput`
See source code for more info.
### `CreateDiscussionInput`
See source code for more info.
### `CreateEnterpriseOrganizationInput`
See source code for more info.
### `CreateEnvironmentInput`
See source code for more info.
### `CreateIpAllowListEntryInput`
See source code for more info.
### `CreateIssueInput`
See source code for more info.
### `CreateMigrationSourceInput`
See source code for more info.
### `CreateProjectInput`
See source code for more info.
### `CreateProjectV2Input`
See source code for more info.
### `CreatePullRequestInput`
See source code for more info.
### `CreateRefInput`
See source code for more info.
### `CreateRepositoryInput`
See source code for more info.
### `CreateSponsorsTierInput`
See source code for more info.
### `CreateSponsorshipInput`
See source code for more info.
### `CreateTeamDiscussionCommentInput`
See source code for more info.
### `CreateTeamDiscussionInput`
See source code for more info.
### `DeclineTopicSuggestionInput`
See source code for more info.
### `DeleteBranchProtectionRuleInput`
See source code for more info.
### `DeleteDeploymentInput`
See source code for more info.
### `DeleteDiscussionCommentInput`
See source code for more info.
### `DeleteDiscussionInput`
See source code for more info.
### `DeleteEnvironmentInput`
See source code for more info.
### `DeleteIpAllowListEntryInput`
See source code for more info.
### `DeleteIssueCommentInput`
See source code for more info.
### `DeleteIssueInput`
See source code for more info.
### `DeleteProjectCardInput`
See source code for more info.
### `DeleteProjectColumnInput`
See source code for more info.
### `DeleteProjectInput`
See source code for more info.
### `DeleteProjectNextItemInput`
See source code for more info.
### `DeleteProjectV2ItemInput`
See source code for more info.
### `DeletePullRequestReviewCommentInput`
See source code for more info.
### `DeletePullRequestReviewInput`
See source code for more info.
### `DeleteRefInput`
See source code for more info.
### `DeleteTeamDiscussionCommentInput`
See source code for more info.
### `DeleteTeamDiscussionInput`
See source code for more info.
### `DeleteVerifiableDomainInput`
See source code for more info.
### `DeploymentOrder`
See source code for more info.
### `DisablePullRequestAutoMergeInput`
See source code for more info.
### `DiscussionOrder`
See source code for more info.
### `DiscussionPollOptionOrder`
See source code for more info.
### `DismissPullRequestReviewInput`
See source code for more info.
### `DismissRepositoryVulnerabilityAlertInput`
See source code for more info.
### `DraftPullRequestReviewComment`
See source code for more info.
### `DraftPullRequestReviewThread`
See source code for more info.
### `EnablePullRequestAutoMergeInput`
See source code for more info.
### `EnterpriseAdministratorInvitationOrder`
See source code for more info.
### `EnterpriseMemberOrder`
See source code for more info.
### `EnterpriseServerInstallationOrder`
See source code for more info.
### `EnterpriseServerUserAccountEmailOrder`
See source code for more info.
### `EnterpriseServerUserAccountOrder`
See source code for more info.
### `EnterpriseServerUserAccountsUploadOrder`
See source code for more info.
### `FileAddition`
See source code for more info.
### `FileChanges`
See source code for more info.
### `FileDeletion`
See source code for more info.
### `FollowOrganizationInput`
See source code for more info.
### `FollowUserInput`
See source code for more info.
### `GistOrder`
See source code for more info.
### `GrantEnterpriseOrganizationsMigratorRoleInput`
See source code for more info.
### `GrantMigratorRoleInput`
See source code for more info.
### `InviteEnterpriseAdminInput`
See source code for more info.
### `IpAllowListEntryOrder`
See source code for more info.
### `IssueCommentOrder`
See source code for more info.
### `IssueFilters`
See source code for more info.
### `IssueOrder`
See source code for more info.
### `LabelOrder`
See source code for more info.
### `LanguageOrder`
See source code for more info.
### `LinkRepositoryToProjectInput`
See source code for more info.
### `LockLockableInput`
See source code for more info.
### `MarkDiscussionCommentAsAnswerInput`
See source code for more info.
### `MarkFileAsViewedInput`
See source code for more info.
### `MarkPullRequestReadyForReviewInput`
See source code for more info.
### `MergeBranchInput`
See source code for more info.
### `MergePullRequestInput`
See source code for more info.
### `MilestoneOrder`
See source code for more info.
### `MinimizeCommentInput`
See source code for more info.
### `MoveProjectCardInput`
See source code for more info.
### `MoveProjectColumnInput`
See source code for more info.
### `OrgEnterpriseOwnerOrder`
See source code for more info.
### `OrganizationOrder`
See source code for more info.
### `PackageFileOrder`
See source code for more info.
### `PackageOrder`
See source code for more info.
### `PackageVersionOrder`
See source code for more info.
### `PinIssueInput`
See source code for more info.
### `ProjectOrder`
See source code for more info.
### `ProjectV2FieldOrder`
See source code for more info.
### `ProjectV2FieldValue`
See source code for more info.
### `ProjectV2ItemFieldValueOrder`
See source code for more info.
### `ProjectV2ItemOrder`
See source code for more info.
### `ProjectV2Order`
See source code for more info.
### `ProjectV2ViewOrder`
See source code for more info.
### `PullRequestOrder`
See source code for more info.
### `ReactionOrder`
See source code for more info.
### `RefOrder`
See source code for more info.
### `RegenerateEnterpriseIdentityProviderRecoveryCodesInput`
See source code for more info.
### `RegenerateVerifiableDomainTokenInput`
See source code for more info.
### `RejectDeploymentsInput`
See source code for more info.
### `ReleaseOrder`
See source code for more info.
### `RemoveAssigneesFromAssignableInput`
See source code for more info.
### `RemoveEnterpriseAdminInput`
See source code for more info.
### `RemoveEnterpriseIdentityProviderInput`
See source code for more info.
### `RemoveEnterpriseOrganizationInput`
See source code for more info.
### `RemoveEnterpriseSupportEntitlementInput`
See source code for more info.
### `RemoveLabelsFromLabelableInput`
See source code for more info.
### `RemoveOutsideCollaboratorInput`
See source code for more info.
### `RemoveReactionInput`
See source code for more info.
### `RemoveStarInput`
See source code for more info.
### `RemoveUpvoteInput`
See source code for more info.
### `ReopenIssueInput`
See source code for more info.
### `ReopenPullRequestInput`
See source code for more info.
### `RepositoryInvitationOrder`
See source code for more info.
### `RepositoryMigrationOrder`
See source code for more info.
### `RepositoryOrder`
See source code for more info.
### `RequestReviewsInput`
See source code for more info.
### `RequiredStatusCheckInput`
See source code for more info.
### `RerequestCheckSuiteInput`
See source code for more info.
### `ResolveReviewThreadInput`
See source code for more info.
### `RevokeEnterpriseOrganizationsMigratorRoleInput`
See source code for more info.
### `RevokeMigratorRoleInput`
See source code for more info.
### `SavedReplyOrder`
See source code for more info.
### `SecurityAdvisoryIdentifierFilter`
See source code for more info.
### `SecurityAdvisoryOrder`
See source code for more info.
### `SecurityVulnerabilityOrder`
See source code for more info.
### `SetEnterpriseIdentityProviderInput`
See source code for more info.
### `SetOrganizationInteractionLimitInput`
See source code for more info.
### `SetRepositoryInteractionLimitInput`
See source code for more info.
### `SetUserInteractionLimitInput`
See source code for more info.
### `SponsorOrder`
See source code for more info.
### `SponsorableOrder`
See source code for more info.
### `SponsorsActivityOrder`
See source code for more info.
### `SponsorsTierOrder`
See source code for more info.
### `SponsorshipNewsletterOrder`
See source code for more info.
### `SponsorshipOrder`
See source code for more info.
### `StarOrder`
See source code for more info.
### `StartRepositoryMigrationInput`
See source code for more info.
### `SubmitPullRequestReviewInput`
See source code for more info.
### `TeamDiscussionCommentOrder`
See source code for more info.
### `TeamDiscussionOrder`
See source code for more info.
### `TeamMemberOrder`
See source code for more info.
### `TeamOrder`
See source code for more info.
### `TeamRepositoryOrder`
See source code for more info.
### `TransferIssueInput`
See source code for more info.
### `UnarchiveRepositoryInput`
See source code for more info.
### `UnfollowOrganizationInput`
See source code for more info.
### `UnfollowUserInput`
See source code for more info.
### `UnlinkRepositoryFromProjectInput`
See source code for more info.
### `UnlockLockableInput`
See source code for more info.
### `UnmarkDiscussionCommentAsAnswerInput`
See source code for more info.
### `UnmarkFileAsViewedInput`
See source code for more info.
### `UnmarkIssueAsDuplicateInput`
See source code for more info.
### `UnminimizeCommentInput`
See source code for more info.
### `UnpinIssueInput`
See source code for more info.
### `UnresolveReviewThreadInput`
See source code for more info.
### `UpdateBranchProtectionRuleInput`
See source code for more info.
### `UpdateCheckRunInput`
See source code for more info.
### `UpdateCheckSuitePreferencesInput`
See source code for more info.
### `UpdateDiscussionCommentInput`
See source code for more info.
### `UpdateDiscussionInput`
See source code for more info.
### `UpdateEnterpriseAdministratorRoleInput`
See source code for more info.
### `UpdateEnterpriseAllowPrivateRepositoryForkingSettingInput`
See source code for more info.
### `UpdateEnterpriseDefaultRepositoryPermissionSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanChangeRepositoryVisibilitySettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanCreateRepositoriesSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanDeleteIssuesSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanDeleteRepositoriesSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanInviteCollaboratorsSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanMakePurchasesSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanUpdateProtectedBranchesSettingInput`
See source code for more info.
### `UpdateEnterpriseMembersCanViewDependencyInsightsSettingInput`
See source code for more info.
### `UpdateEnterpriseOrganizationProjectsSettingInput`
See source code for more info.
### `UpdateEnterpriseOwnerOrganizationRoleInput`
See source code for more info.
### `UpdateEnterpriseProfileInput`
See source code for more info.
### `UpdateEnterpriseRepositoryProjectsSettingInput`
See source code for more info.
### `UpdateEnterpriseTeamDiscussionsSettingInput`
See source code for more info.
### `UpdateEnterpriseTwoFactorAuthenticationRequiredSettingInput`
See source code for more info.
### `UpdateEnvironmentInput`
See source code for more info.
### `UpdateIpAllowListEnabledSettingInput`
See source code for more info.
### `UpdateIpAllowListEntryInput`
See source code for more info.
### `UpdateIpAllowListForInstalledAppsEnabledSettingInput`
See source code for more info.
### `UpdateIssueCommentInput`
See source code for more info.
### `UpdateIssueInput`
See source code for more info.
### `UpdateNotificationRestrictionSettingInput`
See source code for more info.
### `UpdateOrganizationAllowPrivateRepositoryForkingSettingInput`
See source code for more info.
### `UpdateOrganizationWebCommitSignoffSettingInput`
See source code for more info.
### `UpdateProjectCardInput`
See source code for more info.
### `UpdateProjectColumnInput`
See source code for more info.
### `UpdateProjectDraftIssueInput`
See source code for more info.
### `UpdateProjectInput`
See source code for more info.
### `UpdateProjectNextInput`
See source code for more info.
### `UpdateProjectNextItemFieldInput`
See source code for more info.
### `UpdateProjectV2DraftIssueInput`
See source code for more info.
### `UpdateProjectV2Input`
See source code for more info.
### `UpdateProjectV2ItemFieldValueInput`
See source code for more info.
### `UpdateProjectV2ItemPositionInput`
See source code for more info.
### `UpdatePullRequestBranchInput`
See source code for more info.
### `UpdatePullRequestInput`
See source code for more info.
### `UpdatePullRequestReviewCommentInput`
See source code for more info.
### `UpdatePullRequestReviewInput`
See source code for more info.
### `UpdateRefInput`
See source code for more info.
### `UpdateRepositoryInput`
See source code for more info.
### `UpdateRepositoryWebCommitSignoffSettingInput`
See source code for more info.
### `UpdateSponsorshipPreferencesInput`
See source code for more info.
### `UpdateSubscriptionInput`
See source code for more info.
### `UpdateTeamDiscussionCommentInput`
See source code for more info.
### `UpdateTeamDiscussionInput`
See source code for more info.
### `UpdateTeamsRepositoryInput`
See source code for more info.
### `UpdateTopicsInput`
See source code for more info.
### `UserStatusOrder`
See source code for more info.
### `VerifiableDomainOrder`
See source code for more info.
### `VerifyVerifiableDomainInput`
See source code for more info.
### `WorkflowRunOrder`
See source code for more info.
### `AbortQueuedMigrationsPayload`
See source code for more info.
### `AcceptEnterpriseAdministratorInvitationPayload`
See source code for more info.
### `AcceptTopicSuggestionPayload`
See source code for more info.
### `Actor`
See source code for more info.
### `ActorLocation`
See source code for more info.
### `AddAssigneesToAssignablePayload`
See source code for more info.
### `AddCommentPayload`
See source code for more info.
### `AddDiscussionCommentPayload`
See source code for more info.
### `AddDiscussionPollVotePayload`
See source code for more info.
### `AddEnterpriseSupportEntitlementPayload`
See source code for more info.
### `AddLabelsToLabelablePayload`
See source code for more info.
### `AddProjectCardPayload`
See source code for more info.
### `AddProjectColumnPayload`
See source code for more info.
### `AddProjectDraftIssuePayload`
See source code for more info.
### `AddProjectNextItemPayload`
See source code for more info.
### `AddProjectV2DraftIssuePayload`
See source code for more info.
### `AddProjectV2ItemByIdPayload`
See source code for more info.
### `AddPullRequestReviewCommentPayload`
See source code for more info.
### `AddPullRequestReviewPayload`
See source code for more info.
### `AddPullRequestReviewThreadPayload`
See source code for more info.
### `AddReactionPayload`
See source code for more info.
### `AddStarPayload`
See source code for more info.
### `AddUpvotePayload`
See source code for more info.
### `AddVerifiableDomainPayload`
See source code for more info.
### `ApproveDeploymentsPayload`
See source code for more info.
### `ApproveVerifiableDomainPayload`
See source code for more info.
### `ArchiveRepositoryPayload`
See source code for more info.
### `Assignable`
See source code for more info.
### `AuditEntry`
See source code for more info.
### `AutoMergeRequest`
See source code for more info.
### `Blame`
See source code for more info.
### `BlameRange`
See source code for more info.
### `BranchProtectionRuleConflict`
See source code for more info.
### `BranchProtectionRuleConflictConnection`
See source code for more info.
### `BranchProtectionRuleConflictEdge`
See source code for more info.
### `BranchProtectionRuleConnection`
See source code for more info.
### `BranchProtectionRuleEdge`
See source code for more info.
### `BypassForcePushAllowanceConnection`
See source code for more info.
### `BypassForcePushAllowanceEdge`
See source code for more info.
### `BypassPullRequestAllowanceConnection`
See source code for more info.
### `BypassPullRequestAllowanceEdge`
See source code for more info.
### `CVSS`
See source code for more info.
### `CWEConnection`
See source code for more info.
### `CWEEdge`
See source code for more info.
### `CancelEnterpriseAdminInvitationPayload`
See source code for more info.
### `CancelSponsorshipPayload`
See source code for more info.
### `ChangeUserStatusPayload`
See source code for more info.
### `CheckAnnotation`
See source code for more info.
### `CheckAnnotationConnection`
See source code for more info.
### `CheckAnnotationEdge`
See source code for more info.
### `CheckAnnotationPosition`
See source code for more info.
### `CheckAnnotationSpan`
See source code for more info.
### `CheckRunConnection`
See source code for more info.
### `CheckRunEdge`
See source code for more info.
### `CheckRunStateCount`
See source code for more info.
### `CheckStep`
See source code for more info.
### `CheckStepConnection`
See source code for more info.
### `CheckStepEdge`
See source code for more info.
### `CheckSuiteConnection`
See source code for more info.
### `CheckSuiteEdge`
See source code for more info.
### `ClearLabelsFromLabelablePayload`
See source code for more info.
### `ClearProjectV2ItemFieldValuePayload`
See source code for more info.
### `CloneProjectPayload`
See source code for more info.
### `CloneTemplateRepositoryPayload`
See source code for more info.
### `Closable`
See source code for more info.
### `CloseIssuePayload`
See source code for more info.
### `ClosePullRequestPayload`
See source code for more info.
### `Comment`
See source code for more info.
### `CommitCommentConnection`
See source code for more info.
### `CommitCommentEdge`
See source code for more info.
### `CommitConnection`
See source code for more info.
### `CommitContributionsByRepository`
See source code for more info.
### `CommitEdge`
See source code for more info.
### `CommitHistoryConnection`
See source code for more info.
### `Contribution`
See source code for more info.
### `ContributionCalendar`
See source code for more info.
### `ContributionCalendarDay`
See source code for more info.
### `ContributionCalendarMonth`
See source code for more info.
### `ContributionCalendarWeek`
See source code for more info.
### `ContributionsCollection`
See source code for more info.
### `ConvertProjectCardNoteToIssuePayload`
See source code for more info.
### `ConvertPullRequestToDraftPayload`
See source code for more info.
### `CreateBranchProtectionRulePayload`
See source code for more info.
### `CreateCheckRunPayload`
See source code for more info.
### `CreateCheckSuitePayload`
See source code for more info.
### `CreateCommitOnBranchPayload`
See source code for more info.
### `CreateDiscussionPayload`
See source code for more info.
### `CreateEnterpriseOrganizationPayload`
See source code for more info.
### `CreateEnvironmentPayload`
See source code for more info.
### `CreateIpAllowListEntryPayload`
See source code for more info.
### `CreateIssuePayload`
See source code for more info.
### `CreateMigrationSourcePayload`
See source code for more info.
### `CreateProjectPayload`
See source code for more info.
### `CreateProjectV2Payload`
See source code for more info.
### `CreatePullRequestPayload`
See source code for more info.
### `CreateRefPayload`
See source code for more info.
### `CreateRepositoryPayload`
See source code for more info.
### `CreateSponsorsTierPayload`
See source code for more info.
### `CreateSponsorshipPayload`
See source code for more info.
### `CreateTeamDiscussionCommentPayload`
See source code for more info.
### `CreateTeamDiscussionPayload`
See source code for more info.
### `CreatedCommitContributionConnection`
See source code for more info.
### `CreatedCommitContributionEdge`
See source code for more info.
### `CreatedIssueContributionConnection`
See source code for more info.
### `CreatedIssueContributionEdge`
See source code for more info.
### `CreatedPullRequestContributionConnection`
See source code for more info.
### `CreatedPullRequestContributionEdge`
See source code for more info.
### `CreatedPullRequestReviewContributionConnection`
See source code for more info.
### `CreatedPullRequestReviewContributionEdge`
See source code for more info.
### `CreatedRepositoryContributionConnection`
See source code for more info.
### `CreatedRepositoryContributionEdge`
See source code for more info.
### `DeclineTopicSuggestionPayload`
See source code for more info.
### `Deletable`
See source code for more info.
### `DeleteBranchProtectionRulePayload`
See source code for more info.
### `DeleteDeploymentPayload`
See source code for more info.
### `DeleteDiscussionCommentPayload`
See source code for more info.
### `DeleteDiscussionPayload`
See source code for more info.
### `DeleteEnvironmentPayload`
See source code for more info.
### `DeleteIpAllowListEntryPayload`
See source code for more info.
### `DeleteIssueCommentPayload`
See source code for more info.
### `DeleteIssuePayload`
See source code for more info.
### `DeleteProjectCardPayload`
See source code for more info.
### `DeleteProjectColumnPayload`
See source code for more info.
### `DeleteProjectNextItemPayload`
See source code for more info.
### `DeleteProjectPayload`
See source code for more info.
### `DeleteProjectV2ItemPayload`
See source code for more info.
### `DeletePullRequestReviewCommentPayload`
See source code for more info.
### `DeletePullRequestReviewPayload`
See source code for more info.
### `DeleteRefPayload`
See source code for more info.
### `DeleteTeamDiscussionCommentPayload`
See source code for more info.
### `DeleteTeamDiscussionPayload`
See source code for more info.
### `DeleteVerifiableDomainPayload`
See source code for more info.
### `DependabotUpdateError`
See source code for more info.
### `DeployKeyConnection`
See source code for more info.
### `DeployKeyEdge`
See source code for more info.
### `DeploymentConnection`
See source code for more info.
### `DeploymentEdge`
See source code for more info.
### `DeploymentProtectionRule`
See source code for more info.
### `DeploymentProtectionRuleConnection`
See source code for more info.
### `DeploymentProtectionRuleEdge`
See source code for more info.
### `DeploymentRequest`
See source code for more info.
### `DeploymentRequestConnection`
See source code for more info.
### `DeploymentRequestEdge`
See source code for more info.
### `DeploymentReviewConnection`
See source code for more info.
### `DeploymentReviewEdge`
See source code for more info.
### `DeploymentReviewerConnection`
See source code for more info.
### `DeploymentReviewerEdge`
See source code for more info.
### `DeploymentStatusConnection`
See source code for more info.
### `DeploymentStatusEdge`
See source code for more info.
### `DisablePullRequestAutoMergePayload`
See source code for more info.
### `DiscussionCategoryConnection`
See source code for more info.
### `DiscussionCategoryEdge`
See source code for more info.
### `DiscussionCommentConnection`
See source code for more info.
### `DiscussionCommentEdge`
See source code for more info.
### `DiscussionConnection`
See source code for more info.
### `DiscussionEdge`
See source code for more info.
### `DiscussionPollOptionConnection`
See source code for more info.
### `DiscussionPollOptionEdge`
See source code for more info.
### `DismissPullRequestReviewPayload`
See source code for more info.
### `DismissRepositoryVulnerabilityAlertPayload`
See source code for more info.
### `EnablePullRequestAutoMergePayload`
See source code for more info.
### `EnterpriseAdministratorConnection`
See source code for more info.
### `EnterpriseAdministratorEdge`
See source code for more info.
### `EnterpriseAdministratorInvitationConnection`
See source code for more info.
### `EnterpriseAdministratorInvitationEdge`
See source code for more info.
### `EnterpriseAuditEntryData`
See source code for more info.
### `EnterpriseBillingInfo`
See source code for more info.
### `EnterpriseMemberConnection`
See source code for more info.
### `EnterpriseMemberEdge`
See source code for more info.
### `EnterpriseOrganizationMembershipConnection`
See source code for more info.
### `EnterpriseOrganizationMembershipEdge`
See source code for more info.
### `EnterpriseOutsideCollaboratorConnection`
See source code for more info.
### `EnterpriseOutsideCollaboratorEdge`
See source code for more info.
### `EnterpriseOwnerInfo`
See source code for more info.
### `EnterprisePendingMemberInvitationConnection`
See source code for more info.
### `EnterprisePendingMemberInvitationEdge`
See source code for more info.
### `EnterpriseRepositoryInfoConnection`
See source code for more info.
### `EnterpriseRepositoryInfoEdge`
See source code for more info.
### `EnterpriseServerInstallationConnection`
See source code for more info.
### `EnterpriseServerInstallationEdge`
See source code for more info.
### `EnterpriseServerUserAccountConnection`
See source code for more info.
### `EnterpriseServerUserAccountEdge`
See source code for more info.
### `EnterpriseServerUserAccountEmailConnection`
See source code for more info.
### `EnterpriseServerUserAccountEmailEdge`
See source code for more info.
### `EnterpriseServerUserAccountsUploadConnection`
See source code for more info.
### `EnterpriseServerUserAccountsUploadEdge`
See source code for more info.
### `EnvironmentConnection`
See source code for more info.
### `EnvironmentEdge`
See source code for more info.
### `ExternalIdentityAttribute`
See source code for more info.
### `ExternalIdentityConnection`
See source code for more info.
### `ExternalIdentityEdge`
See source code for more info.
### `ExternalIdentitySamlAttributes`
See source code for more info.
### `ExternalIdentityScimAttributes`
See source code for more info.
### `FollowOrganizationPayload`
See source code for more info.
### `FollowUserPayload`
See source code for more info.
### `FollowerConnection`
See source code for more info.
### `FollowingConnection`
See source code for more info.
### `FundingLink`
See source code for more info.
### `GistCommentConnection`
See source code for more info.
### `GistCommentEdge`
See source code for more info.
### `GistConnection`
See source code for more info.
### `GistEdge`
See source code for more info.
### `GistFile`
See source code for more info.
### `GitActor`
See source code for more info.
### `GitActorConnection`
See source code for more info.
### `GitActorEdge`
See source code for more info.
### `GitHubMetadata`
See source code for more info.
### `GitObject`
See source code for more info.
### `GitSignature`
See source code for more info.
### `GrantEnterpriseOrganizationsMigratorRolePayload`
See source code for more info.
### `GrantMigratorRolePayload`
See source code for more info.
### `Hovercard`
See source code for more info.
### `HovercardContext`
See source code for more info.
### `InviteEnterpriseAdminPayload`
See source code for more info.
### `IpAllowListEntryConnection`
See source code for more info.
### `IpAllowListEntryEdge`
See source code for more info.
### `IssueCommentConnection`
See source code for more info.
### `IssueCommentEdge`
See source code for more info.
### `IssueConnection`
See source code for more info.
### `IssueContributionsByRepository` See source code for more info. ### `IssueEdge` See source code for more info. ### `IssueTemplate` See source code for more info. ### `IssueTimelineConnection` See source code for more info. ### `IssueTimelineItemEdge` See source code for more info. ### `IssueTimelineItemsConnection` See source code for more info. ### `IssueTimelineItemsEdge` See source code for more info. ### `LabelConnection` See source code for more info. ### `LabelEdge` See source code for more info. ### `Labelable` See source code for more info. ### `LanguageConnection` See source code for more info. ### `LanguageEdge` See source code for more info. ### `LicenseRule` See source code for more info. ### `LinkRepositoryToProjectPayload` See source code for more info. ### `LockLockablePayload` See source code for more info. ### `Lockable` See source code for more info. ### `MarkDiscussionCommentAsAnswerPayload` See source code for more info. ### `MarkFileAsViewedPayload` See source code for more info. ### `MarkPullRequestReadyForReviewPayload` See source code for more info. ### `MarketplaceListingConnection` See source code for more info. ### `MarketplaceListingEdge` See source code for more info. ### `MemberStatusable` See source code for more info. ### `MergeBranchPayload` See source code for more info. ### `MergePullRequestPayload` See source code for more info. ### `Migration` See source code for more info. ### `MilestoneConnection` See source code for more info. ### `MilestoneEdge` See source code for more info. ### `Minimizable` See source code for more info. ### `MinimizeCommentPayload` See source code for more info. ### `MoveProjectCardPayload` See source code for more info. ### `MoveProjectColumnPayload` See source code for more info. ### `Mutation` See source code for more info. ### `Node` See source code for more info. ### `OauthApplicationAuditEntryData` See source code for more info. 
### `OrganizationAuditEntryConnection` See source code for more info. ### `OrganizationAuditEntryData` See source code for more info. ### `OrganizationAuditEntryEdge` See source code for more info. ### `OrganizationConnection` See source code for more info. ### `OrganizationEdge` See source code for more info. ### `OrganizationEnterpriseOwnerConnection` See source code for more info. ### `OrganizationEnterpriseOwnerEdge` See source code for more info. ### `OrganizationInvitationConnection` See source code for more info. ### `OrganizationInvitationEdge` See source code for more info. ### `OrganizationMemberConnection` See source code for more info. ### `OrganizationMemberEdge` See source code for more info. ### `PackageConnection` See source code for more info. ### `PackageEdge` See source code for more info. ### `PackageFileConnection` See source code for more info. ### `PackageFileEdge` See source code for more info. ### `PackageOwner` See source code for more info. ### `PackageStatistics` See source code for more info. ### `PackageVersionConnection` See source code for more info. ### `PackageVersionEdge` See source code for more info. ### `PackageVersionStatistics` See source code for more info. ### `PageInfo` See source code for more info. ### `PermissionSource` See source code for more info. ### `PinIssuePayload` See source code for more info. ### `PinnableItemConnection` See source code for more info. ### `PinnableItemEdge` See source code for more info. ### `PinnedDiscussionConnection` See source code for more info. ### `PinnedDiscussionEdge` See source code for more info. ### `PinnedIssueConnection` See source code for more info. ### `PinnedIssueEdge` See source code for more info. ### `ProfileItemShowcase` See source code for more info. ### `ProfileOwner` See source code for more info. ### `ProjectCardConnection` See source code for more info. ### `ProjectCardEdge` See source code for more info. ### `ProjectColumnConnection` See source code for more info. 
### `ProjectColumnEdge` See source code for more info. ### `ProjectConnection` See source code for more info. ### `ProjectEdge` See source code for more info. ### `ProjectNextConnection` See source code for more info. ### `ProjectNextEdge` See source code for more info. ### `ProjectNextFieldCommon` See source code for more info. ### `ProjectNextFieldConnection` See source code for more info. ### `ProjectNextFieldEdge` See source code for more info. ### `ProjectNextItemConnection` See source code for more info. ### `ProjectNextItemEdge` See source code for more info. ### `ProjectNextItemFieldValueConnection` See source code for more info. ### `ProjectNextItemFieldValueEdge` See source code for more info. ### `ProjectNextOwner` See source code for more info. ### `ProjectOwner` See source code for more info. ### `ProjectProgress` See source code for more info. ### `ProjectV2Connection` See source code for more info. ### `ProjectV2Edge` See source code for more info. ### `ProjectV2FieldCommon` See source code for more info. ### `ProjectV2FieldConfigurationConnection` See source code for more info. ### `ProjectV2FieldConfigurationEdge` See source code for more info. ### `ProjectV2FieldConnection` See source code for more info. ### `ProjectV2FieldEdge` See source code for more info. ### `ProjectV2ItemConnection` See source code for more info. ### `ProjectV2ItemEdge` See source code for more info. ### `ProjectV2ItemFieldLabelValue` See source code for more info. ### `ProjectV2ItemFieldMilestoneValue` See source code for more info. ### `ProjectV2ItemFieldPullRequestValue` See source code for more info. ### `ProjectV2ItemFieldRepositoryValue` See source code for more info. ### `ProjectV2ItemFieldReviewerValue` See source code for more info. ### `ProjectV2ItemFieldUserValue` See source code for more info. ### `ProjectV2ItemFieldValueCommon` See source code for more info. ### `ProjectV2ItemFieldValueConnection` See source code for more info. 
### `ProjectV2ItemFieldValueEdge` See source code for more info. ### `ProjectV2IterationFieldConfiguration` See source code for more info. ### `ProjectV2IterationFieldIteration` See source code for more info. ### `ProjectV2Owner` See source code for more info. ### `ProjectV2Recent` See source code for more info. ### `ProjectV2SingleSelectFieldOption` See source code for more info. ### `ProjectV2SortBy` See source code for more info. ### `ProjectV2SortByConnection` See source code for more info. ### `ProjectV2SortByEdge` See source code for more info. ### `ProjectV2ViewConnection` See source code for more info. ### `ProjectV2ViewEdge` See source code for more info. ### `ProjectViewConnection` See source code for more info. ### `ProjectViewEdge` See source code for more info. ### `PublicKeyConnection` See source code for more info. ### `PublicKeyEdge` See source code for more info. ### `PullRequestChangedFile` See source code for more info. ### `PullRequestChangedFileConnection` See source code for more info. ### `PullRequestChangedFileEdge` See source code for more info. ### `PullRequestCommitConnection` See source code for more info. ### `PullRequestCommitEdge` See source code for more info. ### `PullRequestConnection` See source code for more info. ### `PullRequestContributionsByRepository` See source code for more info. ### `PullRequestEdge` See source code for more info. ### `PullRequestReviewCommentConnection` See source code for more info. ### `PullRequestReviewCommentEdge` See source code for more info. ### `PullRequestReviewConnection` See source code for more info. ### `PullRequestReviewContributionsByRepository` See source code for more info. ### `PullRequestReviewEdge` See source code for more info. ### `PullRequestReviewThreadConnection` See source code for more info. ### `PullRequestReviewThreadEdge` See source code for more info. ### `PullRequestRevisionMarker` See source code for more info. ### `PullRequestTemplate` See source code for more info. 
### `PullRequestTimelineConnection` See source code for more info. ### `PullRequestTimelineItemEdge` See source code for more info. ### `PullRequestTimelineItemsConnection` See source code for more info. ### `PullRequestTimelineItemsEdge` See source code for more info. ### `PushAllowanceConnection` See source code for more info. ### `PushAllowanceEdge` See source code for more info. ### `Query` See source code for more info. ### `RateLimit` See source code for more info. ### `Reactable` See source code for more info. ### `ReactingUserConnection` See source code for more info. ### `ReactingUserEdge` See source code for more info. ### `ReactionConnection` See source code for more info. ### `ReactionEdge` See source code for more info. ### `ReactionGroup` See source code for more info. ### `ReactorConnection` See source code for more info. ### `ReactorEdge` See source code for more info. ### `RefConnection` See source code for more info. ### `RefEdge` See source code for more info. ### `RefUpdateRule` See source code for more info. ### `RegenerateEnterpriseIdentityProviderRecoveryCodesPayload` See source code for more info. ### `RegenerateVerifiableDomainTokenPayload` See source code for more info. ### `RejectDeploymentsPayload` See source code for more info. ### `ReleaseAssetConnection` See source code for more info. ### `ReleaseAssetEdge` See source code for more info. ### `ReleaseConnection` See source code for more info. ### `ReleaseEdge` See source code for more info. ### `RemoveAssigneesFromAssignablePayload` See source code for more info. ### `RemoveEnterpriseAdminPayload` See source code for more info. ### `RemoveEnterpriseIdentityProviderPayload` See source code for more info. ### `RemoveEnterpriseOrganizationPayload` See source code for more info. ### `RemoveEnterpriseSupportEntitlementPayload` See source code for more info. ### `RemoveLabelsFromLabelablePayload` See source code for more info. 
### `RemoveOutsideCollaboratorPayload` See source code for more info. ### `RemoveReactionPayload` See source code for more info. ### `RemoveStarPayload` See source code for more info. ### `RemoveUpvotePayload` See source code for more info. ### `ReopenIssuePayload` See source code for more info. ### `ReopenPullRequestPayload` See source code for more info. ### `RepositoryAuditEntryData` See source code for more info. ### `RepositoryCodeowners` See source code for more info. ### `RepositoryCodeownersError` See source code for more info. ### `RepositoryCollaboratorConnection` See source code for more info. ### `RepositoryCollaboratorEdge` See source code for more info. ### `RepositoryConnection` See source code for more info. ### `RepositoryContactLink` See source code for more info. ### `RepositoryDiscussionAuthor` See source code for more info. ### `RepositoryDiscussionCommentAuthor` See source code for more info. ### `RepositoryEdge` See source code for more info. ### `RepositoryInfo` See source code for more info. ### `RepositoryInteractionAbility` See source code for more info. ### `RepositoryInvitationConnection` See source code for more info. ### `RepositoryInvitationEdge` See source code for more info. ### `RepositoryMigrationConnection` See source code for more info. ### `RepositoryMigrationEdge` See source code for more info. ### `RepositoryNode` See source code for more info. ### `RepositoryOwner` See source code for more info. ### `RepositoryTopicConnection` See source code for more info. ### `RepositoryTopicEdge` See source code for more info. ### `RepositoryVulnerabilityAlertConnection` See source code for more info. ### `RepositoryVulnerabilityAlertEdge` See source code for more info. ### `RequestReviewsPayload` See source code for more info. ### `RequestedReviewerConnection` See source code for more info. ### `RequestedReviewerEdge` See source code for more info. ### `RequirableByPullRequest` See source code for more info. 
### `RequiredStatusCheckDescription` See source code for more info. ### `RerequestCheckSuitePayload` See source code for more info. ### `ResolveReviewThreadPayload` See source code for more info. ### `ReviewDismissalAllowanceConnection` See source code for more info. ### `ReviewDismissalAllowanceEdge` See source code for more info. ### `ReviewRequestConnection` See source code for more info. ### `ReviewRequestEdge` See source code for more info. ### `RevokeEnterpriseOrganizationsMigratorRolePayload` See source code for more info. ### `RevokeMigratorRolePayload` See source code for more info. ### `SavedReplyConnection` See source code for more info. ### `SavedReplyEdge` See source code for more info. ### `SearchResultItemConnection` See source code for more info. ### `SearchResultItemEdge` See source code for more info. ### `SecurityAdvisoryConnection` See source code for more info. ### `SecurityAdvisoryEdge` See source code for more info. ### `SecurityAdvisoryIdentifier` See source code for more info. ### `SecurityAdvisoryPackage` See source code for more info. ### `SecurityAdvisoryPackageVersion` See source code for more info. ### `SecurityAdvisoryReference` See source code for more info. ### `SecurityVulnerability` See source code for more info. ### `SecurityVulnerabilityConnection` See source code for more info. ### `SecurityVulnerabilityEdge` See source code for more info. ### `SetEnterpriseIdentityProviderPayload` See source code for more info. ### `SetOrganizationInteractionLimitPayload` See source code for more info. ### `SetRepositoryInteractionLimitPayload` See source code for more info. ### `SetUserInteractionLimitPayload` See source code for more info. ### `SortBy` See source code for more info. ### `SponsorConnection` See source code for more info. ### `SponsorEdge` See source code for more info. ### `Sponsorable` See source code for more info. ### `SponsorableItemConnection` See source code for more info. 
### `SponsorableItemEdge` See source code for more info. ### `SponsorsActivityConnection` See source code for more info. ### `SponsorsActivityEdge` See source code for more info. ### `SponsorsGoal` See source code for more info. ### `SponsorsTierAdminInfo` See source code for more info. ### `SponsorsTierConnection` See source code for more info. ### `SponsorsTierEdge` See source code for more info. ### `SponsorshipConnection` See source code for more info. ### `SponsorshipEdge` See source code for more info. ### `SponsorshipNewsletterConnection` See source code for more info. ### `SponsorshipNewsletterEdge` See source code for more info. ### `StargazerConnection` See source code for more info. ### `StargazerEdge` See source code for more info. ### `Starrable` See source code for more info. ### `StarredRepositoryConnection` See source code for more info. ### `StarredRepositoryEdge` See source code for more info. ### `StartRepositoryMigrationPayload` See source code for more info. ### `StatusCheckRollupContextConnection` See source code for more info. ### `StatusCheckRollupContextEdge` See source code for more info. ### `StatusContextStateCount` See source code for more info. ### `SubmitPullRequestReviewPayload` See source code for more info. ### `Submodule` See source code for more info. ### `SubmoduleConnection` See source code for more info. ### `SubmoduleEdge` See source code for more info. ### `Subscribable` See source code for more info. ### `SuggestedReviewer` See source code for more info. ### `TeamAuditEntryData` See source code for more info. ### `TeamConnection` See source code for more info. ### `TeamDiscussionCommentConnection` See source code for more info. ### `TeamDiscussionCommentEdge` See source code for more info. ### `TeamDiscussionConnection` See source code for more info. ### `TeamDiscussionEdge` See source code for more info. ### `TeamEdge` See source code for more info. ### `TeamMemberConnection` See source code for more info. 
### `TeamMemberEdge` See source code for more info. ### `TeamRepositoryConnection` See source code for more info. ### `TeamRepositoryEdge` See source code for more info. ### `TextMatch` See source code for more info. ### `TextMatchHighlight` See source code for more info. ### `TopicAuditEntryData` See source code for more info. ### `TransferIssuePayload` See source code for more info. ### `TreeEntry` See source code for more info. ### `UnarchiveRepositoryPayload` See source code for more info. ### `UnfollowOrganizationPayload` See source code for more info. ### `UnfollowUserPayload` See source code for more info. ### `UniformResourceLocatable` See source code for more info. ### `UnlinkRepositoryFromProjectPayload` See source code for more info. ### `UnlockLockablePayload` See source code for more info. ### `UnmarkDiscussionCommentAsAnswerPayload` See source code for more info. ### `UnmarkFileAsViewedPayload` See source code for more info. ### `UnmarkIssueAsDuplicatePayload` See source code for more info. ### `UnminimizeCommentPayload` See source code for more info. ### `UnpinIssuePayload` See source code for more info. ### `UnresolveReviewThreadPayload` See source code for more info. ### `Updatable` See source code for more info. ### `UpdatableComment` See source code for more info. ### `UpdateBranchProtectionRulePayload` See source code for more info. ### `UpdateCheckRunPayload` See source code for more info. ### `UpdateCheckSuitePreferencesPayload` See source code for more info. ### `UpdateDiscussionCommentPayload` See source code for more info. ### `UpdateDiscussionPayload` See source code for more info. ### `UpdateEnterpriseAdministratorRolePayload` See source code for more info. ### `UpdateEnterpriseAllowPrivateRepositoryForkingSettingPayload` See source code for more info. ### `UpdateEnterpriseDefaultRepositoryPermissionSettingPayload` See source code for more info. 
### `UpdateEnterpriseMembersCanChangeRepositoryVisibilitySettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanCreateRepositoriesSettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanDeleteIssuesSettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanDeleteRepositoriesSettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanInviteCollaboratorsSettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanMakePurchasesSettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanUpdateProtectedBranchesSettingPayload` See source code for more info. ### `UpdateEnterpriseMembersCanViewDependencyInsightsSettingPayload` See source code for more info. ### `UpdateEnterpriseOrganizationProjectsSettingPayload` See source code for more info. ### `UpdateEnterpriseOwnerOrganizationRolePayload` See source code for more info. ### `UpdateEnterpriseProfilePayload` See source code for more info. ### `UpdateEnterpriseRepositoryProjectsSettingPayload` See source code for more info. ### `UpdateEnterpriseTeamDiscussionsSettingPayload` See source code for more info. ### `UpdateEnterpriseTwoFactorAuthenticationRequiredSettingPayload` See source code for more info. ### `UpdateEnvironmentPayload` See source code for more info. ### `UpdateIpAllowListEnabledSettingPayload` See source code for more info. ### `UpdateIpAllowListEntryPayload` See source code for more info. ### `UpdateIpAllowListForInstalledAppsEnabledSettingPayload` See source code for more info. ### `UpdateIssueCommentPayload` See source code for more info. ### `UpdateIssuePayload` See source code for more info. ### `UpdateNotificationRestrictionSettingPayload` See source code for more info. ### `UpdateOrganizationAllowPrivateRepositoryForkingSettingPayload` See source code for more info. ### `UpdateOrganizationWebCommitSignoffSettingPayload` See source code for more info. 
### `UpdateProjectCardPayload` See source code for more info. ### `UpdateProjectColumnPayload` See source code for more info. ### `UpdateProjectDraftIssuePayload` See source code for more info. ### `UpdateProjectNextItemFieldPayload` See source code for more info. ### `UpdateProjectNextPayload` See source code for more info. ### `UpdateProjectPayload` See source code for more info. ### `UpdateProjectV2DraftIssuePayload` See source code for more info. ### `UpdateProjectV2ItemFieldValuePayload` See source code for more info. ### `UpdateProjectV2ItemPositionPayload` See source code for more info. ### `UpdateProjectV2Payload` See source code for more info. ### `UpdatePullRequestBranchPayload` See source code for more info. ### `UpdatePullRequestPayload` See source code for more info. ### `UpdatePullRequestReviewCommentPayload` See source code for more info. ### `UpdatePullRequestReviewPayload` See source code for more info. ### `UpdateRefPayload` See source code for more info. ### `UpdateRepositoryPayload` See source code for more info. ### `UpdateRepositoryWebCommitSignoffSettingPayload` See source code for more info. ### `UpdateSponsorshipPreferencesPayload` See source code for more info. ### `UpdateSubscriptionPayload` See source code for more info. ### `UpdateTeamDiscussionCommentPayload` See source code for more info. ### `UpdateTeamDiscussionPayload` See source code for more info. ### `UpdateTeamsRepositoryPayload` See source code for more info. ### `UpdateTopicsPayload` See source code for more info. ### `UserConnection` See source code for more info. ### `UserContentEditConnection` See source code for more info. ### `UserContentEditEdge` See source code for more info. ### `UserEdge` See source code for more info. ### `UserEmailMetadata` See source code for more info. ### `UserStatusConnection` See source code for more info. ### `UserStatusEdge` See source code for more info. ### `VerifiableDomainConnection` See source code for more info. 
### `VerifiableDomainEdge` See source code for more info. ### `VerifyVerifiableDomainPayload` See source code for more info. ### `Votable` See source code for more info. ### `WorkflowRunConnection` See source code for more info. ### `WorkflowRunEdge` See source code for more info. ### `AddedToProjectEvent` See source code for more info. ### `App` See source code for more info. ### `AssignedEvent` See source code for more info. ### `AutoMergeDisabledEvent` See source code for more info. ### `AutoMergeEnabledEvent` See source code for more info. ### `AutoRebaseEnabledEvent` See source code for more info. ### `AutoSquashEnabledEvent` See source code for more info. ### `AutomaticBaseChangeFailedEvent` See source code for more info. ### `AutomaticBaseChangeSucceededEvent` See source code for more info. ### `BaseRefChangedEvent` See source code for more info. ### `BaseRefDeletedEvent` See source code for more info. ### `BaseRefForcePushedEvent` See source code for more info. ### `Blob` See source code for more info. ### `Bot` See source code for more info. ### `BranchProtectionRule` See source code for more info. ### `BypassForcePushAllowance` See source code for more info. ### `BypassPullRequestAllowance` See source code for more info. ### `CWE` See source code for more info. ### `CheckRun` See source code for more info. ### `CheckSuite` See source code for more info. ### `ClosedEvent` See source code for more info. ### `CodeOfConduct` See source code for more info. ### `CommentDeletedEvent` See source code for more info. ### `Commit` See source code for more info. ### `CommitComment` See source code for more info. ### `CommitCommentThread` See source code for more info. ### `ConnectedEvent` See source code for more info. ### `ConvertToDraftEvent` See source code for more info. ### `ConvertedNoteToIssueEvent` See source code for more info. ### `ConvertedToDiscussionEvent` See source code for more info. ### `CreatedCommitContribution` See source code for more info. 
### `CreatedIssueContribution` See source code for more info. ### `CreatedPullRequestContribution` See source code for more info. ### `CreatedPullRequestReviewContribution` See source code for more info. ### `CreatedRepositoryContribution` See source code for more info. ### `CrossReferencedEvent` See source code for more info. ### `DemilestonedEvent` See source code for more info. ### `DependabotUpdate` See source code for more info. ### `DeployKey` See source code for more info. ### `DeployedEvent` See source code for more info. ### `Deployment` See source code for more info. ### `DeploymentEnvironmentChangedEvent` See source code for more info. ### `DeploymentReview` See source code for more info. ### `DeploymentStatus` See source code for more info. ### `DisconnectedEvent` See source code for more info. ### `Discussion` See source code for more info. ### `DiscussionCategory` See source code for more info. ### `DiscussionComment` See source code for more info. ### `DiscussionPoll` See source code for more info. ### `DiscussionPollOption` See source code for more info. ### `DraftIssue` See source code for more info. ### `Enterprise` See source code for more info. ### `EnterpriseAdministratorInvitation` See source code for more info. ### `EnterpriseIdentityProvider` See source code for more info. ### `EnterpriseRepositoryInfo` See source code for more info. ### `EnterpriseServerInstallation` See source code for more info. ### `EnterpriseServerUserAccount` See source code for more info. ### `EnterpriseServerUserAccountEmail` See source code for more info. ### `EnterpriseServerUserAccountsUpload` See source code for more info. ### `EnterpriseUserAccount` See source code for more info. ### `Environment` See source code for more info. ### `ExternalIdentity` See source code for more info. ### `GenericHovercardContext` See source code for more info. ### `Gist` See source code for more info. ### `GistComment` See source code for more info. 
### `GpgSignature` See source code for more info. ### `HeadRefDeletedEvent` See source code for more info. ### `HeadRefForcePushedEvent` See source code for more info. ### `HeadRefRestoredEvent` See source code for more info. ### `IpAllowListEntry` See source code for more info. ### `Issue` See source code for more info. ### `IssueComment` See source code for more info. ### `JoinedGitHubContribution` See source code for more info. ### `Label` See source code for more info. ### `LabeledEvent` See source code for more info. ### `Language` See source code for more info. ### `License` See source code for more info. ### `LockedEvent` See source code for more info. ### `Mannequin` See source code for more info. ### `MarkedAsDuplicateEvent` See source code for more info. ### `MarketplaceCategory` See source code for more info. ### `MarketplaceListing` See source code for more info. ### `MembersCanDeleteReposClearAuditEntry` See source code for more info. ### `MembersCanDeleteReposDisableAuditEntry` See source code for more info. ### `MembersCanDeleteReposEnableAuditEntry` See source code for more info. ### `MentionedEvent` See source code for more info. ### `MergedEvent` See source code for more info. ### `MigrationSource` See source code for more info. ### `Milestone` See source code for more info. ### `MilestonedEvent` See source code for more info. ### `MovedColumnsInProjectEvent` See source code for more info. ### `OIDCProvider` See source code for more info. ### `OauthApplicationCreateAuditEntry` See source code for more info. ### `OrgAddBillingManagerAuditEntry` See source code for more info. ### `OrgAddMemberAuditEntry` See source code for more info. ### `OrgBlockUserAuditEntry` See source code for more info. ### `OrgConfigDisableCollaboratorsOnlyAuditEntry` See source code for more info. ### `OrgConfigEnableCollaboratorsOnlyAuditEntry` See source code for more info. ### `OrgCreateAuditEntry` See source code for more info. 
### `OrgDisableOauthAppRestrictionsAuditEntry` See source code for more info. ### `OrgDisableSamlAuditEntry` See source code for more info. ### `OrgDisableTwoFactorRequirementAuditEntry` See source code for more info. ### `OrgEnableOauthAppRestrictionsAuditEntry` See source code for more info. ### `OrgEnableSamlAuditEntry` See source code for more info. ### `OrgEnableTwoFactorRequirementAuditEntry` See source code for more info. ### `OrgInviteMemberAuditEntry` See source code for more info. ### `OrgInviteToBusinessAuditEntry` See source code for more info. ### `OrgOauthAppAccessApprovedAuditEntry` See source code for more info. ### `OrgOauthAppAccessDeniedAuditEntry` See source code for more info. ### `OrgOauthAppAccessRequestedAuditEntry` See source code for more info. ### `OrgRemoveBillingManagerAuditEntry` See source code for more info. ### `OrgRemoveMemberAuditEntry` See source code for more info. ### `OrgRemoveOutsideCollaboratorAuditEntry` See source code for more info. ### `OrgRestoreMemberAuditEntry` See source code for more info. ### `OrgRestoreMemberMembershipOrganizationAuditEntryData` See source code for more info. ### `OrgRestoreMemberMembershipRepositoryAuditEntryData` See source code for more info. ### `OrgRestoreMemberMembershipTeamAuditEntryData` See source code for more info. ### `OrgUnblockUserAuditEntry` See source code for more info. ### `OrgUpdateDefaultRepositoryPermissionAuditEntry` See source code for more info. ### `OrgUpdateMemberAuditEntry` See source code for more info. ### `OrgUpdateMemberRepositoryCreationPermissionAuditEntry` See source code for more info. ### `OrgUpdateMemberRepositoryInvitationPermissionAuditEntry` See source code for more info. ### `Organization` See source code for more info. ### `OrganizationIdentityProvider` See source code for more info. ### `OrganizationInvitation` See source code for more info. ### `OrganizationTeamsHovercardContext` See source code for more info. 
### `OrganizationsHovercardContext` See source code for more info. ### `Package` See source code for more info. ### `PackageFile` See source code for more info. ### `PackageTag` See source code for more info. ### `PackageVersion` See source code for more info. ### `PinnedDiscussion` See source code for more info. ### `PinnedEvent` See source code for more info. ### `PinnedIssue` See source code for more info. ### `PrivateRepositoryForkingDisableAuditEntry` See source code for more info. ### `PrivateRepositoryForkingEnableAuditEntry` See source code for more info. ### `Project` See source code for more info. ### `ProjectCard` See source code for more info. ### `ProjectColumn` See source code for more info. ### `ProjectNext` See source code for more info. ### `ProjectNextField` See source code for more info. ### `ProjectNextItem` See source code for more info. ### `ProjectNextItemFieldValue` See source code for more info. ### `ProjectV2` See source code for more info. ### `ProjectV2Field` See source code for more info. ### `ProjectV2Item` See source code for more info. ### `ProjectV2ItemFieldDateValue` See source code for more info. ### `ProjectV2ItemFieldIterationValue` See source code for more info. ### `ProjectV2ItemFieldNumberValue` See source code for more info. ### `ProjectV2ItemFieldSingleSelectValue` See source code for more info. ### `ProjectV2ItemFieldTextValue` See source code for more info. ### `ProjectV2IterationField` See source code for more info. ### `ProjectV2SingleSelectField` See source code for more info. ### `ProjectV2View` See source code for more info. ### `ProjectView` See source code for more info. ### `PublicKey` See source code for more info. ### `PullRequest` See source code for more info. ### `PullRequestCommit` See source code for more info. ### `PullRequestCommitCommentThread` See source code for more info. ### `PullRequestReview` See source code for more info. ### `PullRequestReviewComment` See source code for more info. 
### `PullRequestReviewThread` See source code for more info. ### `PullRequestThread` See source code for more info. ### `Push` See source code for more info. ### `PushAllowance` See source code for more info. ### `Reaction` See source code for more info. ### `ReadyForReviewEvent` See source code for more info. ### `Ref` See source code for more info. ### `ReferencedEvent` See source code for more info. ### `Release` See source code for more info. ### `ReleaseAsset` See source code for more info. ### `RemovedFromProjectEvent` See source code for more info. ### `RenamedTitleEvent` See source code for more info. ### `ReopenedEvent` See source code for more info. ### `RepoAccessAuditEntry` See source code for more info. ### `RepoAddMemberAuditEntry` See source code for more info. ### `RepoAddTopicAuditEntry` See source code for more info. ### `RepoArchivedAuditEntry` See source code for more info. ### `RepoChangeMergeSettingAuditEntry` See source code for more info. ### `RepoConfigDisableAnonymousGitAccessAuditEntry` See source code for more info. ### `RepoConfigDisableCollaboratorsOnlyAuditEntry` See source code for more info. ### `RepoConfigDisableContributorsOnlyAuditEntry` See source code for more info. ### `RepoConfigDisableSockpuppetDisallowedAuditEntry` See source code for more info. ### `RepoConfigEnableAnonymousGitAccessAuditEntry` See source code for more info. ### `RepoConfigEnableCollaboratorsOnlyAuditEntry` See source code for more info. ### `RepoConfigEnableContributorsOnlyAuditEntry` See source code for more info. ### `RepoConfigEnableSockpuppetDisallowedAuditEntry` See source code for more info. ### `RepoConfigLockAnonymousGitAccessAuditEntry` See source code for more info. ### `RepoConfigUnlockAnonymousGitAccessAuditEntry` See source code for more info. ### `RepoCreateAuditEntry` See source code for more info. ### `RepoDestroyAuditEntry` See source code for more info. ### `RepoRemoveMemberAuditEntry` See source code for more info. 
### `RepoRemoveTopicAuditEntry` See source code for more info. ### `Repository` See source code for more info. ### `RepositoryInvitation` See source code for more info. ### `RepositoryMigration` See source code for more info. ### `RepositoryTopic` See source code for more info. ### `RepositoryVisibilityChangeDisableAuditEntry` See source code for more info. ### `RepositoryVisibilityChangeEnableAuditEntry` See source code for more info. ### `RepositoryVulnerabilityAlert` See source code for more info. ### `RestrictedContribution` See source code for more info. ### `ReviewDismissalAllowance` See source code for more info. ### `ReviewDismissedEvent` See source code for more info. ### `ReviewRequest` See source code for more info. ### `ReviewRequestRemovedEvent` See source code for more info. ### `ReviewRequestedEvent` See source code for more info. ### `ReviewStatusHovercardContext` See source code for more info. ### `SavedReply` See source code for more info. ### `SecurityAdvisory` See source code for more info. ### `SmimeSignature` See source code for more info. ### `SponsorsActivity` See source code for more info. ### `SponsorsListing` See source code for more info. ### `SponsorsTier` See source code for more info. ### `Sponsorship` See source code for more info. ### `SponsorshipNewsletter` See source code for more info. ### `SshSignature` See source code for more info. ### `Status` See source code for more info. ### `StatusCheckRollup` See source code for more info. ### `StatusContext` See source code for more info. ### `SubscribedEvent` See source code for more info. ### `Tag` See source code for more info. ### `Team` See source code for more info. ### `TeamAddMemberAuditEntry` See source code for more info. ### `TeamAddRepositoryAuditEntry` See source code for more info. ### `TeamChangeParentTeamAuditEntry` See source code for more info. ### `TeamDiscussion` See source code for more info. ### `TeamDiscussionComment` See source code for more info. 
### `TeamRemoveMemberAuditEntry` See source code for more info. ### `TeamRemoveRepositoryAuditEntry` See source code for more info. ### `Topic` See source code for more info. ### `TransferredEvent` See source code for more info. ### `Tree` See source code for more info. ### `UnassignedEvent` See source code for more info. ### `UnknownSignature` See source code for more info. ### `UnlabeledEvent` See source code for more info. ### `UnlockedEvent` See source code for more info. ### `UnmarkedAsDuplicateEvent` See source code for more info. ### `UnpinnedEvent` See source code for more info. ### `UnsubscribedEvent` See source code for more info. ### `User` See source code for more info. ### `UserBlockedEvent` See source code for more info. ### `UserContentEdit` See source code for more info. ### `UserStatus` See source code for more info. ### `VerifiableDomain` See source code for more info. ### `ViewerHovercardContext` See source code for more info. ### `Workflow` See source code for more info. ### `WorkflowRun` See source code for more info. ### `Assignee` See source code for more info. ### `AuditEntryActor` See source code for more info. ### `BranchActorAllowanceActor` See source code for more info. ### `Closer` See source code for more info. ### `CreatedIssueOrRestrictedContribution` See source code for more info. ### `CreatedPullRequestOrRestrictedContribution` See source code for more info. ### `CreatedRepositoryOrRestrictedContribution` See source code for more info. ### `DeploymentReviewer` See source code for more info. ### `EnterpriseMember` See source code for more info. ### `IpAllowListOwner` See source code for more info. ### `IssueOrPullRequest` See source code for more info. ### `IssueTimelineItem` See source code for more info. ### `IssueTimelineItems` See source code for more info. ### `MilestoneItem` See source code for more info. ### `OrgRestoreMemberAuditEntryMembership` See source code for more info. 
### `OrganizationAuditEntry` See source code for more info. ### `OrganizationOrUser` See source code for more info. ### `PermissionGranter` See source code for more info. ### `PinnableItem` See source code for more info. ### `ProjectCardItem` See source code for more info. ### `ProjectNextItemContent` See source code for more info. ### `ProjectV2FieldConfiguration` See source code for more info. ### `ProjectV2ItemContent` See source code for more info. ### `ProjectV2ItemFieldValue` See source code for more info. ### `PullRequestTimelineItem` See source code for more info. ### `PullRequestTimelineItems` See source code for more info. ### `PushAllowanceActor` See source code for more info. ### `Reactor` See source code for more info. ### `ReferencedSubject` See source code for more info. ### `RenamedTitleSubject` See source code for more info. ### `RequestedReviewer` See source code for more info. ### `ReviewDismissalAllowanceActor` See source code for more info. ### `SearchResultItem` See source code for more info. ### `Sponsor` See source code for more info. ### `SponsorableItem` See source code for more info. ### `StatusCheckRollupContext` See source code for more info. ### `VerifiableDomainOwner` See source code for more info. # user Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-user # `prefect_github.user` This is a module containing: GitHub query\_user\* tasks ## Functions ### `query_user` ```python theme={null} query_user(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The query root of GitHub's GraphQL interface. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_gist` ```python theme={null} query_user_gist(login: str, name: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find gist by repo name. **Args:** * `login`: The user's login. * `name`: The gist name to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_gists` ```python theme={null} query_user_gists(login: str, github_credentials: GitHubCredentials, privacy: graphql_schema.GistPrivacy = None, order_by: graphql_schema.GistOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of the Gists the user has created. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: Filters Gists according to privacy. * `order_by`: Ordering options for gists returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_issues` ```python theme={null} query_user_issues(login: str, labels: Iterable[str], states: Iterable[graphql_schema.IssueState], github_credentials: GitHubCredentials, order_by: graphql_schema.IssueOrder = None, filter_by: graphql_schema.IssueFilters = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of issues associated with this user. **Args:** * `login`: The user's login. * `labels`: A list of label names to filter the pull requests by. * `states`: A list of states to filter the issues by. * `github_credentials`: Credentials to use for authentication with GitHub. * `order_by`: Ordering options for issues returned from the connection. * `filter_by`: Filtering options for issues returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_status` ```python theme={null} query_user_status(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The user's description of what they're currently doing. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_project` ```python theme={null} query_user_project(login: str, number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find project by number. **Args:** * `login`: The user's login. * `number`: The project number to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_packages` ```python theme={null} query_user_packages(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, names: Optional[Iterable[str]] = None, repository_id: Optional[str] = None, package_type: graphql_schema.PackageType = None, order_by: graphql_schema.PackageOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of packages under the owner. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `names`: Find packages by their names. * `repository_id`: Find packages in a repository by ID. * `package_type`: Filter registry package by type. * `order_by`: Ordering of the returned packages. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_projects` ```python theme={null} query_user_projects(login: str, states: Iterable[graphql_schema.ProjectState], github_credentials: GitHubCredentials, order_by: graphql_schema.ProjectOrder = None, search: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of projects under the owner. **Args:** * `login`: The user's login. * `states`: A list of states to filter the projects by. * `github_credentials`: Credentials to use for authentication with GitHub. * `order_by`: Ordering options for projects returned from the connection. * `search`: Query to search projects by, currently only searching by name. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_sponsors` ```python theme={null} query_user_sponsors(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, tier_id: Optional[str] = None, order_by: graphql_schema.SponsorOrder = {'field': 'RELEVANCE', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of sponsors for this user or organization. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. 
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `tier_id`: If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see. * `order_by`: Ordering options for sponsors returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_watching` ```python theme={null} query_user_watching(login: str, github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None, owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories the given user is watching. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `affiliations`: Affiliation options for repositories returned from the connection. If none specified, the results will include repositories for which the current viewer is an owner, collaborator, or member. * `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `after`: Returns the elements in the list that come after the specified cursor. 
* `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_project_v2` ```python theme={null} query_user_project_v2(login: str, number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find a project by number. **Args:** * `login`: The user's login. * `number`: The project number. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_followers` ```python theme={null} query_user_followers(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users the given user is followed by. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_following` ```python theme={null} query_user_following(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users the given user is following. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_projects_v2` ```python theme={null} query_user_projects_v2(login: str, github_credentials: GitHubCredentials, query: Optional[str] = None, order_by: graphql_schema.ProjectV2Order = {'field': 'NUMBER', 'direction': 'DESC'}, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of projects under the owner. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `query`: A project to search for under the owner. * `order_by`: How to order the returned projects. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. 
**Returns:** * A dict of the returned fields. ### `query_user_repository` ```python theme={null} query_user_repository(login: str, name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find Repository. **Args:** * `login`: The user's login. * `name`: Name of Repository to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_sponsoring` ```python theme={null} query_user_sponsoring(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorOrder = {'field': 'RELEVANCE', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of users and organizations this entity is sponsoring. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for the users and organizations returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_public_keys` ```python theme={null} query_user_public_keys(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of public keys associated with this user. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_project_next` ```python theme={null} query_user_project_next(login: str, number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find a project by project (beta) number. **Args:** * `login`: The user's login. * `number`: The project (beta) number. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_pinned_items` ```python theme={null} query_user_pinned_items(login: str, types: Iterable[graphql_schema.PinnableItemType], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories and gists this profile owner has pinned to their profile. **Args:** * `login`: The user's login. 
* `types`: Filter the types of pinned items that are returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_projects_next` ```python theme={null} query_user_projects_next(login: str, github_credentials: GitHubCredentials, query: Optional[str] = None, sort_by: graphql_schema.ProjectNextOrderField = 'TITLE', after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of projects (beta) under the owner. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `query`: A project (beta) to search for under the owner. * `sort_by`: How to order the returned projects (beta). * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_repositories` ```python theme={null} query_user_repositories(login: str, github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None, owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, is_fork: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories that the user owns. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `affiliations`: Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. * `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `is_fork`: If non-null, filters repositories according to whether they are forks of another repository. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_item_showcase` ```python theme={null} query_user_item_showcase(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_gist_comments` ```python theme={null} query_user_gist_comments(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of gist comments made by this user. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_organization` ```python theme={null} query_user_organization(login: str, organization_login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find an organization by its login that the user belongs to. **Args:** * `login`: The user's login. * `organization_login`: The login of the organization to find. * `github_credentials`: Credentials to use for authentication with GitHub. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_pull_requests` ```python theme={null} query_user_pull_requests(login: str, states: Iterable[graphql_schema.PullRequestState], labels: Iterable[str], github_credentials: GitHubCredentials, head_ref_name: Optional[str] = None, base_ref_name: Optional[str] = None, order_by: graphql_schema.IssueOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of pull requests associated with this user. **Args:** * `login`: The user's login. * `states`: A list of states to filter the pull requests by. * `labels`: A list of label names to filter the pull requests by. * `github_credentials`: Credentials to use for authentication with GitHub. * `head_ref_name`: The head ref name to filter the pull requests by. * `base_ref_name`: The base ref name to filter the pull requests by. * `order_by`: Ordering options for pull requests returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_saved_replies` ```python theme={null} query_user_saved_replies(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SavedReplyOrder = {'field': 'UPDATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Replies this user has saved. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: The field to order saved replies by. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_pinnable_items` ```python theme={null} query_user_pinnable_items(login: str, types: Iterable[graphql_schema.PinnableItemType], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories and gists this profile owner can pin to their profile. **Args:** * `login`: The user's login. * `types`: Filter the types of pinnable items that are returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_issue_comments` ```python theme={null} query_user_issue_comments(login: str, github_credentials: GitHubCredentials, order_by: graphql_schema.IssueCommentOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of issue comments made by this user. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `order_by`: Ordering options for issue comments returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_organizations` ```python theme={null} query_user_organizations(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of organizations the user belongs to. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_recent_projects` ```python theme={null} query_user_recent_projects(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Recent projects that this user has modified in the context of the owner. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_commit_comments` ```python theme={null} query_user_commit_comments(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of commit comments made by this user. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. 
**Returns:** * A dict of the returned fields. ### `query_user_sponsors_listing` ```python theme={null} query_user_sponsors_listing(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The GitHub Sponsors listing for this user or organization. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_top_repositories` ```python theme={null} query_user_top_repositories(login: str, order_by: graphql_schema.RepositoryOrder, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, since: Optional[datetime] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Repositories the user has contributed to, ordered by contribution rank, plus repositories the user has created. **Args:** * `login`: The user's login. * `order_by`: Ordering options for repositories returned from the connection. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `since`: How far back in time to fetch contributed repositories. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
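The `after`/`first` cursor parameters shared by these tasks follow GitHub's GraphQL connection pattern: request a page, then feed the returned end cursor back in as `after` for the next call. A minimal sketch of that loop, where `fetch_page` and the `nodes`/`page_info` field names are illustrative stand-ins for any `query_user_*` task and its result shape, not the library's actual API:

```python
# Hypothetical two-page result set, keyed by cursor (None = first page).
PAGES = {
    None: {"nodes": [1, 2], "page_info": {"has_next_page": True, "end_cursor": "c2"}},
    "c2": {"nodes": [3, 4], "page_info": {"has_next_page": False, "end_cursor": "c4"}},
}

def fetch_page(first, after=None):
    # Stand-in for a query_user_* task call with `first`/`after` arguments.
    return PAGES[after]

def fetch_all(page_size=2):
    # Walk the connection until the server reports no further pages.
    items, cursor = [], None
    while True:
        page = fetch_page(first=page_size, after=cursor)
        items.extend(page["nodes"])
        if not page["page_info"]["has_next_page"]:
            return items
        cursor = page["page_info"]["end_cursor"]
```

The same loop works with `before`/`last` for backward pagination.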
### `query_user_sponsors_activities` ```python theme={null} query_user_sponsors_activities(login: str, actions: Iterable[graphql_schema.SponsorsActivityAction], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, period: graphql_schema.SponsorsActivityPeriod = 'MONTH', order_by: graphql_schema.SponsorsActivityOrder = {'field': 'TIMESTAMP', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Events involving this sponsorable, such as new sponsorships. **Args:** * `login`: The user's login. * `actions`: Filter activities to only the specified actions. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `period`: Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred. * `order_by`: Ordering options for activity returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_interaction_ability` ```python theme={null} query_user_interaction_ability(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The interaction ability settings for this user. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_starred_repositories` ```python theme={null} query_user_starred_repositories(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, owned_by_viewer: Optional[bool] = None, order_by: graphql_schema.StarOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Repositories the user has starred. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `owned_by_viewer`: Filters starred repositories to only return repositories owned by the viewer. * `order_by`: Order for connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_repository_discussions` ```python theme={null} query_user_repository_discussions(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.DiscussionOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, repository_id: Optional[str] = None, answered: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Discussions this user has started. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. 
* `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for discussions returned from the connection. * `repository_id`: Filter discussions to only those in a specific repository. * `answered`: Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_sponsorships_as_sponsor` ```python theme={null} query_user_sponsorships_as_sponsor(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorshipOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` This object's sponsorships as the sponsor. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
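Defaults such as `{'field': 'UPDATED_AT', 'direction': 'DESC'}` in the signatures above are plain dicts describing a GraphQL ordering input. A small sketch of overriding only the direction while keeping a documented default field (the helper name is illustrative, not part of the library):

```python
# Documented default ordering for query_user_saved_replies.
DEFAULT_ORDER = {"field": "UPDATED_AT", "direction": "DESC"}

def make_order(direction="DESC", field=None):
    # Copy the default so callers never mutate the shared dict,
    # then override only what was requested.
    order = dict(DEFAULT_ORDER)
    if field is not None:
        order["field"] = field
    order["direction"] = direction
    return order
```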
### `query_user_sponsorship_newsletters` ```python theme={null} query_user_sponsorship_newsletters(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorshipNewsletterOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of sponsorship updates sent from this sponsorable to sponsors. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for sponsorship updates returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_contributions_collection` ```python theme={null} query_user_contributions_collection(login: str, github_credentials: GitHubCredentials, organization_id: Optional[str] = None, from_: Optional[datetime] = None, to: Optional[datetime] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The collection of contributions this user has made to different repositories. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `organization_id`: The ID of the organization used to filter contributions. * `from_`: Only contributions made at this time or later will be counted. If omitted, defaults to a year ago. * `to`: Only contributions made before and up to (including) this time will be counted. 
If omitted, defaults to the current time or one year after the provided `from_` argument.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_user_sponsorships_as_maintainer`

```python theme={null}
query_user_sponsorships_as_maintainer(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, include_private: bool = False, order_by: graphql_schema.SponsorshipOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

This object's sponsorships as the maintainer.

**Args:**

* `login`: The user's login.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `include_private`: Whether or not to include private sponsorships in the result set.
* `order_by`: Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
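The `from_`/`to` defaults documented for `query_user_contributions_collection` describe a one-year window. A local sketch of that fallback logic (the helper is illustrative, not the library's implementation, and uses 365 days as an approximation of "a year"):

```python
from datetime import datetime, timedelta, timezone

def contribution_window(from_=None, to=None):
    # Mirrors the documented defaults: `to` falls back to now (or to one
    # year after `from_` when that was given), and `from_` falls back to
    # one year before `to`.
    if to is None:
        if from_ is not None:
            to = from_ + timedelta(days=365)
        else:
            to = datetime.now(timezone.utc)
    if from_ is None:
        from_ = to - timedelta(days=365)
    return from_, to
```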
### `query_user_repositories_contributed_to` ```python theme={null} query_user_repositories_contributed_to(login: str, github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, is_locked: Optional[bool] = None, include_user_repositories: Optional[bool] = None, contribution_types: Iterable[graphql_schema.RepositoryContributionType] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories that the user recently contributed to. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `include_user_repositories`: If true, include user repositories. * `contribution_types`: If non-null, include only the specified types of contributions. The GitHub.com UI uses \[COMMIT, ISSUE, PULL\_REQUEST, REPOSITORY]. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_repository_discussion_comments` ```python theme={null} query_user_repository_discussion_comments(login: str, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, repository_id: Optional[str] = None, only_answers: bool = False, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Discussion comments this user has authored. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `repository_id`: Filter discussion comments to only those in a specific repository. * `only_answers`: Filter discussion comments to only those that were marked as the answer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_user_sponsorship_for_viewer_as_sponsor` ```python theme={null} query_user_sponsorship_for_viewer_as_sponsor(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active. **Args:** * `login`: The user's login. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_user_sponsorship_for_viewer_as_sponsorable`

```python theme={null}
query_user_sponsorship_for_viewer_as_sponsorable(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active.

**Args:**

* `login`: The user's login.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

# utils

Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-utils

# `prefect_github.utils`

Utilities to assist with using generated collections.

## Functions

### `camel_to_snake_case`

```python theme={null}
camel_to_snake_case(string: str) -> str
```

Converts CamelCase and lowerCamelCase to snake\_case.

**Args:**

* `string`: The string in CamelCase or lowerCamelCase to convert.

**Returns:**

* A snake\_case version of the string.

### `initialize_return_fields_defaults`

```python theme={null}
initialize_return_fields_defaults(config_path: Union[Path, str]) -> List
```

Reads config\_path to parse out the desired default fields to return.

**Args:**

* `config_path`: The path to the config file.

**Returns:**

* A list of the default fields to return.

### `strip_kwargs`

```python theme={null}
strip_kwargs(**kwargs: Dict) -> Dict
```

Drops keyword arguments whose value is None, because `sgqlc.Operation` errors out if a keyword argument is provided but set to None.

**Args:**

* `**kwargs`: Input keyword arguments.

**Returns:**

* Stripped version of kwargs.
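The two helpers above are small enough to sketch in full. These reimplementations illustrate the documented behavior only; they are not the library's exact source:

```python
import re

def camel_to_snake_case(string):
    # Insert an underscore before each capital letter (except a leading
    # one), then lowercase: "pageInfo" -> "page_info".
    return re.sub(r"(?<!^)(?=[A-Z])", "_", string).lower()

def strip_kwargs(**kwargs):
    # Drop None-valued keyword arguments so they are never passed
    # through to sgqlc.Operation.
    return {key: value for key, value in kwargs.items() if value is not None}
```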
# viewer

Source: https://docs.prefect.io/integrations/prefect-github/api-ref/prefect_github-viewer

# `prefect_github.viewer`

Module containing GitHub `query_viewer*` tasks.

## Functions

### `query_viewer`

```python theme={null}
query_viewer(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

The query root of GitHub's GraphQL interface.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_gist`

```python theme={null}
query_viewer_gist(name: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Find gist by repo name.

**Args:**

* `name`: The gist name to find.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_gists`

```python theme={null}
query_viewer_gists(github_credentials: GitHubCredentials, privacy: graphql_schema.GistPrivacy = None, order_by: graphql_schema.GistOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of the Gists the user has created.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `privacy`: Filters Gists according to privacy.
* `order_by`: Ordering options for gists returned from the connection.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_issues`

```python theme={null}
query_viewer_issues(labels: Iterable[str], states: Iterable[graphql_schema.IssueState], github_credentials: GitHubCredentials, order_by: graphql_schema.IssueOrder = None, filter_by: graphql_schema.IssueFilters = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of issues associated with this user.

**Args:**

* `labels`: A list of label names to filter the issues by.
* `states`: A list of states to filter the issues by.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `order_by`: Ordering options for issues returned from the connection.
* `filter_by`: Filtering options for issues returned from the connection.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_status`

```python theme={null}
query_viewer_status(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

The user's description of what they're currently doing.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.
### `query_viewer_project` ```python theme={null} query_viewer_project(number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find project by number. **Args:** * `number`: The project number to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_packages` ```python theme={null} query_viewer_packages(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, names: Optional[Iterable[str]] = None, repository_id: Optional[str] = None, package_type: graphql_schema.PackageType = None, order_by: graphql_schema.PackageOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of packages under the owner. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `names`: Find packages by their names. * `repository_id`: Find packages in a repository by ID. * `package_type`: Filter registry package by type. * `order_by`: Ordering of the returned packages. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
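Every task here accepts `return_fields` as snake\_case names even though GitHub's GraphQL responses use camelCase keys. A hypothetical sketch of that conversion-and-subset step (the helper names and sample field names are invented for illustration):

```python
import re

def to_snake(name):
    # "databaseId" -> "database_id"
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def subset_fields(response, return_fields):
    # Re-key the camelCase response as snake_case, then keep only the
    # requested fields.
    snake = {to_snake(key): value for key, value in response.items()}
    return {key: snake[key] for key in return_fields}

result = subset_fields(
    {"databaseId": 1, "createdAt": "2020-01-01", "viewerCanReact": True},
    ["database_id", "created_at"],
)
```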
### `query_viewer_projects` ```python theme={null} query_viewer_projects(states: Iterable[graphql_schema.ProjectState], github_credentials: GitHubCredentials, order_by: graphql_schema.ProjectOrder = None, search: Optional[str] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of projects under the owner. **Args:** * `states`: A list of states to filter the projects by. * `github_credentials`: Credentials to use for authentication with GitHub. * `order_by`: Ordering options for projects returned from the connection. * `search`: Query to search projects by, currently only searching by name. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_sponsors` ```python theme={null} query_viewer_sponsors(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, tier_id: Optional[str] = None, order_by: graphql_schema.SponsorOrder = {'field': 'RELEVANCE', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of sponsors for this user or organization. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. 
* `last`: Returns the last *n* elements from the list.
* `tier_id`: If given, will filter for sponsors at the given tier. Will only return sponsors whose tier the viewer is permitted to see.
* `order_by`: Ordering options for sponsors returned from the connection.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_watching`

```python theme={null}
query_viewer_watching(github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None, owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of repositories the given user is watching.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `privacy`: If non-null, filters repositories according to privacy.
* `order_by`: Ordering options for repositories returned from the connection.
* `affiliations`: Affiliation options for repositories returned from the connection. If none specified, the results will include repositories for which the current viewer is an owner, collaborator, or member.
* `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns.
* `is_locked`: If non-null, filters repositories according to whether they have been locked.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_project_v2` ```python theme={null} query_viewer_project_v2(number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find a project by number. **Args:** * `number`: The project number. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_followers` ```python theme={null} query_viewer_followers(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users the given user is followed by. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_following` ```python theme={null} query_viewer_following(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of users the given user is following. 
**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_projects_v2`

```python theme={null}
query_viewer_projects_v2(github_credentials: GitHubCredentials, query: Optional[str] = None, order_by: graphql_schema.ProjectV2Order = {'field': 'NUMBER', 'direction': 'DESC'}, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of projects under the owner.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `query`: A project to search for under the owner.
* `order_by`: How to order the returned projects.
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_repository`

```python theme={null}
query_viewer_repository(name: str, github_credentials: GitHubCredentials, follow_renames: bool = True, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

Find Repository.

**Args:**

* `name`: Name of Repository to find.
* `github_credentials`: Credentials to use for authentication with GitHub.
* `follow_renames`: Follow repository renames. If disabled, a repository referenced by its old name will return an error. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_sponsoring` ```python theme={null} query_viewer_sponsoring(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorOrder = {'field': 'RELEVANCE', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of users and organizations this entity is sponsoring. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for the users and organizations returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_public_keys` ```python theme={null} query_viewer_public_keys(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of public keys associated with this user. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. 
* `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_project_next` ```python theme={null} query_viewer_project_next(number: int, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find a project by project (beta) number. **Args:** * `number`: The project (beta) number. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_pinned_items` ```python theme={null} query_viewer_pinned_items(types: Iterable[graphql_schema.PinnableItemType], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories and gists this profile owner has pinned to their profile. **Args:** * `types`: Filter the types of pinned items that are returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
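The `after`, `before`, `first`, and `last` parameters shared by the list queries above follow GitHub's cursor-based (Relay-style) pagination. Below is a minimal sketch of walking every page; `fetch_page` is a hypothetical stand-in for any of these query tasks, and the `nodes`/`page_info`/`has_next_page`/`end_cursor` keys assume the usual snake\_case connection shape is requested via `return_fields`:

```python theme={null}
from typing import Any, Callable, Dict, List, Optional


def collect_all_nodes(
    fetch_page: Callable[..., Dict[str, Any]],
    page_size: int = 100,
) -> List[Any]:
    """Accumulate every node by following `end_cursor` until exhausted.

    `fetch_page` is a hypothetical callable accepting `first`/`after`
    keyword arguments and returning a dict with "nodes" and "page_info"
    keys (the snake_case GraphQL connection shape).
    """
    nodes: List[Any] = []
    cursor: Optional[str] = None
    while True:
        page = fetch_page(first=page_size, after=cursor)
        nodes.extend(page["nodes"])
        info = page["page_info"]
        if not info["has_next_page"]:
            return nodes
        cursor = info["end_cursor"]
```

In practice, `fetch_page` could be a small wrapper that partially applies `github_credentials` and `return_fields` to one of the tasks above.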
### `query_viewer_projects_next`

```python theme={null}
query_viewer_projects_next(github_credentials: GitHubCredentials, query: Optional[str] = None, sort_by: graphql_schema.ProjectNextOrderField = 'TITLE', after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of projects (beta) under the owner.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `query`: A project (beta) to search for under the owner.
* `sort_by`: How to order the returned projects (beta).
* `after`: Returns the elements in the list that come after the specified cursor.
* `before`: Returns the elements in the list that come before the specified cursor.
* `first`: Returns the first *n* elements from the list.
* `last`: Returns the last *n* elements from the list.
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json.

**Returns:**

* A dict of the returned fields.

### `query_viewer_repositories`

```python theme={null}
query_viewer_repositories(github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, affiliations: Iterable[graphql_schema.RepositoryAffiliation] = None, owner_affiliations: Iterable[graphql_schema.RepositoryAffiliation] = ('OWNER', 'COLLABORATOR'), is_locked: Optional[bool] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, is_fork: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any]
```

A list of repositories that the user owns.

**Args:**

* `github_credentials`: Credentials to use for authentication with GitHub.
* `privacy`: If non-null, filters repositories according to privacy.
* `order_by`: Ordering options for repositories returned from the connection.
* `affiliations`: Array of viewer's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the current viewer owns. * `owner_affiliations`: Array of owner's affiliation options for repositories returned from the connection. For example, OWNER will include only repositories that the organization or user being viewed owns. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `is_fork`: If non-null, filters repositories according to whether they are forks of another repository. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_item_showcase` ```python theme={null} query_viewer_item_showcase(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Showcases a selection of repositories and gists that the profile owner has either curated or that have been selected automatically based on popularity. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_gist_comments` ```python theme={null} query_viewer_gist_comments(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of gist comments made by this user. 
**Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_organization` ```python theme={null} query_viewer_organization(login: str, github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Find an organization by its login that the user belongs to. **Args:** * `login`: The login of the organization to find. * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_pull_requests` ```python theme={null} query_viewer_pull_requests(states: Iterable[graphql_schema.PullRequestState], labels: Iterable[str], github_credentials: GitHubCredentials, head_ref_name: Optional[str] = None, base_ref_name: Optional[str] = None, order_by: graphql_schema.IssueOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of pull requests associated with this user. **Args:** * `states`: A list of states to filter the pull requests by. * `labels`: A list of label names to filter the pull requests by. * `github_credentials`: Credentials to use for authentication with GitHub. * `head_ref_name`: The head ref name to filter the pull requests by. * `base_ref_name`: The base ref name to filter the pull requests by. 
* `order_by`: Ordering options for pull requests returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_saved_replies` ```python theme={null} query_viewer_saved_replies(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SavedReplyOrder = {'field': 'UPDATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Replies this user has saved. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: The field to order saved replies by. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_pinnable_items` ```python theme={null} query_viewer_pinnable_items(types: Iterable[graphql_schema.PinnableItemType], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories and gists this profile owner can pin to their profile. 
**Args:** * `types`: Filter the types of pinnable items that are returned. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_issue_comments` ```python theme={null} query_viewer_issue_comments(github_credentials: GitHubCredentials, order_by: graphql_schema.IssueCommentOrder = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of issue comments made by this user. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `order_by`: Ordering options for issue comments returned from the connection. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_organizations` ```python theme={null} query_viewer_organizations(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of organizations the user belongs to. 
**Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_recent_projects` ```python theme={null} query_viewer_recent_projects(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Recent projects that this user has modified in the context of the owner. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_commit_comments` ```python theme={null} query_viewer_commit_comments(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of commit comments made by this user. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. 
* `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_sponsors_listing` ```python theme={null} query_viewer_sponsors_listing(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The GitHub Sponsors listing for this user or organization. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_top_repositories` ```python theme={null} query_viewer_top_repositories(order_by: graphql_schema.RepositoryOrder, github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, since: Optional[datetime] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Repositories the user has contributed to, ordered by contribution rank, plus repositories the user has created. **Args:** * `order_by`: Ordering options for repositories returned from the connection. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `since`: How far back in time to fetch contributed repositories. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. 
**Returns:** * A dict of the returned fields. ### `query_viewer_sponsors_activities` ```python theme={null} query_viewer_sponsors_activities(actions: Iterable[graphql_schema.SponsorsActivityAction], github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, period: graphql_schema.SponsorsActivityPeriod = 'MONTH', order_by: graphql_schema.SponsorsActivityOrder = {'field': 'TIMESTAMP', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Events involving this sponsorable, such as new sponsorships. **Args:** * `actions`: Filter activities to only the specified actions. * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `period`: Filter activities returned to only those that occurred in the most recent specified time period. Set to ALL to avoid filtering by when the activity occurred. * `order_by`: Ordering options for activity returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_interaction_ability` ```python theme={null} query_viewer_interaction_ability(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The interaction ability settings for this user. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_viewer_starred_repositories` ```python theme={null} query_viewer_starred_repositories(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, owned_by_viewer: Optional[bool] = None, order_by: graphql_schema.StarOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Repositories the user has starred. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `owned_by_viewer`: Filters starred repositories to only return repositories owned by the viewer. * `order_by`: Order for connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_repository_discussions` ```python theme={null} query_viewer_repository_discussions(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.DiscussionOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, repository_id: Optional[str] = None, answered: Optional[bool] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Discussions this user has started. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. 
* `order_by`: Ordering options for discussions returned from the connection. * `repository_id`: Filter discussions to only those in a specific repository. * `answered`: Filter discussions to only those that have been answered or not. Defaults to including both answered and unanswered discussions. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_sponsorships_as_sponsor` ```python theme={null} query_viewer_sponsorships_as_sponsor(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorshipOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` This object's sponsorships as the sponsor. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_viewer_sponsorship_newsletters` ```python theme={null} query_viewer_sponsorship_newsletters(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, order_by: graphql_schema.SponsorshipNewsletterOrder = {'field': 'CREATED_AT', 'direction': 'DESC'}, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` List of sponsorship updates sent from this sponsorable to sponsors. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `order_by`: Ordering options for sponsorship updates returned from the connection. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_contributions_collection` ```python theme={null} query_viewer_contributions_collection(github_credentials: GitHubCredentials, organization_id: Optional[str] = None, from_: Optional[datetime] = None, to: Optional[datetime] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The collection of contributions this user has made to different repositories. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `organization_id`: The ID of the organization used to filter contributions. * `from_`: Only contributions made at this time or later will be counted. If omitted, defaults to a year ago. * `to`: Only contributions made before and up to (including) this time will be counted. If omitted, defaults to the current time or one year from the provided from argument. 
* `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_sponsorships_as_maintainer` ```python theme={null} query_viewer_sponsorships_as_maintainer(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, include_private: bool = False, order_by: graphql_schema.SponsorshipOrder = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` This object's sponsorships as the maintainer. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `include_private`: Whether or not to include private sponsorships in the result set. * `order_by`: Ordering options for sponsorships returned from this connection. If left blank, the sponsorships will be ordered based on relevancy to the viewer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_viewer_repositories_contributed_to` ```python theme={null} query_viewer_repositories_contributed_to(github_credentials: GitHubCredentials, privacy: graphql_schema.RepositoryPrivacy = None, order_by: graphql_schema.RepositoryOrder = None, is_locked: Optional[bool] = None, include_user_repositories: Optional[bool] = None, contribution_types: Iterable[graphql_schema.RepositoryContributionType] = None, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` A list of repositories that the user recently contributed to. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `privacy`: If non-null, filters repositories according to privacy. * `order_by`: Ordering options for repositories returned from the connection. * `is_locked`: If non-null, filters repositories according to whether they have been locked. * `include_user_repositories`: If true, include user repositories. * `contribution_types`: If non-null, include only the specified types of contributions. The GitHub.com UI uses \[COMMIT, ISSUE, PULL\_REQUEST, REPOSITORY]. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
### `query_viewer_repository_discussion_comments` ```python theme={null} query_viewer_repository_discussion_comments(github_credentials: GitHubCredentials, after: Optional[str] = None, before: Optional[str] = None, first: Optional[int] = None, last: Optional[int] = None, repository_id: Optional[str] = None, only_answers: bool = False, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` Discussion comments this user has authored. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `after`: Returns the elements in the list that come after the specified cursor. * `before`: Returns the elements in the list that come before the specified cursor. * `first`: Returns the first *n* elements from the list. * `last`: Returns the last *n* elements from the list. * `repository_id`: Filter discussion comments to only those in a specific repository. * `only_answers`: Filter discussion comments to only those that were marked as the answer. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. ### `query_viewer_sponsorship_for_viewer_as_sponsor` ```python theme={null} query_viewer_sponsorship_for_viewer_as_sponsor(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The sponsorship from the viewer to this user/organization; that is, the sponsorship where you're the sponsor. Only returns a sponsorship if it is active. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. 
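Every task above accepts the same `return_fields` parameter, which subsets the fields returned by the query. A toy illustration of that subsetting behavior, where `pick_fields` and the field names are hypothetical (the real tasks resolve their defaults from configs/query/\*.json):

```python theme={null}
from typing import Any, Dict, Iterable, Optional


def pick_fields(
    response: Dict[str, Any],
    return_fields: Optional[Iterable[str]] = None,
    defaults: Iterable[str] = ("id",),
) -> Dict[str, Any]:
    """Keep only the requested snake_case fields, falling back to defaults.

    A conceptual sketch only; it mimics what passing `return_fields`
    to the query tasks does to the returned dict.
    """
    wanted = set(return_fields) if return_fields is not None else set(defaults)
    return {key: value for key, value in response.items() if key in wanted}
```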
### `query_viewer_sponsorship_for_viewer_as_sponsorable` ```python theme={null} query_viewer_sponsorship_for_viewer_as_sponsorable(github_credentials: GitHubCredentials, return_fields: Optional[Iterable[str]] = None) -> Dict[str, Any] ``` The sponsorship from this user/organization to the viewer; that is, the sponsorship you're receiving. Only returns a sponsorship if it is active. **Args:** * `github_credentials`: Credentials to use for authentication with GitHub. * `return_fields`: Subset the return fields (as snake\_case); defaults to fields listed in configs/query/\*.json. **Returns:** * A dict of the returned fields. # prefect-github Source: https://docs.prefect.io/integrations/prefect-github/index Prefect-github makes it easy to interact with GitHub repositories and use GitHub credentials. ## Getting started ### Prerequisites * A [GitHub account](https://github.com/). ### Install `prefect-github` The following command will install a version of `prefect-github` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[github]" ``` Upgrade to the latest versions of `prefect` and `prefect-github`: ```bash theme={null} pip install -U "prefect[github]" ``` ### Register newly installed block types Register the block types in the `prefect-github` module to make them available for use. ```bash theme={null} prefect block register -m prefect_github ``` ## Examples In the examples below, you create blocks with Python code. Alternatively, blocks can be created through the Prefect UI. To create a deployment and run a deployment where the flow code is stored in a private GitHub repository, you can use the `GitHubCredentials` block. 
A deployment can use flow code stored in a GitHub repository without using this library in either of the following cases: * The repository is public * The deployment uses a [Secret block](https://docs.prefect.io/latest/develop/blocks/) to store the token Code to create a GitHub Credentials block: ```python theme={null} from prefect_github import GitHubCredentials github_credentials_block = GitHubCredentials(token="my_token") github_credentials_block.save(name="my-github-credentials-block") ``` ### Access flow code stored in a private GitHub repository in a deployment Use the credentials block you created above to pass the GitHub access token during deployment creation. The code below assumes there's flow code stored in a private GitHub repository. ```python theme={null} from prefect import flow from prefect.runner.storage import GitRepository from prefect_github import GitHubCredentials if __name__ == "__main__": source = GitRepository( url="https://github.com/org/private-repo.git", credentials=GitHubCredentials.load("my-github-credentials-block") ) flow.from_source(source=source, entrypoint="my_file.py:my_flow").deploy( name="private-github-deploy", work_pool_name="my_pool", ) ``` Alternatively, if you use a `prefect.yaml` file to create the deployment, reference the GitHub Credentials block in the `pull` step: ```yaml theme={null} pull: - prefect.deployments.steps.git_clone: repository: https://github.com/org/repo.git credentials: "{{ prefect.blocks.github-credentials.my-github-credentials-block }}" ``` ### Interact with a GitHub repository You can use prefect-github to create and retrieve issues and PRs from a repository. 
Here's an example of adding a star to a GitHub repository: ```python theme={null} from prefect import flow from prefect_github import GitHubCredentials from prefect_github.repository import query_repository from prefect_github.mutations import add_star_starrable @flow() def github_add_star_flow(): github_credentials = GitHubCredentials.load("github-token") repository_id = query_repository( "PrefectHQ", "Prefect", github_credentials=github_credentials, return_fields="id" )["id"] starrable = add_star_starrable( repository_id, github_credentials ) return starrable if __name__ == "__main__": github_add_star_flow() ``` ## Resources For assistance using GitHub, consult the [GitHub documentation](https://docs.github.com). Refer to the `prefect-github` [SDK documentation](/integrations/prefect-github/api-ref/prefect_github-credentials) to explore all the capabilities of the `prefect-github` library. # credentials Source: https://docs.prefect.io/integrations/prefect-gitlab/api-ref/prefect_gitlab-credentials # `prefect_gitlab.credentials` Module used to enable authenticated interactions with GitLab ## Classes ### `GitLabCredentials` Store a GitLab personal access token to interact with private GitLab repositories. **Attributes:** * `token`: The personal access token to authenticate with GitLab. * `url`: URL to self-hosted GitLab instances. **Examples:** Load stored GitLab credentials: ```python theme={null} from prefect_gitlab import GitLabCredentials gitlab_credentials_block = GitLabCredentials.load("BLOCK_NAME") ``` **Methods:** #### `format_git_credentials` ```python theme={null} format_git_credentials(self, url: str) -> str ``` Format and return the full git URL with GitLab credentials embedded. 
Handles both personal access tokens and deploy tokens correctly:

* Personal access tokens: prefixed with "oauth2:"
* Deploy tokens (username:token format): used as-is
* Already prefixed tokens: not double-prefixed

**Args:**

* `url`: Repository URL (e.g., "https://gitlab.com/org/repo.git")

**Returns:**

* Complete URL with credentials embedded

**Raises:**

* `ValueError`: If the token is not configured

#### `get_client`

```python theme={null}
get_client(self) -> Gitlab
```

Gets an authenticated GitLab client.

**Returns:**

* An authenticated GitLab client.

# repositories

Source: https://docs.prefect.io/integrations/prefect-gitlab/api-ref/prefect_gitlab-repositories

# `prefect_gitlab.repositories`

Integrations with GitLab. The `GitLabRepository` class in this collection is a storage block that lets Prefect agents pull Prefect flow code from GitLab repositories. The `GitLabRepository` block is ideally configured via the Prefect UI, but can also be used in Python as the following examples demonstrate.

Examples:

```python theme={null}
from prefect_gitlab.repositories import GitLabRepository

# public GitLab repository
public_gitlab_block = GitLabRepository(
    repository="https://gitlab.com/testing/my-repository.git"
)
public_gitlab_block.save("my-gitlab-block")

# specific branch or tag of a GitLab repository
branch_gitlab_block = GitLabRepository(
    reference="branch-or-tag-name",
    repository="https://gitlab.com/testing/my-repository.git"
)
branch_gitlab_block.save("my-gitlab-branch-block")

# private GitLab repository
private_gitlab_block = GitLabRepository(
    repository="https://gitlab.com/testing/my-repository.git",
    access_token="YOUR_GITLAB_PERSONAL_ACCESS_TOKEN"
)
private_gitlab_block.save("my-private-gitlab-block")
```

## Classes

### `GitLabRepository`

Interact with files stored in GitLab repositories. An accessible installation of git is required for this block to function properly.
**Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones a GitLab project specified in `from_path` to the provided `local_path`; defaults to cloning the repository reference configured on the Block to the present working directory. Async version. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Clones a GitLab project specified in `from_path` to the provided `local_path`; defaults to cloning the repository reference configured on the Block to the present working directory. **Args:** * `from_path`: If provided, interpreted as a subdirectory of the underlying repository that will be copied to the provided local path. * `local_path`: A local path to clone to; defaults to present working directory. # prefect-gitlab Source: https://docs.prefect.io/integrations/prefect-gitlab/index The prefect-gitlab library makes it easy to interact with GitLab repositories and credentials. ## Getting started ### Prerequisites * A [GitLab account](https://gitlab.com/). ### Install `prefect-gitlab` The following command will install a version of `prefect-gitlab` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[gitlab]" ``` Upgrade to the latest versions of `prefect` and `prefect-gitlab`: ```bash theme={null} pip install -U "prefect[gitlab]" ``` ### Register newly installed block types Register the block types in the `prefect-gitlab` module to make them available for use. 
```bash theme={null} prefect block register -m prefect_gitlab ``` ## Examples In the examples below, you create blocks with Python code. Alternatively, blocks can be created through the Prefect UI. ## Store deployment flow code in a private GitLab repository To create a deployment where the flow code is stored in a private GitLab repository, you can use the `GitLabCredentials` block. A deployment can use flow code stored in a GitLab repository without using this library in either of the following cases: * The repository is public * The deployment uses a [Secret block](https://docs.prefect.io/latest/develop/blocks/) to store the token Code to create a GitLab Credentials block: ```python theme={null} from prefect_gitlab import GitLabCredentials gitlab_credentials_block = GitLabCredentials(token="my_token") gitlab_credentials_block.save(name="my-gitlab-credentials-block") ``` ### Access flow code stored in a private GitLab repository in a deployment Use the credentials block you created above to pass the GitLab access token during deployment creation. The code below assumes there's flow code in your private GitLab repository. 
```python theme={null}
from prefect import flow
from prefect.runner.storage import GitRepository
from prefect_gitlab import GitLabCredentials

if __name__ == "__main__":
    source = GitRepository(
        url="https://gitlab.com/org/private-repo.git",
        credentials=GitLabCredentials.load("my-gitlab-credentials-block")
    )

    flow.from_source(
        source=source,
        entrypoint="my_file.py:my_flow",
    ).deploy(
        name="private-gitlab-deploy",
        work_pool_name="my_pool",
    )
```

Alternatively, if you use a `prefect.yaml` file to create the deployment, reference the GitLab Credentials block in the `pull` step:

```yaml theme={null}
pull:
    - prefect.deployments.steps.git_clone:
        repository: https://gitlab.com/org/repo.git
        credentials: "{{ prefect.blocks.gitlab-credentials.my-gitlab-credentials-block }}"
```

### Interact with a GitLab repository

The code below shows how to reference a particular branch or tag of a GitLab repository.

```python theme={null}
from prefect_gitlab import GitLabRepository

def save_private_gitlab_block():
    private_gitlab_block = GitLabRepository(
        repository="https://gitlab.com/testing/my-repository.git",
        access_token="YOUR_GITLAB_PERSONAL_ACCESS_TOKEN",
        reference="branch-or-tag-name",
    )
    private_gitlab_block.save("my-private-gitlab-block")

if __name__ == "__main__":
    save_private_gitlab_block()
```

Exclude the `access_token` field if the repository is public and exclude the `reference` field to use the default branch. Use the newly created block to interact with the GitLab repository. For example, download the repository contents with the `.get_directory()` method like this:

```python theme={null}
from prefect_gitlab.repositories import GitLabRepository

def fetch_repo():
    private_gitlab_block = GitLabRepository.load("my-private-gitlab-block")
    private_gitlab_block.get_directory()

if __name__ == "__main__":
    fetch_repo()
```

## Resources

For assistance using GitLab, consult the [GitLab documentation](https://docs.gitlab.com).
Refer to the `prefect-gitlab` [SDK documentation](/integrations/prefect-gitlab/api-ref/prefect_gitlab-credentials) to explore all the capabilities of the `prefect-gitlab` library.

# credentials

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-credentials

# `prefect_kubernetes.credentials`

Module for defining Kubernetes credential handling and client generation.

## Classes

### `KubernetesClusterConfig`

Stores configuration for interaction with Kubernetes clusters. See `from_file` for creation.

**Attributes:**

* `config`: The entire loaded YAML contents of a kubectl config file
* `context_name`: The name of the kubectl context to use

**Methods:**

#### `configure_client`

```python theme={null}
configure_client(self) -> None
```

Activates this cluster configuration by loading the configuration into the Kubernetes Python client. After calling this, Kubernetes API clients can use this config's context.

#### `from_file`

```python theme={null}
from_file(cls: Type[Self], path: Optional[Path] = None, context_name: Optional[str] = None) -> Self
```

Create a cluster config from a Kubernetes config file. By default, the current context in the default Kubernetes config file will be used. An alternative file or context may be specified. The entire config file will be loaded and stored.

#### `get_api_client`

```python theme={null}
get_api_client(self) -> 'ApiClient'
```

Returns a Kubernetes API client for this cluster config.

#### `parse_yaml_config`

```python theme={null}
parse_yaml_config(cls, value)
```

### `KubernetesCredentials`

Credentials block for generating configured Kubernetes API clients.

**Attributes:**

* `cluster_config`: A `KubernetesClusterConfig` block holding a JSON kube config for a specific Kubernetes context.
**Methods:**

#### `get_client`

```python theme={null}
get_client(self, client_type: Literal['apps', 'batch', 'core', 'custom_objects'], configuration: Optional[Configuration] = None) -> AsyncGenerator[KubernetesClient, None]
```

Convenience method for retrieving a Kubernetes API client for the specified resource type.

**Args:**

* `client_type`: The resource-specific type of Kubernetes client to retrieve.

#### `get_resource_specific_client`

```python theme={null}
get_resource_specific_client(self, client_type: str, api_client: ApiClient) -> Union[AppsV1Api, BatchV1Api, CoreV1Api]
```

Utility function for configuring a generic Kubernetes client. It will attempt to connect to a Kubernetes cluster in three steps, with the first successful connection attempt becoming the mode of communication with a cluster:

1. It will first attempt to use a `KubernetesCredentials` block's `cluster_config` to configure a client using `KubernetesClusterConfig.configure_client`.
2. Attempt in-cluster connection (will only work when running on a pod).
3. Attempt out-of-cluster connection using the default location for a kube config file.

**Args:**

* `client_type`: The Kubernetes API client type for interacting with specific Kubernetes resources.

**Returns:**

* An authenticated, resource-specific Kubernetes Client.

**Raises:**

* `ValueError`: If `client_type` is not a valid Kubernetes API client type.

# custom_objects

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-custom_objects

# `prefect_kubernetes.custom_objects`

## Functions

### `create_namespaced_custom_object`

```python theme={null}
create_namespaced_custom_object(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, body: Dict[str, Any], namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for creating a namespaced custom object.
**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `body`: A Dict containing the custom resource object's specification.
* `namespace`: The Kubernetes namespace to create the custom object in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* object containing the custom resource created by this task.

### `delete_namespaced_custom_object`

```python theme={null}
delete_namespaced_custom_object(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, name: str, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for deleting a namespaced custom object.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `name`: The name of a custom object to delete.
* `namespace`: The Kubernetes namespace to delete the custom object from.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* object containing the custom resource deleted by this task.

### `get_namespaced_custom_object`

```python theme={null}
get_namespaced_custom_object(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, name: str, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for reading a namespaced Kubernetes custom object.
**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `name`: The name of a custom object to read.
* `namespace`: The Kubernetes namespace the custom resource is in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Raises:**

* `ValueError`: if `name` is `None`.

**Returns:**

* object containing the custom resource specification.

### `get_namespaced_custom_object_status`

```python theme={null}
get_namespaced_custom_object_status(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, name: str, namespace: str = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for fetching the status of a namespaced custom object.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `name`: The name of a custom object to read.
* `namespace`: The Kubernetes namespace the custom resource is in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* object containing the custom object specification with status.

### `list_namespaced_custom_object`

```python theme={null}
list_namespaced_custom_object(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, namespace: str = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for listing namespaced custom objects.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `namespace`: The Kubernetes namespace to list custom resources for.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* object containing a list of custom resources.

### `patch_namespaced_custom_object`

```python theme={null}
patch_namespaced_custom_object(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, name: str, body: Dict[str, Any], namespace: str = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for patching a namespaced custom resource.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `name`: The name of a custom object to patch.
* `body`: A Dict containing the custom resource object's patch.
* `namespace`: The custom resource's Kubernetes namespace.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Raises:**

* `ValueError`: if `body` is `None`.

**Returns:**

* object containing the custom resource specification after the patch gets applied.

### `replace_namespaced_custom_object`

```python theme={null}
replace_namespaced_custom_object(kubernetes_credentials: KubernetesCredentials, group: str, version: str, plural: str, name: str, body: Dict[str, Any], namespace: str = 'default', **kube_kwargs: Dict[str, Any]) -> object
```

Task for replacing a namespaced custom resource.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `group`: The custom resource object's group
* `version`: The custom resource object's version
* `plural`: The custom resource object's plural
* `name`: The name of a custom object to replace.
* `body`: A Dict containing the custom resource object's specification.
* `namespace`: The custom resource's Kubernetes namespace.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Raises:**

* `ValueError`: if `body` is `None`.

**Returns:**

* object containing the custom resource specification after the replacement.

# deployments

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-deployments

# `prefect_kubernetes.deployments`

Module for interacting with Kubernetes deployments from Prefect flows.

## Functions

### `create_namespaced_deployment`

```python theme={null}
create_namespaced_deployment(kubernetes_credentials: KubernetesCredentials, new_deployment: V1Deployment, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Deployment
```

Create a Kubernetes deployment in a given namespace.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients.
* `new_deployment`: A Kubernetes `V1Deployment` specification.
* `namespace`: The Kubernetes namespace to create this deployment in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API.

**Returns:**

* A Kubernetes `V1Deployment` object.

### `delete_namespaced_deployment`

```python theme={null}
delete_namespaced_deployment(kubernetes_credentials: KubernetesCredentials, deployment_name: str, delete_options: Optional[V1DeleteOptions] = None, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Deployment
```

Delete a Kubernetes deployment in a given namespace.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients.
* `deployment_name`: The name of the deployment to delete. * `delete_options`: A Kubernetes `V1DeleteOptions` object. * `namespace`: The Kubernetes namespace to delete this deployment from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Deployment` object. ### `list_namespaced_deployment` ```python theme={null} list_namespaced_deployment(kubernetes_credentials: KubernetesCredentials, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1DeploymentList ``` List all deployments in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `namespace`: The Kubernetes namespace to list deployments from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1DeploymentList` object. ### `patch_namespaced_deployment` ```python theme={null} patch_namespaced_deployment(kubernetes_credentials: KubernetesCredentials, deployment_name: str, deployment_updates: V1Deployment, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Deployment ``` Patch a Kubernetes deployment in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `deployment_name`: The name of the deployment to patch. * `deployment_updates`: A Kubernetes `V1Deployment` object. * `namespace`: The Kubernetes namespace to patch this deployment in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Deployment` object. ### `read_namespaced_deployment` ```python theme={null} read_namespaced_deployment(kubernetes_credentials: KubernetesCredentials, deployment_name: str, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Deployment ``` Read information on a Kubernetes deployment in a given namespace. 
**Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `deployment_name`: The name of the deployment to read. * `namespace`: The Kubernetes namespace to read this deployment from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Deployment` object. ### `replace_namespaced_deployment` ```python theme={null} replace_namespaced_deployment(kubernetes_credentials: KubernetesCredentials, deployment_name: str, new_deployment: V1Deployment, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Deployment ``` Replace a Kubernetes deployment in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `deployment_name`: The name of the deployment to replace. * `new_deployment`: A Kubernetes `V1Deployment` object. * `namespace`: The Kubernetes namespace to replace this deployment in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Deployment` object. # diagnostics Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-diagnostics # `prefect_kubernetes.diagnostics` Kubernetes pod failure diagnostics. Pattern-matches pod status into structured failure diagnoses with actionable resolution hints. Designed to consume the kopf `status` parameter directly — no extra K8s API calls required. ## Functions ### `diagnose_k8s_pod` ```python theme={null} diagnose_k8s_pod(status: dict[str, Any]) -> InfrastructureDiagnosis | None ``` Inspect a pod's `status` dict and return a diagnosis for known failure conditions. Returns `None` when the pod is healthy or in a state that does not require user intervention. **Args:** * `status`: The `status` field from a Kubernetes pod object (the same dict kopf passes as the *status* parameter). 
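To make the pattern-matching idea concrete, here is a minimal, self-contained sketch of the kind of status inspection described above. It is not the library's implementation: the plain dict it returns is only a stand-in for `InfrastructureDiagnosis`, and the two failure signals it recognizes are illustrative examples.

```python
from typing import Any, Optional


def sketch_diagnose(status: dict[str, Any]) -> Optional[dict[str, str]]:
    """Illustrative only: map common container failure signals in a pod
    `status` dict to a small diagnosis dict with a resolution hint."""
    for container in status.get("containerStatuses", []):
        state = container.get("state") or {}
        # A container stuck waiting on its image is a classic actionable failure.
        waiting = state.get("waiting") or {}
        if waiting.get("reason") in ("ImagePullBackOff", "ErrImagePull"):
            return {
                "level": "error",
                "reason": waiting["reason"],
                "hint": "Check the image name and registry credentials.",
            }
        # A container killed by the OOM killer points at memory limits.
        terminated = state.get("terminated") or {}
        if terminated.get("reason") == "OOMKilled":
            return {
                "level": "error",
                "reason": "OOMKilled",
                "hint": "Increase the container memory limit.",
            }
    return None  # healthy, or no failure this sketch recognizes
```

Like the real function, the sketch works entirely from the `status` dict that kopf hands to its handlers, so no extra Kubernetes API calls are needed.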
## Classes

### `DiagnosisLevel`

Severity level for an infrastructure diagnosis.

### `InfrastructureDiagnosis`

A structured diagnosis of a Kubernetes pod failure.

# exceptions

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-exceptions

# `prefect_kubernetes.exceptions`

Module to define common exceptions within `prefect_kubernetes`.

## Classes

### `KubernetesJobDefinitionError`

An exception for when a Kubernetes job definition is invalid.

### `KubernetesJobFailedError`

An exception for when a Kubernetes job fails.

### `KubernetesResourceNotFoundError`

An exception for when a Kubernetes resource cannot be found by a client.

### `KubernetesJobTimeoutError`

An exception for when a Kubernetes job times out.

# __init__

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-experimental-__init__

# `prefect_kubernetes.experimental`

*This module is empty or contains only private/internal implementations.*

# decorators

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-experimental-decorators

# `prefect_kubernetes.experimental.decorators`

## Functions

### `kubernetes`

```python theme={null}
kubernetes(work_pool: str, include_files: Sequence[str] | None = None, **job_variables: Any) -> Callable[[Flow[P, R]], InfrastructureBoundFlow[P, R]]
```

Decorator that binds execution of a flow to a Kubernetes work pool.

**Args:**

* `work_pool`: The name of the Kubernetes work pool to use
* `include_files`: Optional sequence of file patterns to include in the bundle. Patterns are relative to the flow file location. Supports glob patterns (e.g., `*.yaml`, `data/**/*.csv`). Files matching these patterns will be bundled and available in the remote execution environment.
* `**job_variables`: Additional job variables to use for infrastructure configuration # flows Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-flows # `prefect_kubernetes.flows` A module to define flows interacting with Kubernetes resources. ## Functions ### `run_namespaced_job` ```python theme={null} run_namespaced_job(kubernetes_job: KubernetesJob, print_func: Optional[Callable] = None) -> Dict[str, Any] ``` Flow for running a namespaced Kubernetes job. **Args:** * `kubernetes_job`: The `KubernetesJob` block that specifies the job to run. * `print_func`: A function to print the logs from the job pods. **Returns:** * A dict of logs from each pod in the job, e.g. `{'pod_name': 'pod_log_str'}`. **Raises:** * `RuntimeError`: If the created Kubernetes job attains a failed status. Example: ```python theme={null} from prefect_kubernetes import KubernetesJob, run_namespaced_job from prefect_kubernetes.credentials import KubernetesCredentials run_namespaced_job( kubernetes_job=KubernetesJob.from_yaml_file( credentials=KubernetesCredentials.load("k8s-creds"), manifest_path="path/to/job.yaml", ) ) ``` ### `run_namespaced_job_async` ```python theme={null} run_namespaced_job_async(kubernetes_job: KubernetesJob, print_func: Optional[Callable] = None) -> Dict[str, Any] ``` Flow for running a namespaced Kubernetes job. **Args:** * `kubernetes_job`: The `KubernetesJob` block that specifies the job to run. * `print_func`: A function to print the logs from the job pods. **Returns:** * A dict of logs from each pod in the job, e.g. `{'pod_name': 'pod_log_str'}`. **Raises:** * `RuntimeError`: If the created Kubernetes job attains a failed status. 
Example:

```python theme={null}
import asyncio

from prefect_kubernetes import KubernetesJob
from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.flows import run_namespaced_job_async

asyncio.run(
    run_namespaced_job_async(
        kubernetes_job=KubernetesJob.from_yaml_file(
            credentials=KubernetesCredentials.load("k8s-creds"),
            manifest_path="path/to/job.yaml",
        )
    )
)
```

# jobs

Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-jobs

# `prefect_kubernetes.jobs`

Module to define tasks for interacting with Kubernetes jobs.

## Functions

### `create_namespaced_job`

```python theme={null}
create_namespaced_job(kubernetes_credentials: KubernetesCredentials, new_job: Union[V1Job, Dict[str, Any]], namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Job
```

Task for creating a namespaced Kubernetes job.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `new_job`: A Kubernetes `V1Job` specification.
* `namespace`: The Kubernetes namespace to create this job in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* A Kubernetes `V1Job` object.

### `delete_namespaced_job`

```python theme={null}
delete_namespaced_job(kubernetes_credentials: KubernetesCredentials, job_name: str, delete_options: Optional[V1DeleteOptions] = None, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Status
```

Task for deleting a namespaced Kubernetes job.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `job_name`: The name of a job to delete.
* `delete_options`: A Kubernetes `V1DeleteOptions` object.
* `namespace`: The Kubernetes namespace to delete this job in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).
**Returns:**

* A Kubernetes `V1Status` object.

### `list_namespaced_job`

```python theme={null}
list_namespaced_job(kubernetes_credentials: KubernetesCredentials, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1JobList
```

Task for listing namespaced Kubernetes jobs.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `namespace`: The Kubernetes namespace to list jobs from.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* A Kubernetes `V1JobList` object.

### `patch_namespaced_job`

```python theme={null}
patch_namespaced_job(kubernetes_credentials: KubernetesCredentials, job_name: str, job_updates: V1Job, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Job
```

Task for patching a namespaced Kubernetes job.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `job_name`: The name of a job to patch.
* `job_updates`: A Kubernetes `V1Job` specification.
* `namespace`: The Kubernetes namespace to patch this job in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Raises:**

* `ValueError`: if `job_name` is `None`.

**Returns:**

* A Kubernetes `V1Job` object.

### `read_namespaced_job`

```python theme={null}
read_namespaced_job(kubernetes_credentials: KubernetesCredentials, job_name: str, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Job
```

Task for reading a namespaced Kubernetes job.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `job_name`: The name of a job to read.
* `namespace`: The Kubernetes namespace to read this job in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Raises:**

* `ValueError`: if `job_name` is `None`.

**Returns:**

* A Kubernetes `V1Job` object.

### `replace_namespaced_job`

```python theme={null}
replace_namespaced_job(kubernetes_credentials: KubernetesCredentials, job_name: str, new_job: V1Job, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Job
```

Task for replacing a namespaced Kubernetes job.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `job_name`: The name of a job to replace.
* `new_job`: A Kubernetes `V1Job` specification.
* `namespace`: The Kubernetes namespace to replace this job in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* A Kubernetes `V1Job` object.

### `read_namespaced_job_status`

```python theme={null}
read_namespaced_job_status(kubernetes_credentials: KubernetesCredentials, job_name: str, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Job
```

Task for fetching the status of a namespaced Kubernetes job.

**Args:**

* `kubernetes_credentials`: `KubernetesCredentials` block holding authentication needed to generate the required API client.
* `job_name`: The name of a job to fetch status for.
* `namespace`: The Kubernetes namespace to fetch the status of the job in.
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API (e.g. `{"pretty": "...", "dry_run": "..."}`).

**Returns:**

* A Kubernetes `V1JobStatus` object.

## Classes

### `KubernetesJobRun`

A container representing a run of a Kubernetes job.

**Methods:**

#### `afetch_result`

```python theme={null}
afetch_result(self) -> Dict[str, Any]
```

Async implementation: fetch the results of the job.

**Returns:**

* The logs from each of the pods in the job.
**Raises:** * `ValueError`: If this method is called when the job has a non-terminal state. #### `await_for_completion` ```python theme={null} await_for_completion(self, print_func: Optional[Callable] = None) ``` Async implementation: waits for the job to complete. If the job has `delete_after_completion` set to `True`, the job will be deleted if it is observed by this method to enter a completed state. **Raises:** * `RuntimeError`: If the Kubernetes job fails. * `KubernetesJobTimeoutError`: If the Kubernetes job times out. #### `fetch_result` ```python theme={null} fetch_result(self) -> Dict[str, Any] ``` Fetch the results of the job. **Returns:** * The logs from each of the pods in the job. **Raises:** * `ValueError`: If this method is called when the job has a non-terminal state. #### `v1_job_model` ```python theme={null} v1_job_model(self) -> dict[str, Any] ``` #### `wait_for_completion` ```python theme={null} wait_for_completion(self, print_func: Optional[Callable] = None) ``` Waits for the job to complete. If the job has `delete_after_completion` set to `True`, the job will be deleted if it is observed by this method to enter a completed state. **Raises:** * `RuntimeError`: If the Kubernetes job fails. * `KubernetesJobTimeoutError`: If the Kubernetes job times out. ### `KubernetesJob` A block representing a Kubernetes job configuration. **Methods:** #### `atrigger` ```python theme={null} atrigger(self) ``` Async implementation: create a Kubernetes job and return a `KubernetesJobRun` object. #### `from_yaml_file` ```python theme={null} from_yaml_file(cls: Type[Self], manifest_path: Union[Path, str], **kwargs) -> Self ``` Create a `KubernetesJob` from a YAML file. **Args:** * `manifest_path`: The YAML file to create the `KubernetesJob` from. **Returns:** * A KubernetesJob object. #### `trigger` ```python theme={null} trigger(self) ``` Create a Kubernetes job and return a `KubernetesJobRun` object. 
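The run lifecycle above — `trigger` returns a `KubernetesJobRun`, whose result is only available once the job reaches a terminal state — can be illustrated with a self-contained stand-in. This mimics the documented contract only; the real block polls the Kubernetes API and streams pod logs, and the class and names below are invented for the sketch:

```python
class SketchJobRun:
    """Illustrative stand-in for the KubernetesJobRun contract."""

    def __init__(self) -> None:
        self._completed = False
        self._logs: dict[str, str] = {}

    def wait_for_completion(self) -> None:
        # The real method polls the Kubernetes API until the job reaches a
        # terminal state; here we simply mark the run finished with fake logs.
        self._completed = True
        self._logs = {"example-job-pod": "hello from the pod"}

    def fetch_result(self) -> dict[str, str]:
        # Matches the documented behavior: calling this while the job is in
        # a non-terminal state raises a ValueError.
        if not self._completed:
            raise ValueError("cannot fetch the result of a non-terminal job run")
        return self._logs


run = SketchJobRun()
run.wait_for_completion()
print(run.fetch_result())  # {'example-job-pod': 'hello from the pod'}
```

The dict-of-pod-logs return shape mirrors the `{'pod_name': 'pod_log_str'}` result documented for `run_namespaced_job` above.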
# observer Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-observer # `prefect_kubernetes.observer` ## Functions ### `configure` ```python theme={null} configure(settings: kopf.OperatorSettings, **_) ``` ### `initialize_clients` ```python theme={null} initialize_clients(logger: kopf.Logger, **kwargs: Any) ``` ### `cleanup_fn` ```python theme={null} cleanup_fn(logger: kopf.Logger, **kwargs: Any) ``` ### `start_observer` ```python theme={null} start_observer() ``` Start the observer in a separate thread. ### `stop_observer` ```python theme={null} stop_observer() ``` Stop the observer thread. # pods Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-pods # `prefect_kubernetes.pods` Module for interacting with Kubernetes pods from Prefect flows. ## Functions ### `create_namespaced_pod` ```python theme={null} create_namespaced_pod(kubernetes_credentials: KubernetesCredentials, new_pod: V1Pod, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Pod ``` Create a Kubernetes pod in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `new_pod`: A Kubernetes `V1Pod` specification. * `namespace`: The Kubernetes namespace to create this pod in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Pod` object. ### `delete_namespaced_pod` ```python theme={null} delete_namespaced_pod(kubernetes_credentials: KubernetesCredentials, pod_name: str, delete_options: Optional[V1DeleteOptions] = None, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Pod ``` Delete a Kubernetes pod in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `pod_name`: The name of the pod to delete. 
* `delete_options`: A Kubernetes `V1DeleteOptions` object. * `namespace`: The Kubernetes namespace to delete this pod from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Pod` object. ### `list_namespaced_pod` ```python theme={null} list_namespaced_pod(kubernetes_credentials: KubernetesCredentials, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1PodList ``` List all pods in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `namespace`: The Kubernetes namespace to list pods from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1PodList` object. ### `patch_namespaced_pod` ```python theme={null} patch_namespaced_pod(kubernetes_credentials: KubernetesCredentials, pod_name: str, pod_updates: V1Pod, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Pod ``` Patch a Kubernetes pod in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `pod_name`: The name of the pod to patch. * `pod_updates`: A Kubernetes `V1Pod` object. * `namespace`: The Kubernetes namespace to patch this pod in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Pod` object. ### `read_namespaced_pod` ```python theme={null} read_namespaced_pod(kubernetes_credentials: KubernetesCredentials, pod_name: str, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Pod ``` Read information on a Kubernetes pod in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `pod_name`: The name of the pod to read. * `namespace`: The Kubernetes namespace to read this pod from. 
* `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Pod` object. ### `read_namespaced_pod_log` ```python theme={null} read_namespaced_pod_log(kubernetes_credentials: KubernetesCredentials, pod_name: str, container: str, namespace: Optional[str] = 'default', print_func: Optional[Callable] = None, **kube_kwargs: Dict[str, Any]) -> Union[str, None] ``` Read logs from a Kubernetes pod in a given namespace. If `print_func` is provided, the logs will be streamed using that function. If the pod is no longer running, logs generated up to that point will be returned. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `pod_name`: The name of the pod to read logs from. * `container`: The name of the container to read logs from. * `namespace`: The Kubernetes namespace to read this pod from. * `print_func`: If provided, it will stream the pod logs by calling `print_func` for every line and returning `None`. If not provided, the current pod logs will be returned immediately. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A string containing the logs from the pod's container. ### `replace_namespaced_pod` ```python theme={null} replace_namespaced_pod(kubernetes_credentials: KubernetesCredentials, pod_name: str, new_pod: V1Pod, namespace: Optional[str] = 'default', **kube_kwargs: Dict[str, Any]) -> V1Pod ``` Replace a Kubernetes pod in a given namespace. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `pod_name`: The name of the pod to replace. * `new_pod`: A Kubernetes `V1Pod` object. * `namespace`: The Kubernetes namespace to replace this pod in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A Kubernetes `V1Pod` object. 
# services Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-services # `prefect_kubernetes.services` Tasks for working with Kubernetes services. ## Functions ### `create_namespaced_service` ```python theme={null} create_namespaced_service(kubernetes_credentials: KubernetesCredentials, new_service: V1Service, namespace: Optional[str] = 'default', **kube_kwargs: Optional[Dict[str, Any]]) -> V1Service ``` Create a namespaced Kubernetes service. **Args:** * `kubernetes_credentials`: A `KubernetesCredentials` block used to generate a `CoreV1Api` client. * `new_service`: A `V1Service` object representing the service to create. * `namespace`: The namespace to create the service in. * `**kube_kwargs`: Additional keyword arguments to pass to the `CoreV1Api` method call. **Returns:** * A `V1Service` representing the created service. ### `delete_namespaced_service` ```python theme={null} delete_namespaced_service(kubernetes_credentials: KubernetesCredentials, service_name: str, delete_options: Optional[V1DeleteOptions] = None, namespace: Optional[str] = 'default', **kube_kwargs: Optional[Dict[str, Any]]) -> V1Service ``` Delete a namespaced Kubernetes service. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `service_name`: The name of the service to delete. * `delete_options`: A `V1DeleteOptions` object representing the options to delete the service with. * `namespace`: The namespace to delete the service from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A `V1Service` representing the deleted service. ### `list_namespaced_service` ```python theme={null} list_namespaced_service(kubernetes_credentials: KubernetesCredentials, namespace: Optional[str] = 'default', **kube_kwargs: Optional[Dict[str, Any]]) -> V1ServiceList ``` List namespaced Kubernetes services. 
**Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `namespace`: The namespace to list services from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A `V1ServiceList` representing the list of services in the given namespace. ### `patch_namespaced_service` ```python theme={null} patch_namespaced_service(kubernetes_credentials: KubernetesCredentials, service_name: str, service_updates: V1Service, namespace: Optional[str] = 'default', **kube_kwargs: Optional[Dict[str, Any]]) -> V1Service ``` Patch a namespaced Kubernetes service. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `service_name`: The name of the service to patch. * `service_updates`: A `V1Service` object representing patches to `service_name`. * `namespace`: The namespace to patch the service in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A `V1Service` representing the patched service. ### `read_namespaced_service` ```python theme={null} read_namespaced_service(kubernetes_credentials: KubernetesCredentials, service_name: str, namespace: Optional[str] = 'default', **kube_kwargs: Optional[Dict[str, Any]]) -> V1Service ``` Read a namespaced Kubernetes service. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `service_name`: The name of the service to read. * `namespace`: The namespace to read the service from. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A `V1Service` object representing the service. 
### `replace_namespaced_service` ```python theme={null} replace_namespaced_service(kubernetes_credentials: KubernetesCredentials, service_name: str, new_service: V1Service, namespace: Optional[str] = 'default', **kube_kwargs: Optional[Dict[str, Any]]) -> V1Service ``` Replace a namespaced Kubernetes service. **Args:** * `kubernetes_credentials`: `KubernetesCredentials` block for creating authenticated Kubernetes API clients. * `service_name`: The name of the service to replace. * `new_service`: A `V1Service` object representing the new service. * `namespace`: The namespace to replace the service in. * `**kube_kwargs`: Optional extra keyword arguments to pass to the Kubernetes API. **Returns:** * A `V1Service` representing the new service. # settings Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-settings # `prefect_kubernetes.settings` ## Classes ### `KubernetesObserverSettings` ### `KubernetesWorkerCreateJobRetrySettings` ### `KubernetesWorkerSettings` ### `KubernetesSettings` # utilities Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-utilities # `prefect_kubernetes.utilities` Utilities for working with the Python Kubernetes API. ## Classes ### `KeepAliveClientRequest` aiohttp only directly implements socket keepalive for incoming connections in its RequestHandler. For client connections, we need to set the keepalive ourselves. Refer to [https://github.com/aio-libs/aiohttp/issues/3904#issuecomment-759205696](https://github.com/aio-libs/aiohttp/issues/3904#issuecomment-759205696) **Methods:** #### `send` ```python theme={null} send(self, conn: Connection) -> ClientResponse ``` # worker Source: https://docs.prefect.io/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-worker # `prefect_kubernetes.worker` Module containing the Kubernetes worker used for executing flow runs as Kubernetes jobs. 
To start a Kubernetes worker, run the following command:

```bash theme={null}
prefect worker start --pool 'my-work-pool' --type kubernetes
```

Replace `my-work-pool` with the name of the work pool you want the worker to poll for flow runs.

### Securing your Prefect Cloud API key

If you are using Prefect Cloud and would like to pass your Prefect Cloud API key to created jobs via a Kubernetes secret, set the `PREFECT_INTEGRATIONS_KUBERNETES_WORKER_CREATE_SECRET_FOR_API_KEY` environment variable before starting your worker:

```bash theme={null}
export PREFECT_INTEGRATIONS_KUBERNETES_WORKER_CREATE_SECRET_FOR_API_KEY="true"
prefect worker start --pool 'my-work-pool' --type kubernetes
```

Note that your worker will need permission to create secrets in the same namespace(s) that Kubernetes jobs are created in to execute flow runs.

### Using a custom Kubernetes job manifest template

The default template used for Kubernetes job manifests looks like this:

```yaml theme={null}
---
apiVersion: batch/v1
kind: Job
metadata:
  annotations: "{{ annotations }}"
  labels: "{{ labels }}"
  namespace: "{{ namespace }}"
  generateName: "{{ name }}-"
spec:
  ttlSecondsAfterFinished: "{{ finished_job_ttl }}"
  template:
    spec:
      parallelism: 1
      completions: 1
      restartPolicy: Never
      serviceAccountName: "{{ service_account_name }}"
      containers:
        - name: prefect-job
          env: "{{ env }}"
          image: "{{ image }}"
          imagePullPolicy: "{{ image_pull_policy }}"
          args: "{{ command }}"
```

Each value enclosed in `{{ }}` is a placeholder that will be replaced with a value at runtime. The values that can be used as placeholders are defined by the `variables` schema defined in the base job template.

The default job manifest and available variables can be customized on a per-work-pool basis. These customizations can be made via the Prefect UI when creating or editing a work pool.
For example, if you wanted to allow custom memory requests for a Kubernetes work pool, you could update the job manifest template to look like this:

```yaml theme={null}
---
apiVersion: batch/v1
kind: Job
metadata:
  annotations: "{{ annotations }}"
  labels: "{{ labels }}"
  namespace: "{{ namespace }}"
  generateName: "{{ name }}-"
spec:
  ttlSecondsAfterFinished: "{{ finished_job_ttl }}"
  template:
    spec:
      parallelism: 1
      completions: 1
      restartPolicy: Never
      serviceAccountName: "{{ service_account_name }}"
      containers:
        - name: prefect-job
          env: "{{ env }}"
          image: "{{ image }}"
          imagePullPolicy: "{{ image_pull_policy }}"
          args: "{{ command }}"
          resources:
            requests:
              memory: "{{ memory }}Mi"
            limits:
              memory: 128Mi
```

In this new template, the `memory` placeholder allows customization of the memory allocated to Kubernetes jobs created by workers in this work pool, but the limit is hard-coded and cannot be changed by deployments.

For more information about work pools and workers, check out the [Prefect docs](https://docs.prefect.io/concepts/work-pools/).

## Classes

### `KubernetesImagePullPolicy`

Enum representing the image pull policy options for a Kubernetes job.

### `KubernetesWorkerJobConfiguration`

Configuration class used by the Kubernetes worker.

An instance of this class is passed to the Kubernetes worker's `run` method for each flow run. It contains all of the information necessary to execute the flow run as a Kubernetes job.

**Attributes:**

* `name`: The name to give to created Kubernetes job.
* `command`: The command executed in created Kubernetes jobs to kick off flow run execution.
* `env`: The environment variables to set in created Kubernetes jobs.
* `labels`: The labels to set on created Kubernetes jobs.
* `namespace`: The Kubernetes namespace to create Kubernetes jobs in.
* `job_manifest`: The Kubernetes job manifest to use to create Kubernetes jobs.
* `cluster_config`: The Kubernetes cluster configuration to use for authentication to a Kubernetes cluster.
* `job_watch_timeout_seconds`: The number of seconds to wait for the job to complete before timing out. If `None`, the worker will wait indefinitely. * `pod_watch_timeout_seconds`: The number of seconds to wait for the pod to complete before timing out. * `stream_output`: Whether or not to stream the job's output. **Methods:** #### `get_environment_variable_value` ```python theme={null} get_environment_variable_value(self, name: str) -> str | None ``` Returns the value of an environment variable from the job manifest. #### `prepare_for_flow_run` ```python theme={null} prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None, worker_id: 'UUID | None' = None) ``` Prepares the job configuration for a flow run. Ensures that necessary values are present in the job manifest and that the job manifest is valid. **Args:** * `flow_run`: The flow run to prepare the job configuration for * `deployment`: The deployment associated with the flow run used for preparation. * `flow`: The flow associated with the flow run used for preparation. * `work_pool`: The work pool associated with the flow run used for preparation. * `worker_name`: The name of the worker used for preparation. ### `KubernetesWorkerVariables` Default variables for the Kubernetes worker. The schema for this class is used to populate the `variables` section of the default base job template. ### `KubernetesWorkerResult` Contains information about the final state of a completed process ### `KubernetesWorker` Prefect worker that executes flow runs within Kubernetes Jobs. **Methods:** #### `kill_infrastructure` ```python theme={null} kill_infrastructure(self, infrastructure_pid: str, configuration: KubernetesWorkerJobConfiguration, grace_seconds: int = 30) -> None ``` Kill a Kubernetes job by deleting it. 
**Args:** * `infrastructure_pid`: The infrastructure identifier in format "namespace:job\_name". * `configuration`: The job configuration used to connect to the cluster. * `grace_seconds`: Time to allow for graceful shutdown before force killing. **Raises:** * `InfrastructureNotFound`: If the job doesn't exist. * `InfrastructureNotAvailable`: If unable to connect to the cluster. #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: KubernetesWorkerJobConfiguration, task_status: anyio.abc.TaskStatus[int] | None = None) -> KubernetesWorkerResult ``` Executes a flow run within a Kubernetes Job and waits for the flow run to complete. **Args:** * `flow_run`: The flow run to execute * `configuration`: The configuration to use when executing the flow run. * `task_status`: The task status object for the current flow run. If provided, the task will be marked as started. **Returns:** * A result object containing information about the final state of the flow run #### `teardown` ```python theme={null} teardown(self, *exc_info: Any) ``` # prefect-kubernetes Source: https://docs.prefect.io/integrations/prefect-kubernetes/index `prefect-kubernetes` contains Prefect tasks, flows, and blocks enabling orchestration, observation and management of Kubernetes resources. This library is most commonly used for installation with a Kubernetes worker. See the [Prefect docs on deploying with Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes) to learn how to create and run deployments in Kubernetes. Prefect provides a Helm chart for deploying a worker, a self-hosted Prefect server instance, and other resources to a Kubernetes cluster. See the [Prefect Helm chart](https://github.com/PrefectHQ/prefect-helm) for more information. ## Kubernetes Worker The Kubernetes worker executes flow runs as Kubernetes Jobs. When you create a Kubernetes work pool, you can customize the base job template to control how jobs are created. 
**Important**: When customizing a work pool's base job template, variables defined in the `variables` section must be explicitly referenced in `job_configuration` using `{{ variable_name }}` syntax to take effect. If you add or modify a variable in `variables` but don't reference it in `job_configuration`, its value (including defaults) will not be passed to the worker. For example, if you set a default for `cluster_config` in `variables`, ensure your `job_configuration` includes `"cluster_config": "{{ cluster_config }}"`. See the [Kubernetes deployment guide](/v3/how-to-guides/deployment_infra/kubernetes) for complete setup instructions. ## Getting started ### Prerequisites * [Kubernetes installed](https://kubernetes.io/). ### Install `prefect-kubernetes` The following command will install a version of `prefect-kubernetes` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[kubernetes]" ``` Upgrade to the latest versions of `prefect` and `prefect-kubernetes`: ```bash theme={null} pip install -U "prefect[kubernetes]" ``` ### Register newly installed block types Register the block types in the `prefect-kubernetes` module to make them available for use. 
```bash theme={null}
prefect block register -m prefect_kubernetes
```

## Examples

### Use `with_options` to customize options on an existing task or flow

```python theme={null}
from prefect_kubernetes.flows import run_namespaced_job

customized_run_namespaced_job = run_namespaced_job.with_options(
    name="My flow running a Kubernetes Job",
    retries=2,
    retry_delay_seconds=10,
)  # this is now a new flow object that can be called
```

### Specify and run a Kubernetes Job from a YAML file

```python theme={null}
from prefect import flow, get_run_logger
from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.flows import run_namespaced_job  # this is a flow
from prefect_kubernetes.jobs import KubernetesJob

k8s_creds = KubernetesCredentials.load("k8s-creds")

job = KubernetesJob.from_yaml_file(  # or create in the UI with a dict manifest
    credentials=k8s_creds,
    manifest_path="path/to/job.yaml",
)

job.save("my-k8s-job", overwrite=True)

@flow
def kubernetes_orchestrator():
    # run the flow and send logs to the parent flow run's logger
    logger = get_run_logger()
    run_namespaced_job(job, print_func=logger.info)

if __name__ == "__main__":
    kubernetes_orchestrator()
```

As with all Prefect flows and tasks, you can call the underlying function directly if you don't need Prefect features:

```python theme={null}
run_namespaced_job.fn(job, print_func=print)
```

### Generate a resource-specific client from `KubernetesClusterConfig`

```python theme={null}
# with minikube / docker desktop & a valid ~/.kube/config this should ~just work~
from prefect_kubernetes.credentials import KubernetesCredentials, KubernetesClusterConfig

k8s_config = KubernetesClusterConfig.from_file('~/.kube/config')

k8s_credentials = KubernetesCredentials(cluster_config=k8s_config)

with k8s_credentials.get_client("core") as v1_core_client:
    for namespace in v1_core_client.list_namespace().items:
        print(namespace.metadata.name)
```

### List jobs in a namespace

```python theme={null}
from prefect import flow
from prefect_kubernetes.credentials import KubernetesCredentials
from prefect_kubernetes.jobs import list_namespaced_job

@flow
def kubernetes_orchestrator():
    v1_job_list = list_namespaced_job(
        kubernetes_credentials=KubernetesCredentials.load("k8s-creds"),
        namespace="my-namespace",
    )
```

For assistance using Kubernetes, consult the [Kubernetes documentation](https://kubernetes.io/).

Refer to the `prefect-kubernetes` [SDK documentation](/integrations/prefect-kubernetes/api-ref/prefect_kubernetes-credentials) to explore all the capabilities of the `prefect-kubernetes` library.

# context

Source: https://docs.prefect.io/integrations/prefect-ray/api-ref/prefect_ray-context

# `prefect_ray.context`

Contexts to manage Ray clusters and tasks.

## Functions

### `remote_options`

```python theme={null}
remote_options(**new_remote_options: Dict[str, Any]) -> Generator[None, Dict[str, Any], None]
```

Context manager to add keyword arguments to Ray `@remote` calls for task runs.

If contexts are nested, new options are merged with options in the outer context. If a key is present in both, the new option will be used.

**Examples:**

Use 4 CPUs and 2 GPUs for the `process` task:

```python theme={null}
from prefect import flow, task
from prefect_ray.task_runners import RayTaskRunner
from prefect_ray.context import remote_options

@task
def process(x):
    return x + 1

@flow(task_runner=RayTaskRunner())
def my_flow():
    # equivalent to setting @ray.remote(num_cpus=4, num_gpus=2)
    with remote_options(num_cpus=4, num_gpus=2):
        process.submit(42)
```

## Classes

### `RemoteOptionsContext`

The context for Ray remote\_options management.

**Attributes:**

* `current_remote_options`: A set of current remote\_options in the context.

**Methods:**

#### `get`

```python theme={null}
get(cls) -> 'RemoteOptionsContext'
```

Return an empty `RemoteOptionsContext` instead of `None` if no context exists.
# task_runners Source: https://docs.prefect.io/integrations/prefect-ray/api-ref/prefect_ray-task_runners # `prefect_ray.task_runners` Interface and implementations of the Ray Task Runner. [Task Runners](https://docs.prefect.io/latest/develop/task-runners/) in Prefect are responsible for managing the execution of Prefect task runs. Generally speaking, users are not expected to interact with task runners outside of configuring and initializing them for a flow. Example: ```python theme={null} import time from prefect import flow, task @task def shout(number): time.sleep(0.5) print(f"#{number}") @flow def count_to(highest_number): for number in range(highest_number): shout.submit(number) if __name__ == "__main__": count_to(10) # outputs #0 #1 #2 #3 #4 #5 #6 #7 #8 #9 ``` Switching to a `RayTaskRunner`: ```python theme={null} import time from prefect import flow, task from prefect_ray import RayTaskRunner @task def shout(number): time.sleep(0.5) print(f"#{number}") @flow(task_runner=RayTaskRunner) def count_to(highest_number): shout.map(range(highest_number)).wait() if __name__ == "__main__": count_to(10) # outputs #3 #7 #2 #6 #4 #0 #1 #5 #8 #9 ``` ## Classes ### `PrefectRayFuture` **Methods:** #### `add_done_callback` ```python theme={null} add_done_callback(self, fn: Callable[['PrefectRayFuture[R]'], Any]) ``` #### `result` ```python theme={null} result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `wait` ```python theme={null} wait(self, timeout: float | None = None) -> None ``` ### `RayTaskRunner` A parallel task\_runner that submits tasks to `ray`. By default, a temporary Ray cluster is created for the duration of the flow run. Alternatively, if you already have a `ray` instance running, you can provide the connection URL via the `address` kwarg. Args: address (string, optional): Address of a currently running `ray` instance; if one is not provided, a temporary instance will be created. 
init\_kwargs (dict, optional): Additional kwargs to use when calling `ray.init`.

Examples:

Using a temporary local ray cluster:

```python theme={null}
from prefect import flow
from prefect_ray.task_runners import RayTaskRunner

@flow(task_runner=RayTaskRunner())
def my_flow():
    ...
```

Connecting to an existing ray instance:

```python theme={null}
RayTaskRunner(address="ray://:10001")
```

**Methods:**

#### `duplicate`

```python theme={null}
duplicate(self)
```

Return a new instance with the same settings as this one.

#### `map`

```python theme={null}
map(self, task: 'Task[P, Coroutine[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectRayFuture[R]]
```

#### `map`

```python theme={null}
map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectRayFuture[R]]
```

#### `map`

```python theme={null}
map(self, task: 'Task[P, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectRayFuture[R]]
```

#### `submit`

```python theme={null}
submit(self, task: 'Task[P, Coroutine[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectRayFuture[R]
```

#### `submit`

```python theme={null}
submit(self, task: 'Task[P, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectRayFuture[R]
```

#### `submit`

```python theme={null}
submit(self, task: Task[P, R], parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None)
```

# prefect-ray

Source: https://docs.prefect.io/integrations/prefect-ray/index

Accelerate your workflows by running tasks in parallel with Ray
[Ray](https://docs.ray.io/en/latest/index.html) can run your tasks in parallel by distributing them over multiple machines. The `prefect-ray` integration makes it easy to accelerate your flow runs with Ray.

## Install `prefect-ray`

The following command will install a version of `prefect-ray` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well.

```bash theme={null}
pip install "prefect[ray]"
```

Upgrade to the latest versions of `prefect` and `prefect-ray`:

```bash theme={null}
pip install -U "prefect[ray]"
```

**Ray limitations**

There are a few limitations with Ray:

* Ray has [experimental](https://docs.ray.io/en/latest/ray-overview/installation.html#install-nightlies) support for Python 3.13, but Prefect [does *not* currently support](https://github.com/PrefectHQ/prefect/issues/16910) Python 3.13.
* Ray does not support non-x86/64 architectures such as ARM/M1 processors when installed from `pip` alone, so it is skipped during installation of Prefect on those platforms. It is possible to manually install the blocking component with `conda`. See the [Ray documentation](https://docs.ray.io/en/latest/ray-overview/installation.html#m1-mac-apple-silicon-support) for instructions.
* Ray support for Windows is currently in beta. See the [Ray installation documentation](https://docs.ray.io/en/latest/ray-overview/installation.html) for further compatibility information.

## Run tasks on Ray

The `RayTaskRunner` is a [Prefect task runner](https://docs.prefect.io/develop/task-runners/) that submits tasks to [Ray](https://www.ray.io/) for parallel execution.

By default, a temporary Ray instance is created for the duration of the flow run.
For example, this flow counts to three in parallel: ```python theme={null} import time from prefect import flow, task from prefect_ray import RayTaskRunner @task def shout(number): time.sleep(0.5) print(f"#{number}") @flow(task_runner=RayTaskRunner) def count_to(highest_number): shout.map(range(highest_number)).wait() if __name__ == "__main__": count_to(10) # outputs #3 #7 #2 #6 #4 #0 #1 #5 #8 #9 ``` If you already have a Ray instance running, you can provide the connection URL via an `address` argument. To configure your flow to use the `RayTaskRunner`: 1. Make sure the `prefect-ray` collection is installed as described earlier: `pip install prefect-ray`. 2. In your flow code, import `RayTaskRunner` from `prefect_ray.task_runners`. 3. Assign it as the task runner when the flow is defined using the `task_runner=RayTaskRunner` argument. For example, this flow uses the `RayTaskRunner` with a local, temporary Ray instance created by Prefect at flow run time. ```python theme={null} from prefect import flow from prefect_ray.task_runners import RayTaskRunner @flow(task_runner=RayTaskRunner()) def my_flow(): ... ``` This flow uses the `RayTaskRunner` configured to access an existing Ray instance at `ray://:10001`. ```python theme={null} from prefect import flow from prefect_ray.task_runners import RayTaskRunner @flow( task_runner=RayTaskRunner( address="ray://:10001", init_kwargs={"runtime_env": {"pip": ["prefect-ray"]}}, ) ) def my_flow(): ... ``` `RayTaskRunner` accepts the following optional parameters: | Parameter | Description | | ------------ | ----------------------------------------------------------------------------------------------------------------------------------- | | address | Address of a currently running Ray instance, starting with the [ray://](https://docs.ray.io/en/master/cluster/ray-client.html) URI. | | init\_kwargs | Additional kwargs to use when calling `ray.init`. 
| The Ray client uses the [ray://](https://docs.ray.io/en/master/cluster/ray-client.html) URI to indicate the address of a Ray instance. If you don't provide the `address` of a Ray instance, Prefect creates a temporary instance automatically. ## Where the flow's driver runs When you connect `RayTaskRunner` to a shared Ray cluster, the `address` you pass determines where the *driver* — the process actually running the flow engine and calling `.submit()` — sits relative to that cluster. This choice matters more than it looks. ### Driver outside the cluster (`ray://`) ```python theme={null} from prefect import flow from prefect_ray.task_runners import RayTaskRunner @flow( task_runner=RayTaskRunner( address="ray://:10001", init_kwargs={"runtime_env": {"pip": ["prefect-ray"]}}, ) ) def my_flow(): ... ``` Your Prefect worker runs off-cluster and reaches the Ray head over the Ray Client protocol. Easy to set up from a laptop or a worker that lives outside Kubernetes. **`runtime_env` only covers workers here** Ray's [dependency docs](https://docs.ray.io/en/latest/ray-core/handling-dependencies.html) spell out that a `runtime_env` passed to `ray.init(...)` — which is what `init_kwargs` feeds into — "is only applied to all children Tasks and Actors, not the entrypoint script (Driver) itself." Whatever your flow module imports at the top (including `prefect` and `prefect-ray`) must already be installed on the machine running the driver; you cannot ship driver dependencies via `init_kwargs`. A couple of other sharp edges to know about when running real workloads over `ray://`: * **The Ray Client connection is not durable.** Per Ray's [Ray Client docs](https://docs.ray.io/en/latest/cluster/running-applications/job-submission/ray-client.html), a network interruption of 30+ seconds will terminate the workload. Ray recommends the Jobs API over Ray Client for long-running and ML workloads; the same page labels Ray Client "(For Experts only)" for interactive use. 
* **The Ray head node needs `prefect-ray` and any package your tasks expose in their signatures or close over.** When you `.submit()` a task over `ray://`, the Ray Client server on the head has to prepare the pickled task before scheduling it onto a worker. Anything in that pickled object graph that the head's Python environment cannot import surfaces as a `ModuleNotFoundError` returned from the head, even when your workers would handle the task fine. In practice this means the head image must have: * `prefect-ray` itself installed — `RayTaskRunner` submissions reference `prefect_ray` classes, so a head without it fails before a task ever runs. * Any package whose classes appear in a task's parameter or return type annotations. * Any package whose objects are passed as task arguments or captured as module-level closures. A plain `import pandas` at the top of your flow file is not enough to trip this on its own — the head only needs the packages that actually end up in the pickled task graph. But as soon as a task signature mentions `pd.DataFrame`, or you `.submit(df)`, or a task reads a module-level DataFrame, the head needs `pandas` too. The usual mitigations (matching the head image to your worker image, runtime installs on the head, running one head per flow stack) are all costly. The cleanest fix is usually to move the driver inside the cluster so the Ray Client path is out of the picture entirely. ### Driver inside the cluster (`address="auto"`) ```python theme={null} from prefect import flow from prefect_ray.task_runners import RayTaskRunner @flow(task_runner=RayTaskRunner(address="auto")) def my_flow(): ... ``` With `address="auto"`, `ray.init` attaches to a raylet running on the same machine as the driver instead of dialing the head's Ray Client server. Pickling happens in the driver process and puts objects into Ray's distributed object store; workers on other nodes pull those objects and unpickle them at task-run time. 
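Pickle payloads carry import paths rather than code, which is what makes this split work: whichever process unpickles an object must be able to import its defining module. A stdlib-only sketch of that behavior, using `decimal` as a stand-in for a task dependency:

```python
import pickle
from decimal import Decimal

# Pickling an object from the `decimal` module embeds the import path
# ("decimal" / "Decimal") in the payload, not the class definition itself.
payload = pickle.dumps(Decimal("1.5"))
assert b"decimal" in payload and b"Decimal" in payload

# loads() resolves that import path at unpickle time, so the process
# calling it must be able to `import decimal` -- the analogue of a Ray
# worker (or, over ray://, the head node) needing your task's packages.
assert pickle.loads(payload) == Decimal("1.5")
```

The same logic explains the `ModuleNotFoundError`s described above: they surface from whichever process attempted the unpickle without the package installed.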
The head stays pure control plane and does not need to import your flow's dependencies — not even `prefect-ray` itself. There are a few ways to give your Prefect driver process a local raylet to attach to: * **Prefect pod joins the Ray cluster as a zero-capacity node.** Run `ray start --address=:6379 --num-cpus=0 --num-gpus=0 --block` on your Prefect worker pod at startup, wait for the raylet socket to appear, then exec the normal Prefect entrypoint. The pod joins the Ray cluster as a zero-capacity node: it never runs Ray tasks itself, but it hosts a local raylet that `address="auto"` can attach to. A minimal image entrypoint script: ```bash theme={null} #!/usr/bin/env bash set -euo pipefail : "${RAY_HEAD_GCS:?set RAY_HEAD_GCS to host:port of the Ray head GCS}" ray start --address="${RAY_HEAD_GCS}" --num-cpus=0 --num-gpus=0 --block & until [[ -S /tmp/ray/session_latest/sockets/raylet ]]; do sleep 1; done exec "$@" ``` This is usually the cleanest production pattern: your Prefect worker image is still a Prefect image, the Ray head image stays stock, and the two are only coupled by the GCS address. * **Sidecar container in an existing Ray pod.** Run the Prefect worker as a second container in the Ray head pod (or a Ray worker-group pod) with a shared `/tmp/ray` `emptyDir` volume. Simpler to set up if you already manage Ray via KubeRay, but couples the Prefect worker lifecycle to that of the Ray pod. If the always-on sidecar is a cost concern, pair it with something like [KEDA](https://keda.sh/) for scale-to-zero. * **Submit the flow script as a Ray Job.** `ray job submit --runtime-env-json='...' -- python run_flow.py` places the entrypoint on a cluster node, sets `RAY_ADDRESS` in its environment, and — unlike the `ray://` case above — applies `runtime_env` to the entrypoint script itself. Inside `run_flow.py`, `RayTaskRunner(address="auto")` attaches to the same cluster. 
Good for one-off submissions; harder to reconcile with Prefect deployments that themselves want to kick off runs. All three avoid the `ray://` sharp edges above. The ephemeral "one Ray cluster per flow run" pattern (e.g. via KubeRay) is a further option when you want full per-run isolation, at the cost of cluster start-up time on every run. ## Troubleshooting a remote Ray cluster When using the `RayTaskRunner` with a remote Ray cluster, you may run into issues that are not seen when using a local Ray instance. To resolve these issues, we recommend taking the following steps when working with a remote Ray cluster: 1. By default, Prefect will not persist any data to the filesystem of the remote Ray worker. However, if you want to take advantage of Prefect's caching ability, you will need to configure a remote result storage to persist results across task runs. We recommend using the [Prefect UI to configure a storage block](https://docs.prefect.io/develop/blocks/) to use for remote results storage. Here's an example of a flow that uses caching and remote result storage: ```python theme={null} from typing import List from prefect import flow, task from prefect.logging import get_run_logger from prefect.tasks import task_input_hash from prefect_aws import S3Bucket from prefect_ray.task_runners import RayTaskRunner # The result of this task will be cached in the configured result storage @task(cache_key_fn=task_input_hash) def say_hello(name: str) -> str: logger = get_run_logger() # This log statement will print only on the first run. Subsequent runs will be cached. logger.info(f"hello {name}!") return name @flow( task_runner=RayTaskRunner( address="ray://:10001", ), # Using an S3 block that has already been created via the Prefect UI result_storage="s3/my-result-storage", ) def greetings(names: List[str]) -> None: say_hello.map(names).wait() if __name__ == "__main__": greetings(["arthur", "trillian", "ford", "marvin"]) ``` 2.
If you get an error stating that the module 'prefect' cannot be found, ensure `prefect` is installed on the remote cluster, with: ```bash theme={null} pip install prefect ``` 3. If you get an error with a message similar to "File system created with scheme 's3' could not be created", ensure the required Python modules are installed on **both local and remote machines**. For example, if using S3 for storage: ```bash theme={null} pip install s3fs ``` 4. If you are seeing timeout or other connection errors, double-check the address provided to the `RayTaskRunner`. The address should look similar to `address='ray://:10001'`: ```python theme={null} RayTaskRunner(address="ray://1.23.199.255:10001") ``` ## Specify remote options The `remote_options` context can be used to control the task's remote options. For example, we can set the number of CPUs and GPUs to use for the `process` task: ```python theme={null} from prefect import flow, task from prefect_ray.task_runners import RayTaskRunner from prefect_ray.context import remote_options @task def process(x): return x + 1 @flow(task_runner=RayTaskRunner()) def my_flow(): # equivalent to setting @ray.remote(num_cpus=4, num_gpus=2) with remote_options(num_cpus=4, num_gpus=2): process.submit(42).wait() ``` ## Resources Refer to the `prefect-ray` [SDK documentation](/integrations/prefect-ray/api-ref/prefect_ray-context) to explore all the capabilities of the `prefect-ray` library. For further assistance using Ray, consult the [Ray documentation](https://docs.ray.io/en/latest/index.html).
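The `remote_options` context shown above is, in effect, a scoped override that the task runner reads when it wraps your task in `ray.remote`. A minimal stdlib sketch of that pattern with `contextvars` (an illustration of the mechanism, not `prefect-ray`'s actual implementation):

```python
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any, Iterator

# Context-scoped option overrides, reset automatically on exit.
_options: ContextVar[dict[str, Any]] = ContextVar("remote_options", default={})

@contextmanager
def remote_options(**opts: Any) -> Iterator[None]:
    token = _options.set({**_options.get(), **opts})
    try:
        yield
    finally:
        _options.reset(token)

def current_options() -> dict[str, Any]:
    # What a task runner would read just before calling ray.remote(**options)
    return _options.get()

with remote_options(num_cpus=4, num_gpus=2):
    assert current_options() == {"num_cpus": 4, "num_gpus": 2}
assert current_options() == {}  # overrides vanish outside the context
```

Because the overrides live in a `ContextVar`, nested `with` blocks layer cleanly and concurrent flows cannot see each other's options.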
# blocks Source: https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-blocks # `prefect_redis.blocks` Redis credentials handling ## Classes ### `RedisDatabase` Block used to manage authentication with a Redis database **Attributes:** * `host`: The host of the Redis server * `port`: The port the Redis server is running on * `db`: The database to write to and read from * `username`: The username to use when connecting to the Redis server * `password`: The password to use when connecting to the Redis server * `ssl`: Whether to use SSL when connecting to the Redis server **Methods:** #### `as_connection_params` ```python theme={null} as_connection_params(self) -> Dict[str, Any] ``` Return a dictionary suitable for unpacking #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` Validate parameters #### `from_connection_string` ```python theme={null} from_connection_string(cls, connection_string: Union[str, SecretStr]) -> 'RedisDatabase' ``` Create block from a Redis connection string Supports the following URL schemes: * `redis://` creates a TCP socket connection * `rediss://` creates an SSL-wrapped TCP socket connection **Args:** * `connection_string`: Redis connection string **Returns:** * `RedisDatabase` instance #### `get_async_client` ```python theme={null} get_async_client(self) -> redis.asyncio.Redis ``` Get Redis Client **Returns:** * An initialized Redis async client #### `get_client` ```python theme={null} get_client(self) -> redis.Redis ``` Get Redis Client **Returns:** * An initialized Redis client #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` Read a redis key **Args:** * `path`: Redis key to read from **Returns:** * Contents at key as bytes #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` Write to a redis key **Args:** * `path`: Redis key to write to * `content`: Binary object to write # client Source:
https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-client # `prefect_redis.client` ## Functions ### `cached` ```python theme={null} cached(fn: Callable[..., Any]) -> Callable[..., Any] ``` ### `close_all_cached_connections` ```python theme={null} close_all_cached_connections() -> None ``` Close all cached Redis connections. ### `clear_cached_clients` ```python theme={null} clear_cached_clients() -> None ``` Clear all cached Redis clients to force fresh connections. This should be called when a connection error is detected to ensure subsequent calls to `get_async_redis_client()` return fresh clients rather than stale ones with broken connections. ### `get_async_redis_client` ```python theme={null} get_async_redis_client(url: Union[str, None] = None, host: Union[str, None] = None, port: Union[int, None] = None, db: Union[int, None] = None, password: Union[str, None] = None, username: Union[str, None] = None, health_check_interval: Union[int, None] = None, decode_responses: bool = True, ssl: Union[bool, None] = None) -> Redis ``` Retrieves an async Redis client. When `url` is provided (or configured via `PREFECT_REDIS_MESSAGING_URL`), `Redis.from_url` is used and the discrete host/port/… arguments are ignored. **Args:** * `url`: Full Redis URL (e.g. `redis://localhost:6379/0`). * `host`: The host location. * `port`: The port to connect to the host with. * `db`: The Redis database to interact with. * `password`: The password for the redis host * `username`: Username for the redis instance * `health_check_interval`: Health check interval in seconds. * `decode_responses`: Whether to decode binary responses from Redis to unicode strings. * `ssl`: Whether to use SSL for the connection. **Returns:** * a Redis client ### `async_redis_from_settings` ```python theme={null} async_redis_from_settings(settings: RedisMessagingSettings, **options: Any) -> Redis ``` ## Classes ### `RedisMessagingSettings` Settings for connecting to Redis.
Connection can be configured either via a single `url` field (e.g. `redis://user:pass@host:6379/0`) or with the individual `host`/`port`/`db`/… fields. When `url` is set it takes precedence and the discrete fields are ignored. Environment variable: `PREFECT_REDIS_MESSAGING_URL` # lease_storage Source: https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-lease_storage # `prefect_redis.lease_storage` ## Classes ### `ConcurrencyLeaseStorage` A Redis-based concurrency lease storage implementation. **Methods:** #### `create_lease` ```python theme={null} create_lease(self, resource_ids: list[UUID], ttl: timedelta, metadata: ConcurrencyLimitLeaseMetadata | None = None) -> ResourceLease[ConcurrencyLimitLeaseMetadata] ``` #### `list_holders_for_limit` ```python theme={null} list_holders_for_limit(self, limit_id: UUID) -> list[tuple[UUID, ConcurrencyLeaseHolder]] ``` #### `read_active_lease_ids` ```python theme={null} read_active_lease_ids(self, limit: int = 100, offset: int = 0) -> list[UUID] ``` #### `read_expired_lease_ids` ```python theme={null} read_expired_lease_ids(self, limit: int = 100) -> list[UUID] ``` #### `read_lease` ```python theme={null} read_lease(self, lease_id: UUID) -> ResourceLease[ConcurrencyLimitLeaseMetadata] | None ``` #### `renew_lease` ```python theme={null} renew_lease(self, lease_id: UUID, ttl: timedelta) -> bool ``` Atomically renew a concurrency lease by updating its expiration. Uses a Lua script to atomically check if the lease exists, update its expiration in the lease data, and update the index - all in a single atomic operation, preventing race conditions from creating orphaned index entries. 
**Args:** * `lease_id`: The ID of the lease to renew * `ttl`: The new time-to-live duration **Returns:** * True if the lease was renewed, False if it didn't exist #### `revoke_lease` ```python theme={null} revoke_lease(self, lease_id: UUID) -> None ``` # locking Source: https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-locking # `prefect_redis.locking` ## Classes ### `RedisLockManager` A lock manager that uses Redis as a backend. **Attributes:** * `host`: The host of the Redis server * `port`: The port the Redis server is running on * `db`: The database to write to and read from * `username`: The username to use when connecting to the Redis server * `password`: The password to use when connecting to the Redis server * `ssl`: Whether to use SSL when connecting to the Redis server * `client`: The Redis client used to communicate with the Redis server * `async_client`: The asynchronous Redis client used to communicate with the Redis server **Methods:** #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquires a lock asynchronously. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Maximum time to wait for the lock to be acquired. * `hold_timeout`: Maximum time to hold the lock. **Returns:** * True if the lock was acquired, False otherwise. #### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquires a lock synchronously. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Maximum time to wait for the lock to be acquired.
* `hold_timeout`: Maximum time to hold the lock. **Returns:** * True if the lock was acquired, False otherwise. #### `await_for_lock` ```python theme={null} await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` #### `is_lock_holder` ```python theme={null} is_lock_holder(self, key: str, holder: str) -> bool ``` #### `is_locked` ```python theme={null} is_locked(self, key: str) -> bool ``` #### `release_lock` ```python theme={null} release_lock(self, key: str, holder: str) -> None ``` Releases the lock on the corresponding transaction record. Handles the case where a lock might have been released during a task retry. If the lock doesn't exist in Redis at all, this method will succeed even if the holder ID doesn't match the original holder. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock. **Raises:** * `ValueError`: If the lock is held by a different holder. #### `wait_for_lock` ```python theme={null} wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` # messaging Source: https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-messaging # `prefect_redis.messaging` ## Functions ### `ephemeral_subscription` ```python theme={null} ephemeral_subscription(topic: str, source: Optional[str] = None, group: Optional[str] = None) -> AsyncGenerator[dict[str, Any], None] ``` ### `break_topic` ```python theme={null} break_topic() ``` ## Classes ### `RedisMessagingPublisherSettings` Settings for the Redis messaging publisher. No settings are required to be set by the user but any of the settings can be overridden by the user using environment variables. ### `RedisMessagingConsumerSettings` Settings for the Redis messaging consumer. No settings are required to be set by the user but any of the settings can be overridden by the user using environment variables.
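Settings classes like these typically resolve overrides from environment variables at construction time. A toy sketch of that pattern (the `DEMO_CONSUMER_` prefix and the field names here are hypothetical, not `prefect-redis`'s actual schema):

```python
import os
from dataclasses import dataclass, fields

@dataclass
class ConsumerSettings:
    # Hypothetical fields, for illustration only
    block_timeout: int = 5
    batch_size: int = 10

    @classmethod
    def from_env(cls, prefix: str = "DEMO_CONSUMER_") -> "ConsumerSettings":
        overrides = {}
        for f in fields(cls):
            raw = os.environ.get(prefix + f.name.upper())
            if raw is not None:
                overrides[f.name] = f.type(raw)  # cast using the annotation
        return cls(**overrides)

os.environ["DEMO_CONSUMER_BATCH_SIZE"] = "50"
settings = ConsumerSettings.from_env()
assert settings == ConsumerSettings(block_timeout=5, batch_size=50)
```

Fields without a matching environment variable keep their defaults, which is why none of the settings are required.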
### `Cache` **Methods:** #### `clear_recently_seen_messages` ```python theme={null} clear_recently_seen_messages(self) -> None ``` #### `forget_duplicates` ```python theme={null} forget_duplicates(self, attribute: str, messages: list[M]) -> None ``` #### `without_duplicates` ```python theme={null} without_duplicates(self, attribute: str, messages: list[M]) -> list[M] ``` ### `RedisStreamsMessage` A message sent to a Redis stream. **Methods:** #### `acknowledge` ```python theme={null} acknowledge(self) -> None ``` ### `Subscription` A subscription-like object for Redis. We mimic the memory subscription interface so that we can set max\_retries and handle dead letter queue storage in Redis. ### `Publisher` **Methods:** #### `publish_data` ```python theme={null} publish_data(self, data: bytes, attributes: dict[str, Any]) ``` ### `Consumer` Consumer implementation for Redis Streams with DLQ support. **Methods:** #### `process_pending_messages` ```python theme={null} process_pending_messages(self, handler: MessageHandler, redis_client: Redis, message_batch_size: int, start_id: str = '0-0') ``` #### `run` ```python theme={null} run(self, handler: MessageHandler) -> None ``` # ordering Source: https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-ordering # `prefect_redis.ordering` Manages the partial causal ordering of events for a particular consumer. This module maintains a buffer of events to be processed, aiming to process them in the order they occurred causally. ## Classes ### `EventProcessingCompletion` Holds the result of completing event processing, including any followers. ### `CausalOrdering` **Methods:** #### `complete_event_and_get_followers` ```python theme={null} complete_event_and_get_followers(self, event: ReceivedEvent) -> list[ReceivedEvent] ``` Atomically marks the event as seen, retrieves any waiting followers, and releases the processing lock. 
This operation is atomic to prevent a race condition where a follower could park itself between the lock release and the followers check. #### `event_has_been_seen` ```python theme={null} event_has_been_seen(self, event: Union[UUID, Event]) -> bool ``` #### `event_has_started_processing` ```python theme={null} event_has_started_processing(self, event: Union[UUID, Event]) -> bool ``` #### `event_is_processing` ```python theme={null} event_is_processing(self, event: ReceivedEvent) -> AsyncGenerator[EventProcessingCompletion, None] ``` Mark an event as being processed for the duration of its lifespan through the ordering system. Yields an EventProcessingCompletion object that will be populated with any followers after successful processing. #### `followers_by_id` ```python theme={null} followers_by_id(self, follower_ids: list[UUID]) -> list[ReceivedEvent] ``` Returns the events with the given IDs, in the order they occurred #### `forget_event_is_processing` ```python theme={null} forget_event_is_processing(self, event: ReceivedEvent) -> None ``` #### `forget_follower` ```python theme={null} forget_follower(self, follower: ReceivedEvent) ``` Forget that this event is waiting on another event to arrive #### `get_followers` ```python theme={null} get_followers(self, leader: ReceivedEvent) -> list[ReceivedEvent] ``` Returns events that were waiting on this leader event to arrive #### `get_lost_followers` ```python theme={null} get_lost_followers(self) -> list[ReceivedEvent] ``` Returns events that were waiting on a leader event that never arrived #### `preceding_event_confirmed` ```python theme={null} preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) -> AsyncGenerator[None, None] ``` Events may optionally declare that they logically follow another event, so that we can preserve important event orderings in the face of unreliable delivery and ordering of messages from the queues. 
This function keeps track of the ID of each event that this shard has successfully processed going back to the PRECEDING\_EVENT\_LOOKBACK period. If an event arrives that must follow another one, confirm that we have recently seen and processed that event before proceeding. **Args:** * `event`: The event to be processed. This object should include metadata indicating if and what event it follows. * `depth`: The current recursion depth, used to prevent infinite recursion due to cyclic dependencies between events. Defaults to 0. **Raises:** * `EventArrivedEarly`: If the current event shouldn't be processed yet. #### `record_event_as_processing` ```python theme={null} record_event_as_processing(self, event: ReceivedEvent) -> bool ``` Record that an event is being processed, returning False if the event is already being processed. #### `record_event_as_seen` ```python theme={null} record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python theme={null} record_follower(self, event: ReceivedEvent) ``` Remember that this event is waiting on another event to arrive #### `wait_for_leader` ```python theme={null} wait_for_leader(self, event: ReceivedEvent) ``` Given an event, wait for its leader to be processed before proceeding, or raise EventArrivedEarly if we would wait too long in this attempt. # tasks Source: https://docs.prefect.io/integrations/prefect-redis/api-ref/prefect_redis-tasks # `prefect_redis.tasks` Prebuilt Prefect tasks for reading and writing data to Redis ## Functions ### `redis_set` ```python theme={null} redis_set(credentials: 'RedisDatabase', key: str, value: Any, ex: Optional[float] = None, px: Optional[float] = None, nx: bool = False, xx: bool = False) -> None ``` Set a Redis key to any value. Will use `cloudpickle` to convert `value` to binary representation. **Args:** * `credentials`: Redis credential block * `key`: Key to be set * `value`: Value to be set to `key`.
Does not accept open connections such as database connections * `ex`: If provided, sets an expire flag in seconds on `key` set * `px`: If provided, sets an expire flag in milliseconds on `key` set * `nx`: If set to `True`, set the value at `key` to `value` only if it does not already exist * `xx`: If set to `True`, set the value at `key` to `value` only if it already exists ### `redis_set_binary` ```python theme={null} redis_set_binary(credentials: 'RedisDatabase', key: str, value: bytes, ex: Optional[float] = None, px: Optional[float] = None, nx: bool = False, xx: bool = False) -> None ``` Set a Redis key to a binary value **Args:** * `credentials`: Redis credential block * `key`: Key to be set * `value`: Value to be set to `key`. Must be bytes * `ex`: If provided, sets an expire flag in seconds on `key` set * `px`: If provided, sets an expire flag in milliseconds on `key` set * `nx`: If set to `True`, set the value at `key` to `value` only if it does not already exist * `xx`: If set to `True`, set the value at `key` to `value` only if it already exists ### `redis_get` ```python theme={null} redis_get(credentials: 'RedisDatabase', key: str) -> Any ``` Get an object stored at a Redis key. Will use cloudpickle to reconstruct the object.
**Args:** * `credentials`: Redis credential block * `key`: Key to get **Returns:** * Fully reconstructed object, decoded from bytes in Redis ### `redis_get_binary` ```python theme={null} redis_get_binary(credentials: 'RedisDatabase', key: str) -> bytes ``` Get bytes stored at a Redis key **Args:** * `credentials`: Redis credential block * `key`: Key to get **Returns:** * Bytes from `key` in Redis ### `redis_execute` ```python theme={null} redis_execute(credentials: 'RedisDatabase', cmd: str) -> Any ``` Execute Redis command **Args:** * `credentials`: Redis credential block * `cmd`: Command to be executed **Returns:** * Command response # prefect-redis Source: https://docs.prefect.io/integrations/prefect-redis/index Integrations to extend Prefect's functionality with Redis. ## Getting started ### Install `prefect-redis` The following command will install a version of `prefect-redis` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[redis]" ``` Upgrade to the latest versions of `prefect` and `prefect-redis`: ```bash theme={null} pip install -U "prefect[redis]" ``` ### Register newly installed block types Register the block types in the `prefect-redis` module to make them available for use. ```bash theme={null} prefect block register -m prefect_redis ``` ## Resources Refer to the [SDK documentation](/integrations/prefect-redis/api-ref/prefect_redis-blocks) to explore all the capabilities of `prefect-redis`.
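The `redis_set` and `redis_get` tasks documented above are a serialize-then-store round trip. A rough in-memory sketch of that flow, with stdlib `pickle` standing in for `cloudpickle` and a plain dict standing in for the Redis server (the `demo_*` names are illustrative, not the library's API):

```python
import pickle  # stand-in for cloudpickle, which handles more object types

fake_redis: dict[str, bytes] = {}  # stand-in for a real Redis server

def demo_redis_set(key: str, value: object) -> None:
    # Mirrors redis_set: serialize the value, then store the bytes at key
    fake_redis[key] = pickle.dumps(value)

def demo_redis_get(key: str) -> object:
    # Mirrors redis_get: fetch the bytes, then reconstruct the object
    return pickle.loads(fake_redis[key])

demo_redis_set("scores", {"arthur": 42, "trillian": 7})
assert demo_redis_get("scores") == {"arthur": 42, "trillian": 7}
```

This is also why `redis_set` rejects values such as open database connections: serialization can capture data, not live OS resources.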
# commands Source: https://docs.prefect.io/integrations/prefect-shell/api-ref/prefect_shell-commands # `prefect_shell.commands` Tasks for interacting with shell commands ## Functions ### `shell_run_command` ```python theme={null} shell_run_command(command: str, env: dict[str, str] | None = None, helper_command: str | None = None, shell: str | None = None, extension: str | None = None, return_all: bool = False, stream_level: int = logging.INFO, cwd: str | bytes | os.PathLike[str] | None = None) -> list[str] | str ``` Runs arbitrary shell commands. **Args:** * `command`: Shell command to be executed; can also be provided post-initialization by calling this task instance. * `env`: Dictionary of environment variables to use for the subprocess; can also be provided at runtime. * `helper_command`: String representing a shell command, which will be executed prior to the `command` in the same process. Can be used to change directories, define helper functions, etc. for different commands in a flow. * `shell`: Shell to run the command with. * `extension`: File extension to be appended to the command to be executed. * `return_all`: Whether this task should return all lines of stdout as a list, or just the last line as a string. * `stream_level`: The logging level of the stream; defaults to 20, equivalent to `logging.INFO`. * `cwd`: The working directory context the command will be executed within **Returns:** * If `return_all` is `True`, all lines of stdout as a list; otherwise, the last line as a string. ## Classes ### `ShellProcess` A class representing a shell process. Supports both async (anyio.abc.Process) and sync (subprocess.Popen) processes. **Methods:** #### `afetch_result` ```python theme={null} afetch_result(self) -> list[str] ``` Retrieve the output of the shell operation (async version). **Returns:** * The lines output from the shell operation as a list.
#### `await_for_completion` ```python theme={null} await_for_completion(self) -> None ``` Wait for the shell command to complete after a process is triggered (async version). #### `fetch_result` ```python theme={null} fetch_result(self) -> list[str] ``` Retrieve the output of the shell operation (sync version). **Returns:** * The lines output from the shell operation as a list. #### `pid` ```python theme={null} pid(self) -> int ``` The PID of the process. **Returns:** * The PID of the process. #### `return_code` ```python theme={null} return_code(self) -> int | None ``` The return code of the process. **Returns:** * The return code of the process, or `None` if the process is still running. #### `wait_for_completion` ```python theme={null} wait_for_completion(self) -> None ``` Wait for the shell command to complete after a process is triggered (sync version). ### `ShellOperation` A block representing a shell operation, containing multiple commands. For long-lasting operations, use the trigger method and utilize the block as a context manager for automatic closure of processes when context is exited. If not, manually call the close method to close processes. For short-lasting operations, use the run method. Context is automatically managed with this method. **Attributes:** * `commands`: A list of commands to execute sequentially. * `stream_output`: Whether to stream output. * `env`: A dictionary of environment variables to set for the shell operation. * `working_dir`: The working directory context the commands will be executed within. * `shell`: The shell to use to execute the commands. * `extension`: The extension to use for the temporary file. If unset, defaults to `.ps1` on Windows and `.sh` on other platforms.
**Examples:** Load a configured block: ```python theme={null} from prefect_shell import ShellOperation shell_operation = ShellOperation.load("BLOCK_NAME") ``` **Methods:** #### `aclose` ```python theme={null} aclose(self) ``` Close the job block (async version). #### `arun` ```python theme={null} arun(self, **open_kwargs: dict[str, Any]) -> list[str] ``` Runs a shell command (async version), but unlike the trigger method, additionally waits and fetches the result directly, automatically managing the context. This method is ideal for short-lasting shell commands; for long-lasting shell commands, it is recommended to use the `trigger` method instead. **Args:** * `**open_kwargs`: Additional keyword arguments to pass to `open_process`. **Returns:** * The lines output from the shell command as a list. **Examples:** Sleep for 5 seconds and then print "Hello, world!": ```python theme={null} from prefect_shell import ShellOperation shell_output = await ShellOperation( commands=["sleep 5", "echo 'Hello, world!'"] ).arun() ``` #### `atrigger` ```python theme={null} atrigger(self, **open_kwargs: dict[str, Any]) -> ShellProcess ``` Triggers a shell command and returns the shell command run object to track the execution of the run (async version). This method is ideal for long-lasting shell commands; for short-lasting shell commands, it is recommended to use the `run` method instead. **Args:** * `**open_kwargs`: Additional keyword arguments to pass to `open_process`. **Returns:** * A `ShellProcess` object. **Examples:** Sleep for 5 seconds and then print "Hello, world!": ```python theme={null} from prefect_shell import ShellOperation async with ShellOperation( commands=["sleep 5", "echo 'Hello, world!'"], ) as shell_operation: shell_process = await shell_operation.atrigger() await shell_process.await_for_completion() shell_output = await shell_process.afetch_result() ``` #### `close` ```python theme={null} close(self) ``` Close the job block (sync version). 
#### `run` ```python theme={null} run(self, **open_kwargs: dict[str, Any]) -> list[str] ``` Runs a shell command (sync version), but unlike the trigger method, additionally waits and fetches the result directly, automatically managing the context. This method is ideal for short-lasting shell commands; for long-lasting shell commands, it is recommended to use the `trigger` method instead. **Args:** * `**open_kwargs`: Additional keyword arguments to pass to subprocess.Popen. **Returns:** * The lines output from the shell command as a list. **Examples:** Sleep for 5 seconds and then print "Hello, world!": ```python theme={null} from prefect_shell import ShellOperation shell_output = ShellOperation( commands=["sleep 5", "echo 'Hello, world!'"] ).run() ``` #### `trigger` ```python theme={null} trigger(self, **open_kwargs: dict[str, Any]) -> ShellProcess ``` Triggers a shell command and returns the shell command run object to track the execution of the run (sync version). This method is ideal for long-lasting shell commands; for short-lasting shell commands, it is recommended to use the `run` method instead. **Args:** * `**open_kwargs`: Additional keyword arguments to pass to subprocess.Popen. **Returns:** * A `ShellProcess` object. **Examples:** Sleep for 5 seconds and then print "Hello, world!": ```python theme={null} from prefect_shell import ShellOperation with ShellOperation( commands=["sleep 5", "echo 'Hello, world!'"], ) as shell_operation: shell_process = shell_operation.trigger() shell_process.wait_for_completion() shell_output = shell_process.fetch_result() ``` # prefect-shell Source: https://docs.prefect.io/integrations/prefect-shell/index Execute shell commands from within Prefect flows. ## Getting started ### Install `prefect-shell` The following command will install a version of `prefect-shell` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. 
```bash theme={null} pip install "prefect[shell]" ``` Upgrade to the latest versions of `prefect` and `prefect-shell`: ```bash theme={null} pip install -U "prefect[shell]" ``` ### Register newly installed block types Register the block types in the `prefect-shell` module to make them available for use. ```bash theme={null} prefect block register -m prefect_shell ``` ## Examples ### Integrate shell commands with Prefect flows With `prefect-shell`, you can use shell commands (and/or scripts) in Prefect flows to provide observability and resiliency. `prefect-shell` can be a useful tool if you're transitioning your orchestration from shell scripts to Prefect. Let's get the shell-abration started! The Python code below has shell commands embedded in a Prefect flow: ```python theme={null} from prefect import flow from datetime import datetime from prefect_shell import ShellOperation @flow def download_data(): today = datetime.today().strftime("%Y%m%d") # for short running operations, you can use the `run` method # which automatically manages the context ShellOperation( commands=[ "mkdir -p data", "mkdir -p data/${today}" ], env={"today": today} ).run() # for long running operations, you can use a context manager with ShellOperation( commands=[ "curl -O https://masie_web.apps.nsidc.org/pub/DATASETS/NOAA/G02135/north/daily/data/N_seaice_extent_daily_v3.0.csv", ], working_dir=f"data/{today}", ) as download_csv_operation: # trigger runs the process in the background download_csv_process = download_csv_operation.trigger() # then do other things here in the meantime, like download another file ... 
# when you're ready, wait for the process to finish download_csv_process.wait_for_completion() # if you'd like to get the output lines, you can use the `fetch_result` method output_lines = download_csv_process.fetch_result() if __name__ == "__main__": download_data() ``` Running this script results in output like this: ```bash theme={null} 14:48:16.550 | INFO | prefect.engine - Created flow run 'tentacled-chachalaca' for flow 'download-data' 14:48:17.977 | INFO | Flow run 'tentacled-chachalaca' - PID 19360 triggered with 2 commands running inside the '.' directory. 14:48:17.987 | INFO | Flow run 'tentacled-chachalaca' - PID 19360 completed with return code 0. 14:48:17.994 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 triggered with 1 commands running inside the PosixPath('data/20230201') directory. 14:48:18.009 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: % Total % Received % Xferd Average Speed Time Time Time Current Dl 14:48:18.010 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: oad Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 14:48:18.840 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: 11 1630k 11 192k 0 0 229k 0 0:00:07 --:--:-- 0:00:07 231k 14:48:19.839 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: 83 1630k 83 1368k 0 0 745k 0 0:00:02 0:00:01 0:00:01 747k 14:48:19.993 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: 100 1630k 100 1630k 0 0 819k 0 0 14:48:19.994 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 stream output: :00:01 0:00:01 --:--:-- 821k 14:48:19.996 | INFO | Flow run 'tentacled-chachalaca' - PID 19363 completed with return code 0. 14:48:19.998 | INFO | Flow run 'tentacled-chachalaca' - Successfully closed all open processes. 
14:48:20.203 | INFO | Flow run 'tentacled-chachalaca' - Finished in state Completed() ``` ### Save shell commands in Prefect blocks You can save commands within a `ShellOperation` block, then reuse them across multiple flows. Save the block with desired commands: ```python theme={null} from prefect_shell import ShellOperation ping_op = ShellOperation(commands=["ping -t 1 prefect.io"]) ping_op.save("block-name") # Load the saved block: ping_op = ShellOperation.load("block-name") ``` ## Resources Refer to the `prefect-shell` [SDK documentation](/integrations/prefect-shell/api-ref/prefect_shell-commands) to explore all the capabilities of the `prefect-shell` library. # credentials Source: https://docs.prefect.io/integrations/prefect-slack/api-ref/prefect_slack-credentials # `prefect_slack.credentials` Credential classes used to store Slack credentials. ## Classes ### `SlackCredentials` Block holding Slack credentials for use in tasks and flows. **Args:** * `token`: Bot user OAuth token for the Slack app used to perform actions. **Examples:** Load stored Slack credentials: ```python theme={null} from prefect_slack import SlackCredentials slack_credentials_block = SlackCredentials.load("BLOCK_NAME") ``` Get a Slack client: ```python theme={null} from prefect_slack import SlackCredentials slack_credentials_block = SlackCredentials.load("BLOCK_NAME") client = slack_credentials_block.get_client() ``` **Methods:** #### `get_client` ```python theme={null} get_client(self) -> AsyncWebClient ``` Returns an authenticated `AsyncWebClient` to interact with the Slack API. ### `SlackWebhook` Block holding a Slack webhook for use in tasks and flows. **Args:** * `url`: Slack webhook URL which can be used to send messages (e.g. `https://hooks.slack.com/XXX`).
**Examples:** Load stored Slack webhook: ```python theme={null} from prefect_slack import SlackWebhook slack_webhook_block = SlackWebhook.load("BLOCK_NAME") ``` Get a Slack webhook client: ```python theme={null} from prefect_slack import SlackWebhook slack_webhook_block = SlackWebhook.load("BLOCK_NAME") client = slack_webhook_block.get_client() ``` Send a notification in Slack: ```python theme={null} from prefect_slack import SlackWebhook slack_webhook_block = SlackWebhook.load("BLOCK_NAME") slack_webhook_block.notify("Hello, world!") ``` **Methods:** #### `get_client` ```python theme={null} get_client(self, sync_client: bool = False) -> Union[AsyncWebhookClient, WebhookClient] ``` Returns an authenticated client to interact with the configured Slack webhook: an `AsyncWebhookClient` by default, or a synchronous `WebhookClient` when `sync_client=True`. #### `notify` ```python theme={null} notify(self, body: str, subject: Optional[str] = None) ``` Sends a message to the Slack channel. #### `notify_async` ```python theme={null} notify_async(self, body: str, subject: Optional[str] = None) ``` Sends a message to the Slack channel asynchronously. # messages Source: https://docs.prefect.io/integrations/prefect-slack/api-ref/prefect_slack-messages # `prefect_slack.messages` Tasks for sending Slack messages. ## Functions ### `send_chat_message` ```python theme={null} send_chat_message(channel: str, slack_credentials: SlackCredentials, text: Optional[str] = None, attachments: Optional[Sequence[Union[Dict, 'slack_sdk.models.attachments.Attachment']]] = None, slack_blocks: Optional[Sequence[Union[Dict, 'slack_sdk.models.blocks.Block']]] = None) -> Dict ``` Sends a message to a Slack channel. **Args:** * `channel`: The name of the channel in which to post the chat message (e.g. #general). * `slack_credentials`: Instance of `SlackCredentials` initialized with a Slack bot token. * `text`: Contents of the message. It's a best practice to always provide a `text` argument when posting a message.
The `text` argument is used in places where content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc. * `attachments`: List of objects defining secondary context in the posted Slack message. The [Slack API docs](https://api.slack.com/messaging/composing/layouts#building-attachments) provide guidance on building attachments. * `slack_blocks`: List of objects defining the layout and formatting of the posted message. The [Slack API docs](https://api.slack.com/block-kit/building) provide guidance on building messages with blocks. **Returns:** * Response from the Slack API. Example response structures can be found in the [Slack API docs](https://api.slack.com/methods/chat.postMessage#examples). ### `send_incoming_webhook_message` ```python theme={null} send_incoming_webhook_message(slack_webhook: SlackWebhook, text: Optional[str] = None, attachments: Optional[Sequence[Union[Dict, 'slack_sdk.models.attachments.Attachment']]] = None, slack_blocks: Optional[Sequence[Union[Dict, 'slack_sdk.models.blocks.Block']]] = None) -> None ``` Sends a message via an incoming webhook. **Args:** * `slack_webhook`: Instance of `SlackWebhook` initialized with a Slack webhook URL. * `text`: Contents of the message. It's a best practice to always provide a `text` argument when posting a message. The `text` argument is used in places where content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc. * `attachments`: List of objects defining secondary context in the posted Slack message. The [Slack API docs](https://api.slack.com/messaging/composing/layouts#building-attachments) provide guidance on building attachments. * `slack_blocks`: List of objects defining the layout and formatting of the posted message. The [Slack API docs](https://api.slack.com/block-kit/building) provide guidance on building messages with blocks. 
# prefect-slack Source: https://docs.prefect.io/integrations/prefect-slack/index ## Welcome! `prefect-slack` is a collection of prebuilt Prefect tasks and blocks that can be used to quickly send Slack messages in your Prefect flows. ## Getting started ### Prerequisites A Slack account with permissions to create a Slack app and install it in your workspace. ### Installation The following command will install a version of `prefect-slack` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[slack]" ``` Upgrade to the latest versions of `prefect` and `prefect-slack`: ```bash theme={null} pip install -U "prefect[slack]" ``` ### Slack setup To use tasks in the package, create a Slack app and install it in your Slack workspace. You can create a Slack app by navigating to the [apps page](https://api.slack.com/apps) for your Slack account and selecting 'Create New App'. For tasks that require a Bot user OAuth token, you can get a token for your app by navigating to your app's **OAuth & Permissions** page. For tasks that require a Webhook URL, you can generate a new Webhook URL by navigating to your app's **Incoming Webhooks** page. Slack's [Basic app setup](https://api.slack.com/authentication/basics) guide provides additional details on setting up a Slack app.
### Write and run a flow ```python sync theme={null} import asyncio from prefect import flow from prefect.context import get_run_context from prefect_slack import SlackCredentials from prefect_slack.messages import send_chat_message @flow def example_send_message_flow(): context = get_run_context() # Run other tasks or flows here token = "xoxb-your-bot-token-here" asyncio.run( send_chat_message( slack_credentials=SlackCredentials(token), channel="#prefect", text=f"Flow run {context.flow_run.name} completed :tada:" ) ) if __name__ == "__main__": example_send_message_flow() ``` ```python async theme={null} import asyncio from prefect import flow from prefect.context import get_run_context from prefect_slack import SlackCredentials from prefect_slack.messages import send_chat_message @flow async def example_send_message_flow(): context = get_run_context() # Run other tasks or flows here token = "xoxb-your-bot-token-here" await send_chat_message( slack_credentials=SlackCredentials(token), channel="#prefect", text=f"Flow run {context.flow_run.name} completed :tada:" ) if __name__ == "__main__": asyncio.run(example_send_message_flow()) ``` ## Resources Refer to the `prefect-slack` [SDK documentation](/integrations/prefect-slack/api-ref/prefect_slack-credentials) to explore all the capabilities of the `prefect-slack` library. For further assistance developing with Slack, consult the [Slack documentation](https://api.slack.com/). ### Comparing SlackWebhook blocks Prefect includes a built-in `SlackWebhook` block in `prefect.blocks.notifications` that requires no extra dependencies.
The two blocks have different capabilities: | Block | Block type slug | Backend | Features | | ------------------------------------------- | ------------------------ | --------- | ----------------------------------------------------------- | | `prefect.blocks.notifications.SlackWebhook` | `slack-webhook` | Apprise | `notify_type`, `allow_private_urls`, Slack GovCloud support | | `prefect_slack.SlackWebhook` | `slack-incoming-webhook` | Slack SDK | `get_client()` for advanced SDK access | These are separate block types with different slugs, so a block created with one class cannot be loaded with the other. # credentials Source: https://docs.prefect.io/integrations/prefect-snowflake/api-ref/prefect_snowflake-credentials # `prefect_snowflake.credentials` Credentials block for authenticating with Snowflake. ## Classes ### `InvalidPemFormat` Invalid PEM Format Certificate ### `SnowflakeCredentials` Block used to manage authentication with Snowflake. **Args:** * `account`: The Snowflake account name. * `user`: The user name used to authenticate. * `password`: The password used to authenticate. * `private_key`: The PEM used to authenticate. * `authenticator`: The type of authenticator to use for initializing connection (oauth, externalbrowser, etc); refer to [Snowflake documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect) for details, and note that `externalbrowser` will only work in an environment where a browser is available. * `workload_identity_provider`: The workload identity provider to use when authenticator is set to `workload_identity`. * `token`: The OAuth or JWT Token to provide when authenticator is set to `oauth`, or workload\_identity\_provider is set to `oidc`. * `endpoint`: The Okta endpoint to use when authenticator is set to `okta_endpoint`, e.g. `https://.okta.com`. * `role`: The name of the default role to use. * `autocommit`: Whether to automatically commit.
**Methods:** #### `get_client` ```python theme={null} get_client(self, **connect_kwargs: Any) -> snowflake.connector.SnowflakeConnection ``` Returns an authenticated connection that can be used to query Snowflake databases. Any additional arguments passed to this method will be used to configure the SnowflakeConnection. For available parameters, please refer to the [Snowflake Python connector documentation](https://docs.snowflake.com/en/user-guide/python-connector-api.html#connect). **Args:** * `**connect_kwargs`: Additional arguments to pass to `snowflake.connector.connect`. **Returns:** * An authenticated Snowflake connection. #### `resolve_private_key` ```python theme={null} resolve_private_key(self) -> bytes | None ``` Converts a PEM encoded private key into a DER binary key. **Returns:** * The DER encoded key if `private_key` has been provided; otherwise `None`. **Raises:** * `InvalidPemFormat`: If private key is not in PEM format. # database Source: https://docs.prefect.io/integrations/prefect-snowflake/api-ref/prefect_snowflake-database # `prefect_snowflake.database` Module for querying against Snowflake databases. ## Functions ### `snowflake_query` ```python theme={null} snowflake_query(query: str, snowflake_connector: SnowflakeConnector, params: Union[Tuple[Any], Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, poll_frequency_seconds: int = 1) -> List[Tuple[Any]] ``` Executes a query against a Snowflake database. **Args:** * `query`: The query to execute against the database. * `params`: The params to replace the placeholders in the query. * `snowflake_connector`: The credentials to use to authenticate. * `cursor_type`: The type of database cursor to use for the query. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. **Returns:** * The output of `response.fetchall()`. **Examples:** Query Snowflake table with the ID value parameterized.
```python theme={null} from prefect import flow from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector, snowflake_query @flow def snowflake_query_flow(): snowflake_credentials = SnowflakeCredentials( account="account", user="user", password="password", ) snowflake_connector = SnowflakeConnector( database="database", warehouse="warehouse", schema="schema", credentials=snowflake_credentials ) result = snowflake_query( "SELECT * FROM table WHERE id=%(id_param)s LIMIT 8;", snowflake_connector, params={"id_param": 1} ) return result snowflake_query_flow() ``` ### `snowflake_query_async` ```python theme={null} snowflake_query_async(query: str, snowflake_connector: SnowflakeConnector, params: Union[Tuple[Any], Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, poll_frequency_seconds: int = 1) -> List[Tuple[Any]] ``` Executes a query against a Snowflake database. **Args:** * `query`: The query to execute against the database. * `params`: The params to replace the placeholders in the query. * `snowflake_connector`: The credentials to use to authenticate. * `cursor_type`: The type of database cursor to use for the query. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. **Returns:** * The output of `response.fetchall()`. **Examples:** Query Snowflake table with the ID value parameterized.
```python theme={null} import asyncio from prefect import flow from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector, snowflake_query_async @flow async def snowflake_query_async_flow(): snowflake_credentials = SnowflakeCredentials( account="account", user="user", password="password", ) snowflake_connector = SnowflakeConnector( database="database", warehouse="warehouse", schema="schema", credentials=snowflake_credentials ) result = await snowflake_query_async( "SELECT * FROM table WHERE id=%(id_param)s LIMIT 8;", snowflake_connector, params={"id_param": 1} ) return result asyncio.run(snowflake_query_async_flow()) ``` ### `snowflake_multiquery` ```python theme={null} snowflake_multiquery(queries: List[str], snowflake_connector: SnowflakeConnector, params: Union[Tuple[Any], Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, as_transaction: bool = False, return_transaction_control_results: bool = False, poll_frequency_seconds: int = 1) -> List[List[Tuple[Any]]] ``` Executes multiple queries against a Snowflake database in a shared session. Allows execution in a transaction. **Args:** * `queries`: The list of queries to execute against the database. * `params`: The params to replace the placeholders in the query. * `snowflake_connector`: The credentials to use to authenticate. * `cursor_type`: The type of database cursor to use for the query. * `as_transaction`: If True, queries are executed in a transaction. * `return_transaction_control_results`: Determines if the results of queries controlling the transaction (BEGIN/COMMIT) should be returned. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. **Returns:** * List of the outputs of `response.fetchall()` for each query. **Examples:** Query Snowflake table with the ID value parameterized.
```python theme={null} from prefect import flow from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery @flow def snowflake_multiquery_flow(): snowflake_credentials = SnowflakeCredentials( account="account", user="user", password="password", ) snowflake_connector = SnowflakeConnector( database="database", warehouse="warehouse", schema="schema", credentials=snowflake_credentials ) result = snowflake_multiquery( ["SELECT * FROM table WHERE id=%(id_param)s LIMIT 8;", "SELECT 1,2"], snowflake_connector, params={"id_param": 1}, as_transaction=True ) return result snowflake_multiquery_flow() ``` ### `snowflake_multiquery_async` ```python theme={null} snowflake_multiquery_async(queries: List[str], snowflake_connector: SnowflakeConnector, params: Union[Tuple[Any], Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, as_transaction: bool = False, return_transaction_control_results: bool = False, poll_frequency_seconds: int = 1) -> List[List[Tuple[Any]]] ``` Executes multiple queries against a Snowflake database in a shared session. Allows execution in a transaction. **Args:** * `queries`: The list of queries to execute against the database. * `params`: The params to replace the placeholders in the query. * `snowflake_connector`: The credentials to use to authenticate. * `cursor_type`: The type of database cursor to use for the query. * `as_transaction`: If True, queries are executed in a transaction. * `return_transaction_control_results`: Determines if the results of queries controlling the transaction (BEGIN/COMMIT) should be returned. * `poll_frequency_seconds`: Number of seconds to wait in between checks for run completion. **Returns:** * List of the outputs of `response.fetchall()` for each query. **Examples:** Query Snowflake table with the ID value parameterized.
```python theme={null} import asyncio from prefect import flow from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector, snowflake_multiquery_async @flow async def snowflake_multiquery_async_flow(): snowflake_credentials = SnowflakeCredentials( account="account", user="user", password="password", ) snowflake_connector = SnowflakeConnector( database="database", warehouse="warehouse", schema="schema", credentials=snowflake_credentials ) result = await snowflake_multiquery_async( ["SELECT * FROM table WHERE id=%(id_param)s LIMIT 8;", "SELECT 1,2"], snowflake_connector, params={"id_param": 1}, as_transaction=True ) return result asyncio.run(snowflake_multiquery_async_flow()) ``` ### `snowflake_query_sync` ```python theme={null} snowflake_query_sync(query: str, snowflake_connector: SnowflakeConnector, params: Union[Tuple[Any], Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor) -> List[Tuple[Any]] ``` Executes a query in sync mode against a Snowflake database. **Args:** * `query`: The query to execute against the database. * `params`: The params to replace the placeholders in the query. * `snowflake_connector`: The credentials to use to authenticate. * `cursor_type`: The type of database cursor to use for the query. **Returns:** * The output of `response.fetchall()`. **Examples:** Execute a put statement.
```python theme={null} from prefect import flow from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector, snowflake_query_sync @flow def snowflake_query_sync_flow(): snowflake_credentials = SnowflakeCredentials( account="account", user="user", password="password", ) snowflake_connector = SnowflakeConnector( database="database", warehouse="warehouse", schema="schema", credentials=snowflake_credentials ) result = snowflake_query_sync( "put file://a_file.csv @mystage;", snowflake_connector, ) return result snowflake_query_sync_flow() ``` ## Classes ### `SnowflakeConnector` Block used to manage connections with Snowflake. Upon instantiating, a connection is created and maintained for the life of the object until the close method is called. It is recommended to use this block as a context manager, which will automatically close the engine and its connections when the context is exited. It is also recommended that this block is loaded and consumed within a single task or flow because if the block is passed across separate tasks and flows, the state of the block's connection and cursor will be lost. **Args:** * `credentials`: The credentials to authenticate with Snowflake. * `database`: The name of the default database to use. * `warehouse`: The name of the default warehouse to use. * `schema`: The name of the default schema to use; this attribute is accessible through `SnowflakeConnector(...).schema_`. * `fetch_size`: The number of rows to fetch at a time. * `poll_frequency_s`: The number of seconds before checking query. **Examples:** Load stored Snowflake connector as a context manager: ```python theme={null} from prefect_snowflake.database import SnowflakeConnector snowflake_connector = SnowflakeConnector.load("BLOCK_NAME") ``` Insert data into database and fetch results.
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) conn.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = conn.fetch_all( "SELECT * FROM customers WHERE address = %(address)s", parameters={"address": "Space"} ) print(results) ``` **Methods:** #### `close` ```python theme={null} close(self) ``` Closes connection and its cursors. #### `execute` ```python theme={null} execute(self, operation: str, parameters: Optional[Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> None ``` Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operation upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Examples:** Create table named customers with two columns, name and address. ```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) ``` #### `execute_async` ```python theme={null} execute_async(self, operation: str, parameters: Optional[Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> None ``` Executes an operation on the database. 
This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operation upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Examples:** Create table named customers with two columns, name and address. ```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: await conn.execute_async( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) ``` #### `execute_many` ```python theme={null} execute_many(self, operation: str, seq_of_parameters: List[Dict[str, Any]]) -> None ``` Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operations upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. **Examples:** Create table and insert three rows into it.
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) conn.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Marvin", "address": "Highway 42"}, {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, ], ) ``` #### `execute_many_async` ```python theme={null} execute_many_async(self, operation: str, seq_of_parameters: List[Dict[str, Any]]) -> None ``` Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operations upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. **Examples:** Create table and insert three rows into it. ```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: await conn.execute_async( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await conn.execute_many_async( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Marvin", "address": "Highway 42"}, {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, ], ) ``` #### `fetch_all` ```python theme={null} fetch_all(self, operation: str, parameters: Optional[Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> List[Tuple[Any]] ``` Fetch all results from the database.
Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. #### `fetch_all_async` ```python theme={null} fetch_all_async(self, operation: str, parameters: Optional[Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> List[Tuple[Any]] ``` Fetch all results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Fetch all rows from the database where address is Highway 42. 
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: await conn.execute_async( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await conn.execute_many_async( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Marvin", "address": "Highway 42"}, {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, {"name": "Me", "address": "Myway 88"}, ], ) result = await conn.fetch_all_async( "SELECT * FROM customers WHERE address = %(address)s", parameters={"address": "Highway 42"}, ) print(result) # Marvin, Ford, Unknown ``` #### `fetch_many` ```python theme={null} fetch_many(self, operation: str, parameters: Optional[Sequence[Dict[str, Any]]] = None, size: Optional[int] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> List[Tuple[Any]] ``` Fetch a limited number of results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return; if None or 0, uses the value of `fetch_size` configured on the block. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Repeatedly fetch two rows from the database where address is Highway 42. 
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) conn.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Marvin", "address": "Highway 42"}, {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, {"name": "Me", "address": "Highway 42"}, ], ) result = conn.fetch_many( "SELECT * FROM customers WHERE address = %(address)s", parameters={"address": "Highway 42"}, size=2 ) print(result) # Marvin, Ford result = conn.fetch_many( "SELECT * FROM customers WHERE address = %(address)s", parameters={"address": "Highway 42"}, size=2 ) print(result) # Unknown, Me ``` #### `fetch_many_async` ```python theme={null} fetch_many_async(self, operation: str, parameters: Optional[Sequence[Dict[str, Any]]] = None, size: Optional[int] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> List[Tuple[Any]] ``` Fetch a limited number of results from the database asynchronously. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return; if None or 0, uses the value of `fetch_size` configured on the block. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. 
**Examples:**

Repeatedly fetch two rows from the database where address is Highway 42.

```python theme={null}
from prefect_snowflake.database import SnowflakeConnector

with SnowflakeConnector.load("BLOCK_NAME") as conn:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);"
    )
    conn.execute_many(
        "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);",
        seq_of_parameters=[
            {"name": "Marvin", "address": "Highway 42"},
            {"name": "Ford", "address": "Highway 42"},
            {"name": "Unknown", "address": "Highway 42"},
            {"name": "Me", "address": "Highway 42"},
        ],
    )
    result = await conn.fetch_many_async(
        "SELECT * FROM customers WHERE address = %(address)s",
        parameters={"address": "Highway 42"},
        size=2
    )
    print(result)  # Marvin, Ford
    result = await conn.fetch_many_async(
        "SELECT * FROM customers WHERE address = %(address)s",
        parameters={"address": "Highway 42"},
        size=2
    )
    print(result)  # Unknown, Me
```

#### `fetch_one`

```python theme={null}
fetch_one(self, operation: str, parameters: Optional[Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> Tuple[Any]
```

Fetch a single result from the database.

Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called.

**Args:**

* `operation`: The SQL query or other operation to be executed.
* `parameters`: The parameters for the operation.
* `cursor_type`: The class of the cursor to use when creating a Snowflake cursor.
* `**execute_kwargs`: Additional options to pass to `cursor.execute_async`.

**Returns:**

* A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple.

**Examples:**

Fetch one row from the database where address is Space.
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) conn.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) result = conn.fetch_one( "SELECT * FROM customers WHERE address = %(address)s", parameters={"address": "Space"} ) print(result) ``` #### `fetch_one_async` ```python theme={null} fetch_one_async(self, operation: str, parameters: Optional[Dict[str, Any]] = None, cursor_type: Type[SnowflakeCursor] = SnowflakeCursor, **execute_kwargs: Any) -> Tuple[Any] ``` Fetch a single result from the database asynchronously. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_cursors method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `cursor_type`: The class of the cursor to use when creating a Snowflake cursor. * `**execute_kwargs`: Additional options to pass to `cursor.execute_async`. **Returns:** * A tuple containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Fetch one row from the database where address is Space. 
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) conn.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) result = await conn.fetch_one_async( "SELECT * FROM customers WHERE address = %(address)s", parameters={"address": "Space"} ) print(result) ``` #### `get_connection` ```python theme={null} get_connection(self, **connect_kwargs: Any) -> SnowflakeConnection ``` Returns an authenticated connection that can be used to query from Snowflake databases. **Args:** * `**connect_kwargs`: Additional arguments to pass to `snowflake.connector.connect`. **Returns:** * The authenticated SnowflakeConnection. **Examples:** ```python theme={null} from prefect_snowflake.credentials import SnowflakeCredentials from prefect_snowflake.database import SnowflakeConnector snowflake_credentials = SnowflakeCredentials( account="account", user="user", password="password", ) snowflake_connector = SnowflakeConnector( database="database", warehouse="warehouse", schema="schema", credentials=snowflake_credentials ) with snowflake_connector.get_connection() as connection: ... ``` #### `reset_cursors` ```python theme={null} reset_cursors(self) -> None ``` Tries to close all opened cursors. **Examples:** Reset the cursors to refresh cursor position. 
```python theme={null} from prefect_snowflake.database import SnowflakeConnector with SnowflakeConnector.load("BLOCK_NAME") as conn: conn.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) conn.execute_many( "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) print(conn.fetch_one("SELECT * FROM customers")) # Ford conn.reset_cursors() print(conn.fetch_one("SELECT * FROM customers")) # should be Ford again ``` # prefect-snowflake Source: https://docs.prefect.io/integrations/prefect-snowflake/index The `prefect-snowflake` integration makes it easy to connect to Snowflake in your Prefect flows. You can run queries both synchronously and asynchronously as Prefect flows and tasks. ## Getting started ### Prerequisites * [A Snowflake account](https://www.snowflake.com/en/) and the necessary connection information. ### Installation Install `prefect-snowflake` as a dependency of Prefect. If you don't already have Prefect installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[snowflake]" ``` Upgrade to the latest versions of `prefect` and `prefect-snowflake`: ```bash theme={null} pip install -U "prefect[snowflake]" ``` ### Blocks setup The `prefect-snowflake` integration has two blocks: one for storing credentials and one for storing connection information. Register blocks in this module to view and edit them on Prefect Cloud: ```bash theme={null} prefect block register -m prefect_snowflake ``` #### Create the credentials block Below is a walkthrough on saving a `SnowflakeCredentials` block through code. Log into your Snowflake account to find your credentials. 
The example below uses a user and password combination, but refer to the [SDK documentation](/integrations/prefect-snowflake/api-ref/prefect_snowflake-credentials) for a full list of authentication and connection options. ```python theme={null} from prefect_snowflake import SnowflakeCredentials credentials = SnowflakeCredentials( account="ACCOUNT-PLACEHOLDER", # resembles nh12345.us-east-2.snowflake user="USER-PLACEHOLDER", password="PASSWORD-PLACEHOLDER" ) credentials.save("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") ``` #### Create the connection block Then, to create a `SnowflakeConnector` block: 1. After logging in, click on any worksheet. 2. On the left side, select a database and schema. 3. On the top right, select a warehouse. 4. Create a short script, replacing the placeholders below. ```python theme={null} from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector credentials = SnowflakeCredentials.load("CREDENTIALS-BLOCK-NAME-PLACEHOLDER") connector = SnowflakeConnector( credentials=credentials, database="DATABASE-PLACEHOLDER", schema="SCHEMA-PLACEHOLDER", warehouse="COMPUTE_WH", ) connector.save("CONNECTOR-BLOCK-NAME-PLACEHOLDER") ``` You can now easily load the saved block, which holds your credentials and connection info: ```python theme={null} from prefect_snowflake import SnowflakeCredentials, SnowflakeConnector SnowflakeConnector.load("CONNECTOR-BLOCK-NAME-PLACEHOLDER") ``` ## Examples To set up a table, use the `execute` and `execute_many` methods. Then, use the `fetch_all` method. If the results are too large to fit into memory, use the `fetch_many` method to retrieve data in chunks. By using the `SnowflakeConnector` as a context manager, you can make sure that the Snowflake connection and cursors are closed properly after you're done with them. 
```python theme={null}
from prefect import flow, task
from prefect_snowflake import SnowflakeConnector


@task
def setup_table(block_name: str) -> None:
    with SnowflakeConnector.load(block_name) as connector:
        connector.execute(
            "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);"
        )
        connector.execute_many(
            "INSERT INTO customers (name, address) VALUES (%(name)s, %(address)s);",
            seq_of_parameters=[
                {"name": "Ford", "address": "Highway 42"},
                {"name": "Unknown", "address": "Space"},
                {"name": "Me", "address": "Myway 88"},
            ],
        )


@task
def fetch_data(block_name: str) -> list:
    all_rows = []
    with SnowflakeConnector.load(block_name) as connector:
        while True:
            # Repeated fetch* calls using the same operation will
            # skip re-executing and instead return the next set of results
            new_rows = connector.fetch_many("SELECT * FROM customers", size=2)
            if len(new_rows) == 0:
                break
            all_rows.append(new_rows)
    return all_rows


@flow
def snowflake_flow(block_name: str) -> list:
    setup_table(block_name)
    all_rows = fetch_data(block_name)
    return all_rows


if __name__ == "__main__":
    snowflake_flow("CONNECTOR-BLOCK-NAME-PLACEHOLDER")
```

If the block's native methods don't meet your requirements, you can access the underlying Snowflake connection and use its built-in methods directly.

```python theme={null}
import pandas as pd
from prefect import flow
from prefect_snowflake.database import SnowflakeConnector
from snowflake.connector.pandas_tools import write_pandas


@flow
def snowflake_write_pandas_flow():
    connector = SnowflakeConnector.load("my-block")
    with connector.get_connection() as connection:
        table_name = "TABLE_NAME"
        ddl = "NAME STRING, NUMBER INT"
        statement = f'CREATE TABLE IF NOT EXISTS {table_name} ({ddl})'
        with connection.cursor() as cursor:
            cursor.execute(statement)

        # case sensitivity matters here!
        df = pd.DataFrame([('Marvin', 42), ('Ford', 88)], columns=['NAME', 'NUMBER'])
        success, num_chunks, num_rows, _ = write_pandas(
            conn=connection,
            df=df,
            table_name=table_name,
            database=connector.database,
            schema=connector.schema_  # note the "_" suffix
        )
```

## Resources

Refer to the `prefect-snowflake` [SDK documentation](/integrations/prefect-snowflake/api-ref/prefect_snowflake-database) to explore other capabilities of the `prefect-snowflake` library, such as async methods.

For further assistance using Snowflake, consult the [Snowflake documentation](https://docs.snowflake.com/) or the [Snowflake Python Connector documentation](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-example).

# credentials
Source: https://docs.prefect.io/integrations/prefect-sqlalchemy/api-ref/prefect_sqlalchemy-credentials

# `prefect_sqlalchemy.credentials`

Credential classes used to perform authenticated interactions with SQLAlchemy

## Classes

### `AsyncDriver`

Known dialects with their corresponding async drivers.

**Attributes:**

* `POSTGRESQL_ASYNCPG`: [postgresql+asyncpg](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.asyncpg)
* `SQLITE_AIOSQLITE`: [sqlite+aiosqlite](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.aiosqlite)
* `MYSQL_ASYNCMY`: [mysql+asyncmy](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.asyncmy)
* `MYSQL_AIOMYSQL`: [mysql+aiomysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.aiomysql)
* `ORACLE_ORACLEDB_ASYNC`: [oracle+oracledb\_async](https://docs.sqlalchemy.org/en/20/dialects/oracle.html#module-sqlalchemy.dialects.oracle.oracledb)

### `SyncDriver`

Known dialects with their corresponding sync drivers.
**Attributes:** * `POSTGRESQL_PSYCOPG2`: [postgresql+psycopg2](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2) * `POSTGRESQL_PG8000`: [postgresql+pg8000](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pg8000) * `POSTGRESQL_PSYCOPG2CFFI`: [postgresql+psycopg2cffi](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2cffi) * `POSTGRESQL_PYPOSTGRESQL`: [postgresql+pypostgresql](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pypostgresql) * `POSTGRESQL_PYGRESQL`: [postgresql+pygresql](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.pygresql) * `MYSQL_MYSQLDB`: [mysql+mysqldb](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqldb) * `MYSQL_PYMYSQL`: [mysql+pymysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pymysql) * `MYSQL_MYSQLCONNECTOR`: [mysql+mysqlconnector](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.mysqlconnector) * `MYSQL_CYMYSQL`: [mysql+cymysql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.cymysql) * `MYSQL_OURSQL`: [mysql+oursql](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.oursql) * `MYSQL_PYODBC`: [mysql+pyodbc](https://docs.sqlalchemy.org/en/14/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pyodbc) * `SQLITE_PYSQLITE`: [sqlite+pysqlite](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.pysqlite) * `SQLITE_PYSQLCIPHER`: [sqlite+pysqlcipher](https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#module-sqlalchemy.dialects.sqlite.pysqlcipher) * `ORACLE_CX_ORACLE`: 
[oracle+cx\_oracle](https://docs.sqlalchemy.org/en/14/dialects/oracle.html#module-sqlalchemy.dialects.oracle.cx_oracle) * `ORACLE_ORACLEDB`: [oracle+oracledb](https://docs.sqlalchemy.org/en/20/dialects/oracle.html#module-sqlalchemy.dialects.oracle.oracledb) * `MSSQL_PYODBC`: [mssql+pyodbc](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pyodbc) * `MSSQL_MXODBC`: [mssql+mxodbc](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.mxodbc) * `MSSQL_PYMSSQL`: [mssql+pymssql](https://docs.sqlalchemy.org/en/14/dialects/mssql.html#module-sqlalchemy.dialects.mssql.pymssql) ### `ConnectionComponents` Parameters to use to create a SQLAlchemy engine URL. **Attributes:** * `driver`: The driver name to use. * `database`: The name of the database to use. * `username`: The user name used to authenticate. * `password`: The password used to authenticate. * `host`: The host address of the database. * `port`: The port to connect to the database. * `query`: A dictionary of string keys to string values to be passed to the dialect and/or the DBAPI upon connect. **Methods:** #### `create_url` ```python theme={null} create_url(self) -> URL ``` Create a fully formed connection URL. **Returns:** * The SQLAlchemy engine URL. # database Source: https://docs.prefect.io/integrations/prefect-sqlalchemy/api-ref/prefect_sqlalchemy-database # `prefect_sqlalchemy.database` Tasks for querying a database with SQLAlchemy ## Functions ### `check_make_url` ```python theme={null} check_make_url(url: str) -> str ``` ## Classes ### `SqlAlchemyConnector` Block used to manage authentication with a database using synchronous drivers. Upon instantiating, an engine is created and maintained for the life of the object until the close method is called. It is recommended to use this block as a context manager, which will automatically close the engine and its connections when the context is exited. 
It is also recommended that this block is loaded and consumed within a single task or flow, because if the block is passed across separate tasks and flows, the state of the block's connection and cursor could be lost.

**Attributes:**

* `connection_info`: SQLAlchemy URL to create the engine; either create from components or create from a string.
* `connect_args`: The options which will be passed directly to the DBAPI's connect() method as additional keyword arguments.
* `fetch_size`: The number of rows to fetch at a time.

**Methods:**

#### `block_initialization`

```python theme={null}
block_initialization(self) -> None
```

Initializes the engine.

#### `close`

```python theme={null}
close(self) -> None
```

Closes sync connections and their cursors.

#### `execute`

```python theme={null}
execute(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> CursorResult
```

Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

Unlike the fetch methods, this method will always execute the operation upon calling.

**Args:**

* `operation`: The SQL query or other operation to be executed.
* `parameters`: The parameters for the operation.
* `**execution_options`: Options to pass to `Connection.execution_options`.

**Examples:**

Create a table and insert one row into it.
```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector with SqlAlchemyConnector.load("MY_BLOCK") as database: database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) database.execute( "INSERT INTO customers (name, address) VALUES (:name, :address);", parameters={"name": "Marvin", "address": "Highway 42"}, ) ``` #### `execute_many` ```python theme={null} execute_many(self, operation: str, seq_of_parameters: List[Dict[str, Any]], **execution_options: Dict[str, Any]) -> CursorResult ``` Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operation upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. * `**execution_options`: Options to pass to `Connection.execution_options`. **Examples:** Create a table and insert two rows into it. ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector with SqlAlchemyConnector.load("MY_BLOCK") as database: database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) ``` #### `fetch_all` ```python theme={null} fetch_all(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> List[Tuple[Any]] ``` Fetch all results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_connections method is called. 
**Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Options to pass to `Connection.execution_options`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Create a table, insert three rows into it, and fetch all where name is 'Me'. ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector with SqlAlchemyConnector.load("MY_BLOCK") as database: database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = database.fetch_all( "SELECT * FROM customers WHERE name = :name", parameters={"name": "Me"} ) ``` #### `fetch_many` ```python theme={null} fetch_many(self, operation: str, parameters: Optional[Dict[str, Any]] = None, size: Optional[int] = None, **execution_options: Dict[str, Any]) -> List[Tuple[Any]] ``` Fetch a limited number of results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_connections method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return; if None or 0, uses the value of `fetch_size` configured on the block. * `**execution_options`: Options to pass to `Connection.execution_options`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. 
**Examples:** Create a table, insert three rows into it, and fetch two rows repeatedly. ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector with SqlAlchemyConnector.load("MY_BLOCK") as database: database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = database.fetch_many("SELECT * FROM customers", size=2) print(results) results = database.fetch_many("SELECT * FROM customers", size=2) print(results) ``` #### `fetch_one` ```python theme={null} fetch_one(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> Tuple[Any] ``` Fetch a single result from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_connections method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Options to pass to `Connection.execution_options`. **Returns:** * A tuple containing the data returned by the database, where each column is a value in the tuple. **Examples:** Create a table, insert three rows into it, and fetch a row repeatedly. 
```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector with SqlAlchemyConnector.load("MY_BLOCK") as database: database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = True while results: results = database.fetch_one("SELECT * FROM customers") print(results) ``` #### `get_client` ```python theme={null} get_client(self, client_type: Literal['engine', 'connection'], **get_client_kwargs: Dict[str, Any]) -> Union[Engine, Connection] ``` Returns either an engine or connection that can be used to query from databases. **Args:** * `client_type`: Select from either 'engine' or 'connection'. * `**get_client_kwargs`: Additional keyword arguments to pass to either `get_engine` or `get_connection`. **Returns:** * The authenticated SQLAlchemy engine or connection. **Examples:** Create an engine. ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector sqlalchemy_connector = SqlAlchemyConnector.load("BLOCK_NAME") engine = sqlalchemy_connector.get_client(client_type="engine") ``` Create a context managed connection. ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector sqlalchemy_connector = SqlAlchemyConnector.load("BLOCK_NAME") with sqlalchemy_connector.get_client(client_type="connection") as conn: ... ``` #### `get_connection` ```python theme={null} get_connection(self, begin: bool = True, **connect_kwargs: Dict[str, Any]) -> Connection ``` Returns a connection that can be used to query from databases. **Args:** * `begin`: Whether to begin a transaction on the connection; if True, if any operations fail, the entire transaction will be rolled back. 
* `**connect_kwargs`: Additional keyword arguments to pass to either `engine.begin` or `engine.connect`.

**Returns:**

* The SQLAlchemy Connection.

**Examples:**

Create a synchronous connection as a context manager (with `begin=False`, no transaction is started).

```python theme={null}
from prefect_sqlalchemy import SqlAlchemyConnector

sqlalchemy_connector = SqlAlchemyConnector.load("BLOCK_NAME")

with sqlalchemy_connector.get_connection(begin=False) as connection:
    connection.execute("SELECT * FROM table LIMIT 1;")
```

#### `get_engine`

```python theme={null}
get_engine(self, **create_engine_kwargs: Dict[str, Any]) -> Engine
```

Returns an authenticated engine that can be used to query from databases.

If an engine already exists, it is returned instead of creating a new one.

**Returns:**

* The authenticated SQLAlchemy Engine.

**Examples:**

Create a synchronous engine to PostgreSQL using URL params.

```python theme={null}
from prefect import flow
from prefect_sqlalchemy import (
    SqlAlchemyConnector, ConnectionComponents, SyncDriver
)

@flow
def sqlalchemy_credentials_flow():
    sqlalchemy_credentials = SqlAlchemyConnector(
        connection_info=ConnectionComponents(
            driver=SyncDriver.POSTGRESQL_PSYCOPG2,
            username="prefect",
            password="prefect_password",
            database="postgres"
        )
    )
    print(sqlalchemy_credentials.get_engine())

sqlalchemy_credentials_flow()
```

#### `reset_connections`

```python theme={null}
reset_connections(self) -> None
```

Tries to close all opened connections and their results.

**Examples:**

Resets connections so `fetch_*` methods return new results.

```python theme={null}
from prefect_sqlalchemy import SqlAlchemyConnector

with SqlAlchemyConnector.load("MY_BLOCK") as database:
    results = database.fetch_one("SELECT * FROM customers")
    database.reset_connections()
    results = database.fetch_one("SELECT * FROM customers")
```

### `AsyncSqlAlchemyConnector`

Block used to manage authentication with a database using asynchronous drivers.
Upon instantiating, an engine is created and maintained for the life of the object until the close method is called.

It is recommended to use this block as an async context manager, which will automatically close the engine and its connections when the context is exited.

It is also recommended that this block is loaded and consumed within a single task or flow, because if the block is passed across separate tasks and flows, the state of the block's connection and cursor could be lost.

**Attributes:**

* `connection_info`: SQLAlchemy URL to create the engine; either create from components or create from a string.
* `connect_args`: The options which will be passed directly to the DBAPI's connect() method as additional keyword arguments.
* `fetch_size`: The number of rows to fetch at a time.

**Methods:**

#### `block_initialization`

```python theme={null}
block_initialization(self) -> None
```

Initializes the engine.

#### `close`

```python theme={null}
close(self) -> None
```

Closes async connections and their cursors.

#### `execute`

```python theme={null}
execute(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> CursorResult
```

Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE.

Unlike the fetch methods, this method will always execute the operation upon calling.

**Args:**

* `operation`: The SQL query or other operation to be executed.
* `parameters`: The parameters for the operation.
* `**execution_options`: Options to pass to `Connection.execution_options`.

**Examples:**

Create a table and insert one row into it.
```python theme={null} import asyncio from prefect_sqlalchemy import AsyncSqlAlchemyConnector async def example_run(): async with AsyncSqlAlchemyConnector.load("MY_BLOCK") as database: await database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await database.execute( "INSERT INTO customers (name, address) VALUES (:name, :address);", parameters={"name": "Marvin", "address": "Highway 42"}, ) asyncio.run(example_run()) ``` #### `execute_many` ```python theme={null} execute_many(self, operation: str, seq_of_parameters: List[Dict[str, Any]], **execution_options: Dict[str, Any]) -> CursorResult ``` Executes many operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. Unlike the fetch methods, this method will always execute the operation upon calling. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. * `**execution_options`: Options to pass to `Connection.execution_options`. **Examples:** Create a table and insert two rows into it. ```python theme={null} import asyncio from prefect_sqlalchemy import AsyncSqlAlchemyConnector async def example_run(): async with AsyncSqlAlchemyConnector.load("MY_BLOCK") as database: await database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) asyncio.run(example_run()) ``` #### `fetch_all` ```python theme={null} fetch_all(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> List[Tuple[Any]] ``` Fetch all results from the database. 
Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_connections method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Options to pass to `Connection.execution_options`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Create a table, insert three rows into it, and fetch all where name is 'Me'. ```python theme={null} import asyncio from prefect_sqlalchemy import AsyncSqlAlchemyConnector async def example_run(): async with AsyncSqlAlchemyConnector.load("MY_BLOCK") as database: await database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = await database.fetch_all( "SELECT * FROM customers WHERE name = :name", parameters={"name": "Me"} ) asyncio.run(example_run()) ``` #### `fetch_many` ```python theme={null} fetch_many(self, operation: str, parameters: Optional[Dict[str, Any]] = None, size: Optional[int] = None, **execution_options: Dict[str, Any]) -> List[Tuple[Any]] ``` Fetch a limited number of results from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_connections method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. 
* `size`: The number of results to return; if None or 0, uses the value of `fetch_size` configured on the block. * `**execution_options`: Options to pass to `Connection.execution_options`. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. **Examples:** Create a table, insert three rows into it, and fetch two rows repeatedly. ```python theme={null} import asyncio from prefect_sqlalchemy import AsyncSqlAlchemyConnector async def example_run(): async with AsyncSqlAlchemyConnector.load("MY_BLOCK") as database: await database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = await database.fetch_many("SELECT * FROM customers", size=2) print(results) results = await database.fetch_many("SELECT * FROM customers", size=2) print(results) asyncio.run(example_run()) ``` #### `fetch_one` ```python theme={null} fetch_one(self, operation: str, parameters: Optional[Dict[str, Any]] = None, **execution_options: Dict[str, Any]) -> Tuple[Any] ``` Fetch a single result from the database. Repeated calls using the same inputs to *any* of the fetch methods of this block will skip executing the operation again, and instead, return the next set of results from the previous execution, until the reset\_connections method is called. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_options`: Options to pass to `Connection.execution_options`. **Returns:** * A tuple containing the data returned by the database, where each column is a value in the tuple. 
**Examples:** Create a table, insert three rows into it, and fetch a row repeatedly. ```python theme={null} import asyncio from prefect_sqlalchemy import AsyncSqlAlchemyConnector async def example_run(): async with AsyncSqlAlchemyConnector.load("MY_BLOCK") as database: await database.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await database.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Space"}, {"name": "Me", "address": "Myway 88"}, ], ) results = True while results: results = await database.fetch_one("SELECT * FROM customers") print(results) asyncio.run(example_run()) ``` #### `get_client` ```python theme={null} get_client(self, client_type: Literal['engine', 'connection'], **get_client_kwargs: Dict[str, Any]) -> Union[AsyncEngine, AsyncConnection] ``` Returns either an engine or connection that can be used to query from databases. **Args:** * `client_type`: Select from either 'engine' or 'connection'. * `**get_client_kwargs`: Additional keyword arguments to pass to either `get_engine` or `get_connection`. **Returns:** * The authenticated SQLAlchemy engine or connection. **Examples:** Create an engine. ```python theme={null} from prefect_sqlalchemy import AsyncSqlAlchemyConnector sqlalchemy_connector = AsyncSqlAlchemyConnector.load("BLOCK_NAME") engine = sqlalchemy_connector.get_client(client_type="engine") ``` Create a context managed connection. ```python theme={null} from prefect_sqlalchemy import AsyncSqlAlchemyConnector sqlalchemy_connector = AsyncSqlAlchemyConnector.load("BLOCK_NAME") async with sqlalchemy_connector.get_client(client_type="connection") as conn: ... ``` #### `get_connection` ```python theme={null} get_connection(self, begin: bool = True, **connect_kwargs: Dict[str, Any]) -> AsyncConnection ``` Returns a connection that can be used to query from databases. 
**Args:**

* `begin`: Whether to begin a transaction on the connection; if True and any operations fail, the entire transaction will be rolled back.
* `**connect_kwargs`: Additional keyword arguments to pass to either `engine.begin` or `engine.connect`.

**Returns:**

* The SQLAlchemy AsyncConnection.

**Examples:**

Create a context-managed asynchronous connection.

```python theme={null}
import asyncio

from prefect_sqlalchemy import AsyncSqlAlchemyConnector

async def main():
    sqlalchemy_connector = await AsyncSqlAlchemyConnector.load("BLOCK_NAME")
    async with sqlalchemy_connector.get_connection(begin=False) as connection:
        await connection.execute("SELECT * FROM table LIMIT 1;")

asyncio.run(main())
```

#### `get_engine`

```python theme={null}
get_engine(self, **create_engine_kwargs: Dict[str, Any]) -> AsyncEngine
```

Returns an authenticated engine that can be used to query from databases. If an engine already exists, it is returned.

**Returns:**

* The authenticated SQLAlchemy AsyncEngine.

**Examples:**

Create an asynchronous engine to PostgreSQL using URL params.

```python theme={null}
import asyncio

from prefect import flow
from prefect_sqlalchemy import (
    AsyncSqlAlchemyConnector, ConnectionComponents, AsyncDriver
)

@flow
async def sqlalchemy_credentials_flow():
    sqlalchemy_credentials = AsyncSqlAlchemyConnector(
        connection_info=ConnectionComponents(
            driver=AsyncDriver.POSTGRESQL_ASYNCPG,
            username="prefect",
            password="prefect_password",
            database="postgres"
        )
    )
    print(sqlalchemy_credentials.get_engine())

asyncio.run(sqlalchemy_credentials_flow())
```

#### `reset_connections`

```python theme={null}
reset_connections(self) -> None
```

Tries to close all opened connections and their results.

**Examples:**

Resets connections so `fetch_*` methods return new results.
```python theme={null} import asyncio from prefect_sqlalchemy import AsyncSqlAlchemyConnector async def example_run(): async with AsyncSqlAlchemyConnector.load("MY_BLOCK") as database: results = await database.fetch_one("SELECT * FROM customers") await database.reset_connections() results = await database.fetch_one("SELECT * FROM customers") asyncio.run(example_run()) ``` # prefect-sqlalchemy Source: https://docs.prefect.io/integrations/prefect-sqlalchemy/index # Welcome! `prefect-sqlalchemy` helps you connect to a database in your Prefect flows. ## Getting started ### Install `prefect-sqlalchemy` The following command will install a version of `prefect-sqlalchemy` compatible with your installed version of `prefect`. If you don't already have `prefect` installed, it will install the newest version of `prefect` as well. ```bash theme={null} pip install "prefect[sqlalchemy]" ``` Upgrade to the latest versions of `prefect` and `prefect-sqlalchemy`: ```bash theme={null} pip install -U "prefect[sqlalchemy]" ``` ### Register newly installed block types Register the block types in the `prefect-sqlalchemy` module to make them available for use. ```bash theme={null} prefect block register -m prefect_sqlalchemy ``` ## Examples ### Save credentials to a block To use the `load` method on Blocks, you must have a block saved through code or saved through the UI. ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver connector = SqlAlchemyConnector( connection_info=ConnectionComponents( driver=SyncDriver.POSTGRESQL_PSYCOPG2, username="USERNAME-PLACEHOLDER", password="PASSWORD-PLACEHOLDER", host="localhost", port=5432, database="DATABASE-PLACEHOLDER", ) ) connector.save("BLOCK_NAME-PLACEHOLDER") ``` Load the saved block that holds your credentials: ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector SqlAlchemyConnector.load("BLOCK_NAME-PLACEHOLDER") ``` The required arguments depend upon the desired driver. 
For example, SQLite requires only the `driver` and `database` arguments: ```python theme={null} from prefect_sqlalchemy import SqlAlchemyConnector, ConnectionComponents, SyncDriver connector = SqlAlchemyConnector( connection_info=ConnectionComponents( driver=SyncDriver.SQLITE_PYSQLITE, database="DATABASE-PLACEHOLDER.db" ) ) connector.save("BLOCK_NAME-PLACEHOLDER") ``` ### Work with databases in a flow To set up a table, use the `execute` and `execute_many` methods. Use the `fetch_many` method to retrieve data in a stream until there's no more data. Use the `SqlAlchemyConnector` as a context manager, to ensure that the SQLAlchemy engine and any connected resources are closed properly after you're done with them. **Async support** For async workflows with async database drivers (like `AsyncDriver.SQLITE_AIOSQLITE` or `AsyncDriver.POSTGRESQL_ASYNCPG`), use `AsyncSqlAlchemyConnector` instead of `SqlAlchemyConnector`. See the **Async** tab below for a complete example. ```python theme={null} from prefect import flow, task from prefect_sqlalchemy import SqlAlchemyConnector @task def setup_table(block_name: str) -> None: with SqlAlchemyConnector.load(block_name) as connector: connector.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) connector.execute( "INSERT INTO customers (name, address) VALUES (:name, :address);", parameters={"name": "Marvin", "address": "Highway 42"}, ) connector.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, ], ) @task def fetch_data(block_name: str) -> list: all_rows = [] with SqlAlchemyConnector.load(block_name) as connector: while True: # Repeated fetch* calls using the same operation will # skip re-executing and instead return the next set of results new_rows = connector.fetch_many("SELECT * FROM customers", size=2) if len(new_rows) == 0: break all_rows.append(new_rows) 
return all_rows @flow def sqlalchemy_flow(block_name: str) -> list: setup_table(block_name) all_rows = fetch_data(block_name) return all_rows if __name__ == "__main__": sqlalchemy_flow("BLOCK-NAME-PLACEHOLDER") ``` ```python theme={null} from prefect import flow, task from prefect_sqlalchemy import AsyncSqlAlchemyConnector import asyncio @task async def setup_table(block_name: str) -> None: async with await AsyncSqlAlchemyConnector.load(block_name) as connector: await connector.execute( "CREATE TABLE IF NOT EXISTS customers (name varchar, address varchar);" ) await connector.execute( "INSERT INTO customers (name, address) VALUES (:name, :address);", parameters={"name": "Marvin", "address": "Highway 42"}, ) await connector.execute_many( "INSERT INTO customers (name, address) VALUES (:name, :address);", seq_of_parameters=[ {"name": "Ford", "address": "Highway 42"}, {"name": "Unknown", "address": "Highway 42"}, ], ) @task async def fetch_data(block_name: str) -> list: all_rows = [] async with await AsyncSqlAlchemyConnector.load(block_name) as connector: while True: # Repeated fetch* calls using the same operation will # skip re-executing and instead return the next set of results new_rows = await connector.fetch_many("SELECT * FROM customers", size=2) if len(new_rows) == 0: break all_rows.append(new_rows) return all_rows @flow async def sqlalchemy_flow(block_name: str) -> list: await setup_table(block_name) all_rows = await fetch_data(block_name) return all_rows if __name__ == "__main__": asyncio.run(sqlalchemy_flow("BLOCK-NAME-PLACEHOLDER")) ``` ## Resources Refer to the `prefect-sqlalchemy` [SDK documentation](/integrations/prefect-sqlalchemy/api-ref/prefect_sqlalchemy-credentials) to explore all the capabilities of the `prefect-sqlalchemy` library. For assistance using SQLAlchemy, consult the [SQLAlchemy documentation](https://www.sqlalchemy.org/). 
# Use integrations

Source: https://docs.prefect.io/integrations/use-integrations

Prefect integrations are PyPI packages you can install to help you build and integrate your workflows with third parties.

## Install an integration package

Install an integration package with `pip`. For example, to install `prefect-aws` you can:

* install the package directly:

```bash theme={null}
pip install prefect-aws
```

* install the corresponding extra:

```bash theme={null}
pip install 'prefect[aws]'
```

See [the `project.optional-dependencies` section of `pyproject.toml`](https://github.com/PrefectHQ/prefect/blob/main/pyproject.toml) for the full list of extras and the versions they specify.

## Register blocks from an integration

Once the package is installed, [register the blocks](/v3/develop/blocks/#registering-blocks-for-use-in-the-prefect-ui) within the integration to view them in the Prefect Cloud UI.

For example, to register the blocks available in `prefect-aws`:

```bash theme={null}
prefect block register -m prefect_aws
```

To use a block's `load` method, you must have a block [saved](/v3/develop/blocks/#saving-blocks). [Learn more about blocks](/v3/develop/blocks).

## Use tasks and flows from an integration

Integrations may contain pre-built tasks and flows that can be imported and called within your code. For example, read a secret from AWS Secrets Manager with the `read_secret` task:

```python theme={null}
from prefect import flow
from prefect_aws import AwsCredentials
from prefect_aws.secrets_manager import read_secret

@flow
def connect_to_database():
    aws_credentials = AwsCredentials.load("MY_BLOCK_NAME")
    secret_value = read_secret(
        secret_name="db_password",
        aws_credentials=aws_credentials
    )
    # Then, use secret_value to connect to a database
```

## Customize tasks and flows from an integration

To customize pre-configured tasks or flows, use `with_options`.
For example, configure retries for dbt Cloud jobs: ```python theme={null} from prefect import flow from prefect_dbt.cloud import DbtCloudCredentials from prefect_dbt.cloud.jobs import trigger_dbt_cloud_job_run_and_wait_for_completion custom_run_dbt_cloud_job = trigger_dbt_cloud_job_run_and_wait_for_completion.with_options( name="Run My DBT Cloud Job", retries=2, retry_delay_seconds=10 ) @flow def run_dbt_job_flow(): run_result = custom_run_dbt_cloud_job( dbt_cloud_credentials=DbtCloudCredentials.load("my-dbt-cloud-credentials"), job_id=1 ) if __name__ == "__main__": run_dbt_job_flow() ``` # How to use and configure the API client Source: https://docs.prefect.io/v3/advanced/api-client ## Overview The [`PrefectClient`](https://reference.prefect.io/prefect/client/) offers methods to simplify common operations against Prefect's REST API that may not be abstracted away by the SDK. For example, to [reschedule flow runs](/v3/develop/interact-with-api/#reschedule-late-flow-runs), one might use methods like: * `read_flow_runs` with a `FlowRunFilter` to read certain flow runs * `create_flow_run_from_deployment` to schedule new flow runs * `delete_flow_run` to delete a very `Late` flow run ### Getting a client By default, `get_client()` returns an asynchronous client to be used as a context manager, but you may also use a synchronous client. ```python async theme={null} from prefect import get_client async with get_client() as client: response = await client.hello() print(response.json()) # 👋 ``` You can also use a synchronous client: ```python sync theme={null} from prefect import get_client with get_client(sync_client=True) as client: response = client.hello() print(response.json()) # 👋 ``` ## Pagination and default query limits Client methods that accept `limit` and `offset` parameters - such as `read_flow_runs`, `read_deployments`, and `read_task_runs` — are subject to a server-side maximum. 
When `limit` is `None` (the default), the server applies `PREFECT_API_DEFAULT_LIMIT`, which defaults to `200`. To retrieve all matching records, paginate with `offset`: ```python theme={null} from prefect import get_client async def read_all_deployments(page_size: int = 200): all_deployments = [] offset = 0 async with get_client() as client: while True: page = await client.read_deployments( limit=page_size, offset=offset ) if not page: break all_deployments.extend(page) if len(page) < page_size: break offset += page_size return all_deployments ``` ## Configure custom headers You can configure custom HTTP headers to be sent with every API request by setting the `PREFECT_CLIENT_CUSTOM_HEADERS` setting. This is useful for adding authentication headers, API keys, or other custom headers required by proxies, CDNs, or security systems. ### Setting custom headers Custom headers can be configured via environment variables or settings. The headers are specified as key-value pairs in JSON format. ```bash Environment variable theme={null} export PREFECT_CLIENT_CUSTOM_HEADERS='{"CF-Access-Client-Id": "your-client-id", "CF-Access-Client-Secret": "your-secret"}' ``` ```bash CLI theme={null} prefect config set PREFECT_CLIENT_CUSTOM_HEADERS='{"CF-Access-Client-Id": "your-client-id", "CF-Access-Client-Secret": "your-secret"}' ``` ```toml prefect.toml theme={null} [client] custom_headers = '''{ "CF-Access-Client-Id": "your-client-id", "CF-Access-Client-Secret": "your-secret", "X-API-Key": "your-api-key" }''' ``` **Protected headers** Certain headers are protected and cannot be overridden by custom headers for security reasons: * `User-Agent` - Managed by Prefect to identify client version * `Prefect-Csrf-Token` - Used for CSRF protection * `Prefect-Csrf-Client` - Used for CSRF protection If you attempt to override these headers, Prefect will log a warning and ignore the custom header value. 
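To illustrate this precedence, here is a minimal, hypothetical sketch (not Prefect's actual implementation) of how a JSON-encoded custom-header setting can be merged into a client's headers while the protected headers listed above are left untouched:

```python
import json

# Headers Prefect manages itself (see the list above)
PROTECTED_HEADERS = {"User-Agent", "Prefect-Csrf-Token", "Prefect-Csrf-Client"}

def merge_custom_headers(base: dict[str, str], raw_setting: str) -> dict[str, str]:
    """Merge JSON-encoded custom headers into base headers, skipping protected ones."""
    merged = dict(base)
    for name, value in json.loads(raw_setting).items():
        if name in PROTECTED_HEADERS:
            # Prefect would log a warning here and ignore the override
            continue
        merged[name] = value
    return merged

headers = merge_custom_headers(
    {"User-Agent": "prefect/3.x"},
    '{"CF-Access-Client-Id": "your-client-id", "User-Agent": "spoofed"}',
)
print(headers)  # the User-Agent override is ignored
```

The same merge order applies however the setting is supplied: environment variable, CLI, or `prefect.toml`.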
## Examples These examples are meant to illustrate how one might develop their own utilities for interacting with the API. If you believe a client method is missing, or you'd like to see a specific pattern better represented in the SDK generally, please [open an issue](https://github.com/PrefectHQ/prefect/issues/new/choose). ### Reschedule late flow runs To bulk reschedule flow runs that are late, delete the late flow runs and create new ones in a `Scheduled` state with a delay. This is useful if you accidentally scheduled many flow runs of a deployment to an inactive work pool, for example. The following example reschedules the last three late flow runs of a deployment named `healthcheck-storage-test` to run six hours later than their original expected start time. It also deletes any remaining late flow runs of that deployment. First, define the rescheduling function: ```python theme={null} async def reschedule_late_flow_runs( deployment_name: str, delay: timedelta, most_recent_n: int, delete_remaining: bool = True, states: list[str] | None = None ) -> list[FlowRun]: states = states or ["Late"] async with get_client() as client: flow_runs = await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=dict(name=dict(any_=states)), expected_start_time=dict(before_=datetime.now(timezone.utc)), ), deployment_filter=DeploymentFilter(name={'like_': deployment_name}), sort=FlowRunSort.START_TIME_DESC, limit=most_recent_n if not delete_remaining else None ) rescheduled_flow_runs: list[FlowRun] = [] for i, run in enumerate(flow_runs): await client.delete_flow_run(flow_run_id=run.id) if i < most_recent_n: new_run = await client.create_flow_run_from_deployment( deployment_id=run.deployment_id, state=Scheduled(scheduled_time=run.expected_start_time + delay), ) rescheduled_flow_runs.append(new_run) return rescheduled_flow_runs ``` Then use it to reschedule flows: ```python theme={null} rescheduled_flow_runs = asyncio.run( reschedule_late_flow_runs( 
deployment_name="healthcheck-storage-test", delay=timedelta(hours=6), most_recent_n=3, ) ) ``` ```python reschedule_late_flows.py theme={null} from __future__ import annotations import asyncio from datetime import datetime, timedelta, timezone from prefect import get_client from prefect.client.schemas.filters import DeploymentFilter, FlowRunFilter from prefect.client.schemas.objects import FlowRun from prefect.client.schemas.sorting import FlowRunSort from prefect.states import Scheduled async def reschedule_late_flow_runs( deployment_name: str, delay: timedelta, most_recent_n: int, delete_remaining: bool = True, states: list[str] | None = None ) -> list[FlowRun]: states = states or ["Late"] async with get_client() as client: flow_runs = await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=dict(name=dict(any_=states)), expected_start_time=dict(before_=datetime.now(timezone.utc)), ), deployment_filter=DeploymentFilter(name={'like_': deployment_name}), sort=FlowRunSort.START_TIME_DESC, limit=most_recent_n if not delete_remaining else None ) if not flow_runs: print(f"No flow runs found in states: {states!r}") return [] rescheduled_flow_runs: list[FlowRun] = [] for i, run in enumerate(flow_runs): await client.delete_flow_run(flow_run_id=run.id) if i < most_recent_n: new_run = await client.create_flow_run_from_deployment( deployment_id=run.deployment_id, state=Scheduled(scheduled_time=run.expected_start_time + delay), ) rescheduled_flow_runs.append(new_run) return rescheduled_flow_runs if __name__ == "__main__": rescheduled_flow_runs = asyncio.run( reschedule_late_flow_runs( deployment_name="healthcheck-storage-test", delay=timedelta(hours=6), most_recent_n=3, ) ) print(f"Rescheduled {len(rescheduled_flow_runs)} flow runs") assert all(run.state.is_scheduled() for run in rescheduled_flow_runs) assert all( run.expected_start_time > datetime.now(timezone.utc) for run in rescheduled_flow_runs ) ``` ### Get the last `N` completed flow runs from your workspace To 
get the last `N` completed flow runs from your workspace, use `read_flow_runs` and `prefect.client.schemas`.

This example gets the last three completed flow runs from your workspace:

```python theme={null}
async def get_most_recent_flow_runs(
    n: int,
    states: list[str] | None = None
) -> list[FlowRun]:
    async with get_client() as client:
        return await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                state={'type': {'any_': states or ["COMPLETED"]}}
            ),
            sort=FlowRunSort.END_TIME_DESC,
            limit=n,
        )
```

Use it to get the last 3 completed runs:

```python theme={null}
flow_runs: list[FlowRun] = asyncio.run(
    get_most_recent_flow_runs(n=3)
)
```

```python get_recent_flows.py theme={null}
from __future__ import annotations

import asyncio

from prefect import get_client
from prefect.client.schemas.filters import FlowRunFilter
from prefect.client.schemas.objects import FlowRun
from prefect.client.schemas.sorting import FlowRunSort

async def get_most_recent_flow_runs(
    n: int,
    states: list[str] | None = None
) -> list[FlowRun]:
    async with get_client() as client:
        return await client.read_flow_runs(
            flow_run_filter=FlowRunFilter(
                state={'type': {'any_': states or ["COMPLETED"]}}
            ),
            sort=FlowRunSort.END_TIME_DESC,
            limit=n,
        )

if __name__ == "__main__":
    flow_runs: list[FlowRun] = asyncio.run(
        get_most_recent_flow_runs(n=3)
    )
    assert len(flow_runs) == 3
    assert all(
        run.state.is_completed() for run in flow_runs
    )
    assert (
        end_times := [run.end_time for run in flow_runs]
    ) == sorted(end_times, reverse=True)
```

Instead of the last three from the whole workspace, you can also use the `DeploymentFilter` to get the last three completed flow runs of a specific deployment.

### Transition all running flows to cancelled through the Client

Use `get_client` to set multiple runs to a `Cancelled` state. This example cancels all flow runs that are in `Pending`, `Running`, `Scheduled`, or `Late` states when the script is run.
```python theme={null} async def list_flow_runs_with_states(states: list[str]) -> list[FlowRun]: async with get_client() as client: return await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=FlowRunFilterState( name=FlowRunFilterStateName(any_=states) ) ) ) async def cancel_flow_runs(flow_runs: list[FlowRun]): async with get_client() as client: for flow_run in flow_runs: state = flow_run.state.copy( update={"name": "Cancelled", "type": StateType.CANCELLED} ) await client.set_flow_run_state(flow_run.id, state, force=True) ``` Cancel all pending, running, scheduled or late flows: ```python theme={null} async def bulk_cancel_flow_runs(): states = ["Pending", "Running", "Scheduled", "Late"] flow_runs = await list_flow_runs_with_states(states) while flow_runs: print(f"Cancelling {len(flow_runs)} flow runs") await cancel_flow_runs(flow_runs) flow_runs = await list_flow_runs_with_states(states) asyncio.run(bulk_cancel_flow_runs()) ``` ```python cancel_flows.py theme={null} import asyncio from prefect import get_client from prefect.client.schemas.filters import FlowRunFilter, FlowRunFilterState, FlowRunFilterStateName from prefect.client.schemas.objects import FlowRun, StateType async def list_flow_runs_with_states(states: list[str]) -> list[FlowRun]: async with get_client() as client: return await client.read_flow_runs( flow_run_filter=FlowRunFilter( state=FlowRunFilterState( name=FlowRunFilterStateName(any_=states) ) ) ) async def cancel_flow_runs(flow_runs: list[FlowRun]): async with get_client() as client: for idx, flow_run in enumerate(flow_runs): print(f"[{idx + 1}] Cancelling flow run '{flow_run.name}' with ID '{flow_run.id}'") state_updates: dict[str, str] = {} state_updates.setdefault("name", "Cancelled") state_updates.setdefault("type", StateType.CANCELLED) state = flow_run.state.copy(update=state_updates) await client.set_flow_run_state(flow_run.id, state, force=True) async def bulk_cancel_flow_runs(): states = ["Pending", "Running", "Scheduled", 
"Late"] flow_runs = await list_flow_runs_with_states(states) while len(flow_runs) > 0: print(f"Cancelling {len(flow_runs)} flow runs\n") await cancel_flow_runs(flow_runs) flow_runs = await list_flow_runs_with_states(states) print("Done!") if __name__ == "__main__": asyncio.run(bulk_cancel_flow_runs()) ``` ### Query events with pagination Query historical events from the Prefect API with support for filtering and pagination. This is useful for analyzing past activity, debugging issues, or building custom monitoring tools. The following example queries events from the last hour and demonstrates how to paginate through results: ```python theme={null} from datetime import datetime, timedelta, timezone from prefect import get_client from prefect.events.filters import EventFilter, EventOccurredFilter async def query_recent_events(): async with get_client() as client: # query events from the last hour now = datetime.now(timezone.utc) event_filter = EventFilter( occurred=EventOccurredFilter( since=now - timedelta(hours=1), until=now, ) ) # get first page page = await client.read_events(filter=event_filter, limit=10) print(f"Total events: {page.total}") # iterate through all pages while page: for event in page.events: print(f"{event.occurred} - {event.event}") page = await page.get_next_page(client) ``` ```python query_events.py theme={null} import asyncio from datetime import datetime, timedelta, timezone from prefect import get_client from prefect.events.filters import EventFilter, EventOccurredFilter async def query_recent_events(): async with get_client() as client: # query events from the last hour now = datetime.now(timezone.utc) event_filter = EventFilter( occurred=EventOccurredFilter( since=now - timedelta(hours=1), until=now, ) ) # get first page with small limit to demonstrate pagination print("=== first page ===") event_page = await client.read_events(filter=event_filter, limit=5) print(f"total events: {event_page.total}") print(f"events on this page: 
{len(event_page.events)}")
        for event in event_page.events:
            print(f" {event.occurred} - {event.event}")
        print()

        # if there are more pages, fetch the next one
        second_page = await event_page.get_next_page(client)
        if second_page:
            print("=== second page ===")
            print(f"events on this page: {len(second_page.events)}")
            for event in second_page.events:
                print(f" {event.occurred} - {event.event}")
            print()

        # demonstrate iterating through all pages
        print("=== collecting all events ===")
        all_events = []
        page = await client.read_events(filter=event_filter, limit=5)
        page_count = 0
        while page:
            all_events.extend(page.events)
            page_count += 1
            page = await page.get_next_page(client)

        print(f"collected {len(all_events)} events across {page_count} pages")

if __name__ == "__main__":
    asyncio.run(query_recent_events())
```

### Create, read, or delete artifacts

Create, read, or delete artifacts programmatically through the [Prefect REST API](/v3/api-ref/rest-api/). With the Artifacts API, you can automate the creation and management of artifacts as part of your workflow.

For example, to read the five most recently created Markdown, table, and link artifacts, you can run the following:

```python fixture:mock_post_200 theme={null}
import requests

PREFECT_API_URL="https://api.prefect.cloud/api/accounts/abc/workspaces/xyz"
PREFECT_API_KEY="pnu_ghijk"

data = {
    "sort": "CREATED_DESC",
    "limit": 5,
    "artifacts": {
        "key": {
            "exists_": True
        }
    }
}

headers = {"Authorization": f"Bearer {PREFECT_API_KEY}"}
endpoint = f"{PREFECT_API_URL}/artifacts/filter"

response = requests.post(endpoint, headers=headers, json=data)
assert response.status_code == 200
for artifact in response.json():
    print(artifact)
```

If you don't require that a key exists, the query will also return results, which are a type of key-less artifact.

See the [Prefect REST API documentation](/v3/api-ref/rest-api/) on artifacts for more information.
# How to customize asset metadata Source: https://docs.prefect.io/v3/advanced/assets This guide covers how to enhance your assets with rich metadata, custom properties, and explicit dependency management beyond what's automatically inferred from your task graph. Both `@materialize` and `asset_deps` accept either a string referencing an asset key or a full `Asset` class instance. Using the `Asset` class is the way to provide additional metadata like names, descriptions, and ownership information beyond just the key. ## The Asset class While you can use simple string keys with the `@materialize` decorator, the `Asset` class provides more control over asset properties and metadata. Create `Asset` instances to add organizational context and improve discoverability. ### Asset initialization fields The `Asset` class accepts the following parameters: * **`key`** (required): A valid URI that uniquely identifies your asset. This is the only required field. * **`properties`** (optional): An `AssetProperties` instance containing metadata about the asset. ```python theme={null} from prefect.assets import Asset, AssetProperties # Simple asset with just a key basic_asset = Asset(key="s3://my-bucket/data.csv") # Asset with full properties detailed_asset = Asset( key="s3://my-bucket/processed-data.csv", properties=AssetProperties( name="Processed Customer Data", description="Clean customer data with PII removed", owners=["data-team@company.com", "alice@company.com"], url="https://dashboard.company.com/datasets/customer-data" ) ) ``` Each of these fields will be displayed alongside the asset in the UI. **Interactive fields** The owners field can optionally reference user emails, as well as user and team handles within your Prefect Cloud workspace. The URL field becomes a clickable link when this asset is displayed. **Updates occur at runtime** Updates to asset metadata are always performed at workflow runtime whenever a materializing task is executed that references the asset. 
## Using assets in materializations Once you've defined your `Asset` instances, use them directly with the `@materialize` decorator: ```python theme={null} from prefect import flow from prefect.assets import Asset, AssetProperties, materialize detailed_asset = Asset( key="s3://my-bucket/processed-data.csv", properties=AssetProperties( name="Processed Customer Data", description="Clean customer data with PII removed", owners=["data-team@company.com", "alice@company.com"], url="https://dashboard.company.com/datasets/customer-data" ) ) properties = AssetProperties( name="Sales Analytics Dataset", description="This dataset contains daily sales figures by region along with customer segmentation data", owners=["analytics-team", "john.doe@company.com"], url="https://analytics.company.com/sales-dashboard" ) sales_asset = Asset( key="snowflake://warehouse/analytics/sales_summary", properties=properties ) @materialize(detailed_asset) def process_customer_data(): # Your processing logic here pass @materialize(sales_asset) def generate_sales_report(): # Your reporting logic here pass @flow def analytics_pipeline(): process_customer_data() generate_sales_report() ``` ## Adding runtime metadata Beyond static properties, you can add dynamic metadata during task execution. This is useful for tracking runtime information like row counts, processing times, or data quality metrics. ### Using `Asset.add_metadata()` The preferred approach is to use the `add_metadata()` method on your `Asset` instances. 
This prevents typos in asset keys and provides better IDE support: ```python theme={null} from prefect.assets import Asset, AssetProperties, materialize customer_data = Asset( key="s3://my-bucket/customer-data.csv", properties=AssetProperties( name="Customer Data", owners=["data-team@company.com"] ) ) @materialize(customer_data) def process_customers(): # Your processing logic result = perform_customer_processing() # Add runtime metadata customer_data.add_metadata({ "record_count": len(result), "processing_duration_seconds": 45.2, "data_quality_score": 0.95, "last_updated": "2024-01-15T10:30:00Z" }) return result ``` ### Using `add_asset_metadata()` utility Alternatively, you can use the `add_asset_metadata()` function, which requires specifying the asset key: ```python theme={null} from prefect.assets import materialize, add_asset_metadata @materialize("s3://my-bucket/processed-data.csv") def process_data(): result = perform_processing() add_asset_metadata( "s3://my-bucket/processed-data.csv", {"rows_processed": len(result), "processing_time": "2.5s"} ) return result ``` ### Accumulating metadata You can call metadata methods multiple times to accumulate information: ```python theme={null} @materialize(customer_data) def comprehensive_processing(): # First processing step raw_data = extract_data() customer_data.add_metadata({"raw_records": len(raw_data)}) # Second processing step cleaned_data = clean_data(raw_data) customer_data.add_metadata({ "cleaned_records": len(cleaned_data), "records_removed": len(raw_data) - len(cleaned_data) }) # Final step final_data = enrich_data(cleaned_data) customer_data.add_metadata({ "final_records": len(final_data), "enrichment_success_rate": 0.92 }) return final_data ``` ## Explicit asset dependencies While Prefect automatically infers dependencies from your task graph, you can explicitly declare asset relationships using the `asset_deps` parameter. 
This is useful when: * The task graph doesn't fully capture your data dependencies due to dynamic execution rules * You need to reference assets that aren't directly passed between tasks * You want to be explicit about critical dependencies for documentation purposes ### Hard-coding dependencies Use `asset_deps` to explicitly declare which assets your materialization depends on. You can reference assets by key string or by full `Asset` instance: ```python theme={null} from prefect import flow from prefect.assets import Asset, AssetProperties, materialize # Define your assets raw_data_asset = Asset(key="s3://my-bucket/raw-data.csv") config_asset = Asset(key="s3://my-bucket/processing-config.json") processed_asset = Asset(key="s3://my-bucket/processed-data.csv") @materialize(raw_data_asset) def extract_raw_data(): pass @materialize( processed_asset, asset_deps=[raw_data_asset, config_asset] # Explicit dependencies ) def process_data(): # This function depends on both raw data and config # even if they're not directly passed as parameters pass @flow def explicit_dependencies_flow(): extract_raw_data() process_data() # Explicitly depends on raw_data_asset and config_asset ``` ### Mixing inferred and explicit dependencies You can combine task graph inference with explicit dependencies: ```python theme={null} from prefect import flow from prefect.assets import Asset, materialize upstream_asset = Asset(key="s3://my-bucket/upstream.csv") config_asset = Asset(key="s3://my-bucket/config.json") downstream_asset = Asset(key="s3://my-bucket/downstream.csv") @materialize(upstream_asset) def create_upstream(): return "upstream_data" @materialize( downstream_asset, asset_deps=[config_asset] # Explicit dependency on config ) def create_downstream(upstream_data): # Inferred dependency on upstream_asset # This asset depends on: # 1. upstream_asset (inferred from task graph) # 2. 
config_asset (explicit via asset_deps) pass @flow def mixed_dependencies(): data = create_upstream() create_downstream(data) ``` ### Best practices for `asset_deps` references Use *string keys* when referencing assets that are materialized by other Prefect workflows. This avoids duplicate metadata definitions and lets the materializing workflow be the source of truth: ```python theme={null} from prefect.assets import materialize # Good: Reference by key when another workflow materializes the asset @materialize( "s3://my-bucket/final-report.csv", asset_deps=["s3://my-bucket/data-from-other-workflow.csv"] # String key ) def create_report(): pass ``` Use *full `Asset` instances* when referencing assets that are completely external to Prefect. This provides metadata about external systems that Prefect wouldn't otherwise know about: ```python theme={null} from prefect.assets import Asset, AssetProperties, materialize # Good: Full Asset for external systems external_database = Asset( key="postgres://prod-db/public/users", properties=AssetProperties( name="Production Users Table", description="Main user database maintained by the platform team", owners=["platform-team@company.com"], url="https://internal-db-dashboard.com/users" ) ) @materialize( "s3://my-bucket/user-analytics.csv", asset_deps=[external_database] # Full Asset for external system ) def analyze_users(): pass ``` ## Updating asset properties Asset properties should have one source of truth to avoid conflicts. When you materialize an asset with properties, those properties perform a complete overwrite of all metadata fields for that asset. **Important** Any `Asset` instance with properties will completely replace all existing metadata. Partial updates are not supported - you must provide all the metadata you want to preserve. 
```python theme={null} from prefect.assets import Asset, AssetProperties, materialize # Initial materialization with full properties initial_asset = Asset( key="s3://my-bucket/evolving-data.csv", properties=AssetProperties( name="Evolving Dataset", description="Initial description", owners=["team-a@company.com"], url="https://dashboard.company.com/evolving-data" ) ) @materialize(initial_asset) def initial_creation(): pass # Later materialization - OVERWRITES all properties updated_asset = Asset( key="s3://my-bucket/evolving-data.csv", properties=AssetProperties( name="Evolving Dataset", # Must include to preserve description="Updated description with new insights", # Updated owners=["team-a@company.com"], # Must include to preserve # url is now None because it wasn't included ) ) @materialize(updated_asset) def update_dataset(): pass # The final asset will have: # - name: "Evolving Dataset" (preserved) # - description: "Updated description with new insights" (updated) # - owners: ["team-a@company.com"] (preserved) # - url: None (lost because not included in update) ``` **Best practice** Designate one workflow as the authoritative source for each asset's metadata. Other workflows that reference the asset should use string keys only to avoid conflicting metadata definitions. ## Further Reading * [Learn about asset health and asset events](/v3/concepts/assets) # How to deploy a web application powered by background tasks Source: https://docs.prefect.io/v3/advanced/background-tasks Learn how to background heavy tasks from a web application to dedicated infrastructure. This example demonstrates how to use [background tasks](/v3/concepts/tasks#background-tasks) in the context of a web application using Prefect for task submission, execution, monitoring, and result storage. We'll build out an application using FastAPI to offer API endpoints to our clients, and task workers to execute the background tasks these endpoints defer. 
Refer to the [examples repository](https://github.com/PrefectHQ/examples/tree/main/apps/background-tasks) for the complete example's source code. This pattern is useful when you need to perform operations that are too long for a standard web request-response cycle, such as data processing, sending emails, or interacting with external APIs that might be slow. ## Overview This example will build out: * `@prefect.task` definitions representing the work you want to run in the background * A `fastapi` application providing API endpoints to: * Receive task parameters via `POST` request and submit the task to Prefect with `.delay()` * Allow polling for the task's status via a `GET` request using its `task_run_id` * A `Dockerfile` to build a multi-stage image for the web app, Prefect server and task worker(s) * A `compose.yaml` to manage lifecycles of the web app, Prefect server and task worker(s) ```bash theme={null} ├── Dockerfile ├── README.md ├── compose.yaml ├── pyproject.toml ├── src │ └── foo │ ├── __init__.py │ ├── _internal/*.py │ ├── api.py │ └── task.py ``` You can follow along by cloning the [examples repository](https://github.com/PrefectHQ/examples) or instead use [`uv`](https://docs.astral.sh/uv/getting-started/installation/) to bootstrap your own new project: ```bash theme={null} uv init --lib foo uv add prefect marvin ``` This example application is structured as a library with a `src/foo` directory for portability and organization. This example does ***not*** require: * Prefect Cloud * creating a Prefect Deployment * creating a work pool ## Useful things to remember * You can call any Python code from your task definitions (including other flows and tasks!) * Prefect [Results](/v3/concepts/caching) allow you to save/serialize the `return` value of your task definitions to your result storage (e.g. a local directory, S3, GCS, etc), enabling [caching](/v3/concepts/caching) and [idempotency](/v3/advanced/transactions). 
## Defining the background task The core of the background processing is a Python function decorated with `@prefect.task`. This marks the function as a unit of work that Prefect can manage (e.g. observe, cache, retry, etc.) ```python src/foo/task.py theme={null} from typing import Any, TypeVar import marvin from prefect import task, Task from prefect.cache_policies import INPUTS, TASK_SOURCE from prefect.states import State from prefect.task_worker import serve from prefect.client.schemas.objects import TaskRun T = TypeVar("T") def _print_output(task: Task, task_run: TaskRun, state: State[T]): result = state.result() print(f"result type: {type(result)}") print(f"result: {result!r}") @task(cache_policy=INPUTS + TASK_SOURCE, on_completion=[_print_output]) async def create_structured_output(data: Any, target: type[T], instructions: str) -> T: return await marvin.cast_async( data, target=target, instructions=instructions, ) def main(): serve(create_structured_output) if __name__ == "__main__": main() ``` Key details: * `@task`: Decorator to define our task we want to run in the background. * `cache_policy`: Caching based on `INPUTS` and `TASK_SOURCE`. * `serve(create_structured_output)`: This function starts a task worker subscribed to newly `delay()`ed task runs. ## Building the FastAPI application The FastAPI application provides API endpoints to trigger the background task and check its status. 
```python src/foo/api.py theme={null} import logging from uuid import UUID from fastapi import Depends, FastAPI, Response from fastapi.responses import JSONResponse from foo._internal import get_form_data, get_task_result, StructuredOutputRequest from foo.task import create_structured_output logger = logging.getLogger(__name__) app = FastAPI() @app.post("/tasks", status_code=202) async def submit_task( form_data: StructuredOutputRequest = Depends(get_form_data), ) -> JSONResponse: """Submit a task to Prefect for background execution.""" future = create_structured_output.delay( form_data.payload, target=form_data.target_type, instructions=form_data.instructions, ) logger.info(f"Submitted task run: {future.task_run_id}") return {"task_run_id": str(future.task_run_id)} @app.get("/tasks/{task_run_id}/status") async def get_task_status_api(task_run_id: UUID) -> Response: """Checks the status of a submitted task run.""" status, data = await get_task_result(task_run_id) response_data = {"task_run_id": str(task_run_id), "status": status} http_status_code = 200 if status == "completed": response_data["result"] = data elif status == "error": response_data["message"] = data # Optionally set a different HTTP status for errors return JSONResponse(response_data, status_code=http_status_code) ``` The `get_task_result` helper function (in `src/foo/_internal/_prefect.py`) uses the Prefect Python client to interact with the Prefect API: ```python src/foo/_internal/_prefect.py theme={null} from typing import Any, Literal, cast from uuid import UUID from prefect.client.orchestration import get_client from prefect.client.schemas.objects import TaskRun from prefect.logging import get_logger logger = get_logger(__name__) Status = Literal["completed", "pending", "error"] def _any_task_run_result(task_run: TaskRun) -> Any: try: return cast(Any, task_run.state.result(_sync=True)) # type: ignore except Exception as e: logger.warning(f"Could not retrieve result for task run {task_run.id}: 
{e}") return None async def get_task_result(task_run_id: UUID) -> tuple[Status, Any]: """Get task result or status. Returns: tuple: (status, data) status: "completed", "pending", or "error" data: the result if completed, error message if error, None if pending """ try: async with get_client() as client: task_run = await client.read_task_run(task_run_id) if not task_run.state: return "pending", None if task_run.state.is_completed(): try: result = _any_task_run_result(task_run) return "completed", result except Exception as e: logger.warning( f"Could not retrieve result for completed task run {task_run_id}: {e}" ) return "completed", "" elif task_run.state.is_failed(): try: error_result = _any_task_run_result(task_run) error_message = ( str(error_result) if error_result else "Task failed without specific error message." ) return "error", error_message except Exception as e: logger.warning( f"Could not retrieve error result for failed task run {task_run_id}: {e}" ) return "error", "" else: return "pending", None except Exception as e: logger.error(f"Error checking task status for {task_run_id}: {e}") return "error", f"Failed to check task status: {str(e)}" ``` This function fetches the `TaskRun` object from the API and checks its `state` to determine if it's `Completed`, `Failed`, or still `Pending`/`Running`. If completed, it attempts to retrieve the result using `task_run.state.result()`. If failed, it tries to get the error message. ## Building the Docker Image A multi-stage `Dockerfile` is used to create optimized images for each service (Prefect server, task worker, and web API). This approach helps keep image sizes small and separates build dependencies from runtime dependencies. 
```dockerfile Dockerfile theme={null} # Stage 1: Base image with Python and uv FROM --platform=linux/amd64 ghcr.io/astral-sh/uv:python3.12-bookworm-slim as base WORKDIR /app ENV UV_SYSTEM_PYTHON=1 ENV PATH="/root/.local/bin:$PATH" COPY pyproject.toml uv.lock* ./ # Note: We install all dependencies needed for all stages here. # A more optimized approach might separate dependencies per stage. RUN --mount=type=cache,target=/root/.cache/uv \ uv pip install --system -r pyproject.toml COPY src/ /app/src FROM base as server CMD ["prefect", "server", "start"] # --- Task Worker Stage --- # FROM base as task # Command to start the task worker by running the task script # This script should call `prefect.task_worker.serve(...)` CMD ["python", "src/foo/task.py"] # --- API Stage --- # FROM base as api # Command to start the FastAPI server using uvicorn CMD ["uvicorn", "src.foo.api:app", "--host", "0.0.0.0", "--port", "8000"] ``` * **Base Stage (`base`)**: Sets up Python, `uv`, installs all dependencies from `pyproject.toml` into a base layer to make use of Docker caching, and copies the source code. * **Server Stage (`server`)**: Builds upon the `base` stage. Sets the default command (`CMD`) to start the Prefect server. * **Task Worker Stage (`task`)**: Builds upon the `base` stage. Sets the `CMD` to run the `src/foo/task.py` script, which is expected to contain the `serve()` call for the task(s). * **API Stage (`api`)**: Builds upon the `base` stage. Sets the `CMD` to start the FastAPI application using `uvicorn`. The `compose.yaml` file then uses the `target` build argument to specify which of these final stages (`server`, `task`, `api`) to use for each service container. ## Declaring the application services We use `compose.yaml` to define and run the multi-container application, managing the lifecycles of the FastAPI web server, the Prefect API server, database and task worker(s). ```yaml compose.yaml theme={null} services: prefect-server: build: context: . 
target: server ports: - "4200:4200" volumes: - prefect-data:/root/.prefect # Persist Prefect DB environment: # Allow connections from other containers PREFECT_SERVER_API_HOST: 0.0.0.0 # Task Worker task: build: context: . target: task deploy: replicas: 1 # task workers are safely horizontally scalable (think redis stream consumer groups) volumes: # Mount storage for results - ./task-storage:/task-storage depends_on: prefect-server: condition: service_started environment: PREFECT_API_URL: http://prefect-server:4200/api PREFECT_LOCAL_STORAGE_PATH: /task-storage PREFECT_LOGGING_LOG_PRINTS: "true" PREFECT_RESULTS_PERSIST_BY_DEFAULT: "true" MARVIN_ENABLE_DEFAULT_PRINT_HANDLER: "false" OPENAI_API_KEY: ${OPENAI_API_KEY} develop: # Optionally watch for code changes for development watch: - action: sync path: . target: /app ignore: - .venv/ - task-storage/ - action: rebuild path: uv.lock api: build: context: . target: api volumes: # Mount storage for results - ./task-storage:/task-storage ports: - "8000:8000" depends_on: task: condition: service_started prefect-server: condition: service_started environment: PREFECT_API_URL: http://prefect-server:4200/api PREFECT_LOCAL_STORAGE_PATH: /task-storage develop: # Optionally watch for code changes for development watch: - action: sync path: . target: /app ignore: - .venv/ - task-storage/ - action: rebuild path: uv.lock volumes: # Named volumes for data persistence prefect-data: {} task-storage: {} ``` In a production use-case, you'd likely want to: * write a `Dockerfile` for each service * add a `postgres` service and [configure it as the Prefect database](/v3/manage/server/index#quickstart%3A-configure-a-postgresql-database-with-docker). * remove the hot-reloading configuration in the `develop` section - **`prefect-server`**: Runs the Prefect API server and UI. * `build`: Uses the multi-stage `Dockerfile` shown above, targeting the `server` stage. * `ports`: Exposes the Prefect API/UI on port `4200`. 
* `volumes`: Uses a named volume `prefect-data` to persist the Prefect SQLite database (`/root/.prefect/prefect.db`) across container restarts. * `PREFECT_SERVER_API_HOST=0.0.0.0`: Makes the API server listen on all interfaces within the Docker network, allowing the `task` and `api` services to connect. - **`task`**: Runs the Prefect task worker process (executing `python src/foo/task.py` which calls `serve`). * `build`: Uses the `task` stage from the `Dockerfile`. * `depends_on`: Ensures the `prefect-server` service is started before this service attempts to connect. * `PREFECT_API_URL`: Crucial setting that tells the worker where to find the Prefect API to poll for submitted task runs. * `PREFECT_LOCAL_STORAGE_PATH=/task-storage`: Configures the worker to store task run results in the `/task-storage` directory inside the container. This path is mounted to the host using the `task-storage` named volume via `volumes: - ./task-storage:/task-storage` (or just `task-storage:` if using a named volume without a host path binding). * `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true`: Tells Prefect tasks to automatically save their results using the configured storage (defined by `PREFECT_LOCAL_STORAGE_PATH` in this case). * `PREFECT_LOGGING_LOG_PRINTS=true`: Configures the Prefect logger to capture output from `print()` statements within tasks. * `OPENAI_API_KEY=${OPENAI_API_KEY}`: Passes secrets needed by the task code from the host environment (via a `.env` file loaded by Docker Compose) into the container's environment. - **`api`**: Runs the FastAPI web application. * `build`: Uses the `api` stage from the `Dockerfile`. * `depends_on`: Waits for the `prefect-server` (required for submitting tasks and checking status) and optionally the `task` worker. * `PREFECT_API_URL`: Tells the FastAPI application where to send `.delay()` calls and status check requests. 
* `PREFECT_LOCAL_STORAGE_PATH`: May be needed if the API itself needs to directly read result files (though typically fetching results via `task_run.state.result()` is preferred). - **`volumes`**: Defines named volumes (`prefect-data`, `task-storage`) to persist data generated by the containers. ## Running this example Assuming you have obtained the code (either by cloning the repository or using `uv init` as described previously) and are in the project directory: 1. **Prerequisites:** Ensure Docker Desktop (or equivalent) with `docker compose` support is running. 2. **Build and Run Services:** This example's task uses [marvin](https://github.com/PrefectHQ/marvin), which (by default) requires an OpenAI API key. Provide it as an environment variable when starting the services: ```bash theme={null} OPENAI_API_KEY= docker compose up --build --watch ``` This command will: * `--build`: Build the container images if they don't exist or if the Dockerfile/context has changed. * `--watch`: Watch for changes in the project source code and automatically sync/rebuild services (useful for development). * Add `--detach` or `-d` to run the containers in the background. 3. **Access Services:** * If you cloned the existing example, check out the basic [htmx](https://htmx.org/) UI at [http://localhost:8000](http://localhost:8000) * FastAPI docs: [http://localhost:8000/docs](http://localhost:8000/docs) * Prefect UI (for observing task runs): [http://localhost:4200](http://localhost:4200) ### Cleaning up ```bash theme={null} docker compose down # also remove the named volumes docker compose down -v ``` ## Next Steps This example provides a repeatable pattern for integrating Prefect-managed background tasks with any python web application. You can: * Explore the [background tasks examples repository](https://github.com/PrefectHQ/prefect-background-task-examples) for more examples. * Adapt `src/**/*.py` to define and submit your specific web app and background tasks. 
* Configure Prefect settings (environment variables in `compose.yaml`) further, for example, using different result storage or logging levels. * Deploy these services to cloud infrastructure using managed container services. # How to customize caching behavior Source: https://docs.prefect.io/v3/advanced/caching ### Separate cache key storage from result storage To store cache records separately from the cached value, you can configure a cache policy to use a custom storage location. Here's an example of a cache policy configured to store cache records in a local directory: ```python theme={null} from prefect import task from prefect.cache_policies import TASK_SOURCE, INPUTS cache_policy = (TASK_SOURCE + INPUTS).configure(key_storage="/path/to/cache/storage") @task(cache_policy=cache_policy) def my_cached_task(x: int): return x + 42 ``` Cache records will be stored in the specified directory while the persisted results will continue to be stored in `~/prefect/storage`. To store cache records in a remote object store such as S3, pass a storage block instead: ```python theme={null} from prefect import task from prefect.cache_policies import TASK_SOURCE, INPUTS from prefect_aws import S3Bucket, AwsCredentials s3_bucket = S3Bucket( credentials=AwsCredentials( aws_access_key_id="my-access-key-id", aws_secret_access_key="my-secret-access-key", ), bucket_name="my-bucket", ) # save the block to ensure it is available across machines s3_bucket.save("my-cache-records-bucket") cache_policy = (TASK_SOURCE + INPUTS).configure(key_storage=s3_bucket) @task(cache_policy=cache_policy) def my_cached_task(x: int): return x + 42 ``` Storing cache records in a remote object store allows you to share cache records across multiple machines. ### Isolate cache access You can control concurrent access to cache records by setting the `isolation_level` parameter on the cache policy. Prefect supports two isolation levels: `READ_COMMITTED` and `SERIALIZABLE`. 
By default, cache records operate with a `READ_COMMITTED` isolation level. This guarantees that reading a cache record will see the latest committed cache value, but allows multiple executions of the same task to occur simultaneously. Consider the following example: ```python theme={null} from prefect import task from prefect.cache_policies import INPUTS import threading cache_policy = INPUTS @task(cache_policy=cache_policy) def my_task_version_1(x: int): print("my_task_version_1 running") return x + 42 @task(cache_policy=cache_policy) def my_task_version_2(x: int): print("my_task_version_2 running") return x + 43 if __name__ == "__main__": thread_1 = threading.Thread(target=my_task_version_1, args=(1,)) thread_2 = threading.Thread(target=my_task_version_2, args=(1,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` When running this script, both tasks will execute in parallel and perform work despite both tasks using the same cache key. For stricter isolation, you can use the `SERIALIZABLE` isolation level. This ensures that only one execution of a task occurs at a time for a given cache record via a locking mechanism. When setting `isolation_level` to `SERIALIZABLE`, you must also provide a `lock_manager` that implements locking logic for your system. 
Here's an updated version of the previous example that uses `SERIALIZABLE` isolation: ```python theme={null} import threading from prefect import task from prefect.cache_policies import INPUTS from prefect.locking.memory import MemoryLockManager from prefect.transactions import IsolationLevel cache_policy = INPUTS.configure( isolation_level=IsolationLevel.SERIALIZABLE, lock_manager=MemoryLockManager(), ) @task(cache_policy=cache_policy) def my_task_version_1(x: int): print("my_task_version_1 running") return x + 42 @task(cache_policy=cache_policy) def my_task_version_2(x: int): print("my_task_version_2 running") return x + 43 if __name__ == "__main__": thread_1 = threading.Thread(target=my_task_version_1, args=(2,)) thread_2 = threading.Thread(target=my_task_version_2, args=(2,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` In this example, only one of the tasks will run and the other will use the cached value. **Locking in a distributed setting** To manage locks in a distributed setting, you will need to use a storage system for locks that is accessible by all of your execution infrastructure. We recommend using the `RedisLockManager` provided by `prefect-redis` in conjunction with a shared Redis instance: ```python theme={null} from prefect import task from prefect.cache_policies import TASK_SOURCE, INPUTS from prefect.transactions import IsolationLevel from prefect_redis import RedisLockManager cache_policy = (INPUTS + TASK_SOURCE).configure( isolation_level=IsolationLevel.SERIALIZABLE, lock_manager=RedisLockManager(host="my-redis-host"), ) @task(cache_policy=cache_policy) def my_cached_task(x: int): return x + 42 ``` ### Coordinate caching across multiple tasks To coordinate cache writes across tasks, you can run multiple tasks within a single [*transaction*](/v3/develop/transactions). 
```python theme={null} from prefect import task, flow from prefect.transactions import transaction @task(cache_key_fn=lambda *args, **kwargs: "static-key-1") def load_data(): return "some-data" @task(cache_key_fn=lambda *args, **kwargs: "static-key-2") def process_data(data, fail): if fail: raise RuntimeError("Error! Abort!") return len(data) @flow def multi_task_cache(fail: bool = True): with transaction(): data = load_data() process_data(data=data, fail=fail) ``` When this flow is run with the default parameter values it will fail on the `process_data` task after the `load_data` task has succeeded. However, because caches are only written to when a transaction is *committed*, the `load_data` task will *not* write a result to its cache key location until the `process_data` task succeeds as well. On a subsequent run with `fail=False`, both tasks will be re-executed and the results will be cached. ### Handling Non-Serializable Objects You may have task inputs that can't (or shouldn't) be serialized as part of the cache key. There are two direct approaches to handle this, both of which are based on the same idea. You can **adjust the serialization logic** to only serialize certain properties of an input: 1. Using a custom cache key function: ```python theme={null} from prefect import flow, task from prefect.cache_policies import CacheKeyFnPolicy, RUN_ID from prefect.context import TaskRunContext from pydantic import BaseModel, ConfigDict class NotSerializable: def __getstate__(self): raise TypeError("NooOoOOo! 
I will not be serialized!") class ContainsNonSerializableObject(BaseModel): model_config = ConfigDict(arbitrary_types_allowed=True) name: str bad_object: NotSerializable def custom_cache_key_fn(context: TaskRunContext, parameters: dict) -> str: return parameters["some_object"].name @task(cache_policy=CacheKeyFnPolicy(cache_key_fn=custom_cache_key_fn) + RUN_ID) def use_object(some_object: ContainsNonSerializableObject) -> str: return f"Used {some_object.name}" @flow def demo_flow(): obj = ContainsNonSerializableObject(name="test", bad_object=NotSerializable()) state = use_object(obj, return_state=True) # Not cached! assert state.name == "Completed" other_state = use_object(obj, return_state=True) # Cached! assert other_state.name == "Cached" assert state.result() == other_state.result() ``` 2. Using Pydantic's [custom serialization](https://docs.pydantic.dev/latest/concepts/serialization/#custom-serializers) on your input types: ```python theme={null} from pydantic import BaseModel, ConfigDict, model_serializer from prefect import flow, task from prefect.cache_policies import INPUTS, RUN_ID class NotSerializable: def __getstate__(self): raise TypeError("NooOoOOo! I will not be serialized!") class ContainsNonSerializableObject(BaseModel): model_config = ConfigDict(arbitrary_types_allowed=True) name: str bad_object: NotSerializable @model_serializer def ser_model(self) -> dict: """Only serialize the name, not the problematic object""" return {"name": self.name} @task(cache_policy=INPUTS + RUN_ID) def use_object(some_object: ContainsNonSerializableObject) -> str: return f"Used {some_object.name}" @flow def demo_flow(): some_object = ContainsNonSerializableObject( name="test", bad_object=NotSerializable() ) state = use_object(some_object, return_state=True) # Not cached! assert state.name == "Completed" other_state = use_object(some_object, return_state=True) # Cached! 
assert other_state.name == "Cached" assert state.result() == other_state.result() ``` Choose the approach that best fits your needs: * Use Pydantic models when you want consistent serialization across your application * Use custom cache key functions when you need different caching logic for different tasks # How to cancel running workflows Source: https://docs.prefect.io/v3/advanced/cancel-workflows You can cancel a scheduled or in-progress flow run from the CLI, UI, REST API, or Python client. When requesting cancellation, the flow run moves to a "Cancelling" state. If the deployment is associated with a work pool, then the worker monitors the state of flow runs and detects that cancellation is requested. The worker then sends a signal to the flow run infrastructure, requesting termination of the run. If the run does not terminate after a grace period (default of 30 seconds), the infrastructure is killed, ensuring the flow run exits. **A deployment is required** Flow run cancellation requires that the flow run is associated with a deployment. A monitoring process must be running to enforce the cancellation. Inline nested flow runs (those created without `run_deployment`), cannot be cancelled without cancelling the parent flow run. To cancel a nested flow run independent of its parent flow run, we recommend deploying it separately and starting it using the [run\_deployment](/v3/deploy/index) function. Cancellation is resilient to restarts of Prefect workers. To enable this, we attach metadata about the created infrastructure to the flow run. Internally, this is referred to as the `infrastructure_pid` or infrastructure identifier. Generally, this is composed of two parts: * Scope: identifying where the infrastructure is running. * ID: a unique identifier for the infrastructure within the scope. The scope ensures that Prefect does not kill the wrong infrastructure. 
For example, workers running on multiple machines may have overlapping process IDs but should not have a matching scope. The identifiers for infrastructure types are:

* Processes: the machine hostname and the PID.
* Docker containers: the Docker API URL and container ID.
* Kubernetes jobs: the Kubernetes cluster name and the job name.

While the cancellation process is robust, there are a few issues that can occur:

* If the infrastructure for the flow run does not support cancellation, cancellation will not work.
* If the identifier scope does not match when attempting to cancel a flow run, the worker cannot cancel the flow run. Another worker may attempt cancellation.
* If the infrastructure associated with the run cannot be found or has already been killed, the worker marks the flow run as cancelled.
* If the `infrastructure_pid` is missing, the flow run is marked as cancelled but cancellation cannot be enforced.
* If the worker runs into an unexpected error during cancellation, the flow run may or may not be cancelled, depending on where the error occurred. The worker will try again to cancel the flow run. Another worker may attempt cancellation.

### Cancel through the CLI

From the command line in your execution environment, you can cancel a flow run by using the `prefect flow-run cancel` CLI command, passing the ID of the flow run:

```bash theme={null}
prefect flow-run cancel 'a55a4804-9e3c-4042-8b59-b3b6b7618736'
```

### Cancel through the UI

Navigate to the flow run's detail page and click `Cancel` in the upper right corner.

# How to configure network access for restricted environments

Source: https://docs.prefect.io/v3/advanced/configure-network-access

Learn which endpoints and ports to allow for Prefect Cloud and self-hosted Prefect server in restricted network environments.

If your execution environment restricts outbound network access, you must allow traffic to specific endpoints for Prefect to function.
This page lists the required and optional endpoints for both Prefect Cloud and self-hosted Prefect server deployments.

## Prefect Cloud endpoints

Workers, flow runs, and the Prefect CLI need outbound HTTPS access (TCP port 443) to communicate with Prefect Cloud.

### Required endpoints

| Endpoint            | Purpose                                           |
| ------------------- | ------------------------------------------------- |
| `api.prefect.cloud` | Prefect Cloud REST API and WebSocket connections  |
| `app.prefect.cloud` | Prefect Cloud UI (browser access for users)       |
| `auth.workos.com`   | Authentication provider for login and SSO         |

The IP addresses behind `api.prefect.cloud` are dynamic. Configure firewall rules by domain name (FQDN) rather than by IP address. If your firewall only supports IP-based rules, route traffic through a proxy or use [PrivateLink](/v3/how-to-guides/cloud/manage-users/secure-access-by-private-link) instead.

### Optional endpoints

| Endpoint                        | Purpose                                          | How to disable                                             |
| ------------------------------- | ------------------------------------------------ | ---------------------------------------------------------- |
| `api2.amplitude.com`            | SDK anonymous usage telemetry                    | Set `DO_NOT_TRACK=1` on the client                         |
| `sens-o-matic.prefect.io`       | Self-hosted server anonymous telemetry heartbeat | Set `PREFECT_SERVER_ANALYTICS_ENABLED=false` on the server |
| `api.github.com` / `github.com` | Authentication via GitHub social login           | Not needed if you use SSO or email-based login             |
| `ocsp.pki.goog`                 | TLS certificate revocation checks (OCSP)         | Cannot be disabled; required by TLS libraries              |

Blocking optional telemetry endpoints may produce warning messages in logs but does not affect operation. See [Telemetry](/v3/concepts/telemetry) for details on what data is collected and how to opt out.
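If you generate firewall or proxy configuration programmatically, the tables above reduce to a small FQDN allowlist. Here is a minimal sketch; the helper function and rule format are illustrative only, not part of Prefect:

```python theme={null}
# Illustrative helper: assemble the Prefect Cloud FQDN allowlist from the
# tables above. The function name and rule format are hypothetical.
REQUIRED_FQDNS = ["api.prefect.cloud", "app.prefect.cloud", "auth.workos.com"]
OPTIONAL_FQDNS = [
    "api2.amplitude.com",       # SDK telemetry; disable with DO_NOT_TRACK=1
    "sens-o-matic.prefect.io",  # server telemetry heartbeat
    "api.github.com",           # GitHub social login
    "github.com",
    "ocsp.pki.goog",            # OCSP checks; required by TLS libraries
]


def fqdn_allowlist(include_optional: bool = True) -> list[str]:
    """Return the domains to allow for outbound HTTPS (TCP port 443)."""
    return REQUIRED_FQDNS + (OPTIONAL_FQDNS if include_optional else [])


for fqdn in fqdn_allowlist(include_optional=False):
    print(f"allow tcp/443 to {fqdn}")
```

Because the required list is FQDN-based rather than IP-based, regenerating rules like this stays correct even as the IP addresses behind the domains change.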
### Additional endpoints for your workflows

Depending on your deployment, workers and flow runs may also need access to:

* **Code storage**: GitHub, GitLab, Bitbucket, S3, GCS, or Azure Blob Storage endpoints where your flow code is stored
* **Container registries**: Docker Hub, Amazon ECR, Google Artifact Registry, or other registries if your workers pull container images
* **Infrastructure APIs**: AWS, GCP, Azure, or Kubernetes API endpoints if your workers provision cloud infrastructure
* **PyPI or private package indexes**: if your flows install Python dependencies at runtime

## Self-hosted Prefect server endpoints

When running a self-hosted Prefect server, workers and the CLI need access to the server's API endpoint. No external Prefect-hosted endpoints are required for core operation.

| Endpoint                                                                                   | Purpose                              |
| ------------------------------------------------------------------------------------------ | ------------------------------------ |
| Your server's `PREFECT_API_URL` (for example, `https://prefect.internal.example.com/api`)  | Prefect server REST API              |
| `sens-o-matic.prefect.io` (optional)                                                       | Anonymous server telemetry heartbeat |
| `api2.amplitude.com` (optional)                                                            | SDK anonymous usage telemetry        |

Set `PREFECT_SERVER_ANALYTICS_ENABLED=false` on the server to disable the server heartbeat, and `DO_NOT_TRACK=1` on client processes to disable SDK telemetry.

## Configure a proxy

The Prefect client uses [`httpx`](https://www.python-httpx.org/) for HTTP requests. `httpx` respects standard proxy environment variables, so you can route Prefect traffic through a corporate proxy:

```bash theme={null}
export HTTPS_PROXY=http://proxy.example.com:8080
export SSL_CERT_FILE=/path/to/corporate-ca-bundle.crt
```

See the [GitHub Discussion on using Prefect Cloud with proxies](https://github.com/PrefectHQ/prefect/discussions/16175) for additional configuration examples.
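To confirm how your proxy variables will be interpreted before starting any Prefect process, you can inspect them with the Python standard library, which reads the same `HTTPS_PROXY`-style environment variables that `httpx` honors. The proxy URL below is a placeholder:

```python theme={null}
import os
import urllib.request

# Placeholder proxy URL -- substitute your corporate proxy.
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

# getproxies() reads the standard HTTP(S)_PROXY environment variables,
# the same ones httpx consults, so this previews what the client will use.
proxies = urllib.request.getproxies()
print(proxies["https"])  # http://proxy.example.com:8080
```

Run this in the same shell environment (or systemd unit, or container) where your worker will run, since proxy variables set elsewhere will not be inherited.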
## Verify connectivity To confirm that your environment can reach Prefect Cloud, run: ```bash theme={null} curl -s https://api.prefect.cloud/api/health ``` A successful response returns a health check JSON payload. If the request times out or is refused, check your firewall rules and proxy configuration. You can also verify your full Prefect configuration with: ```bash theme={null} prefect config view prefect cloud login ``` ## Enterprise options for strict environments For environments with strict egress controls, Prefect Cloud offers additional options: * **[PrivateLink](/v3/how-to-guides/cloud/manage-users/secure-access-by-private-link)**: Route API traffic through AWS or GCP private networking so it never traverses the public internet. * **[IP allowlisting](/v3/how-to-guides/cloud/manage-users/secure-access-by-ip-address)**: Restrict inbound access to Prefect Cloud APIs and UI to specific IP addresses or CIDR ranges. Contact your account manager or [sales@prefect.io](mailto:sales@prefect.io) for details on enterprise networking options. ## Next steps * [Connect to Prefect Cloud](/v3/how-to-guides/cloud/connect-to-cloud) * [Telemetry](/v3/concepts/telemetry) * [Secure access over PrivateLink](/v3/how-to-guides/cloud/manage-users/secure-access-by-private-link) * [Troubleshoot Prefect Cloud](/v3/how-to-guides/cloud/troubleshoot-cloud) # How to create custom blocks Source: https://docs.prefect.io/v3/advanced/custom-blocks ### Create a new block type To create a custom block type, define a class that subclasses `Block`. The `Block` base class builds on Pydantic's `BaseModel`, so you can declare custom fields just like a [Pydantic model](https://pydantic-docs.helpmanual.io/usage/models/#basic-model-usage). 
We've already seen an example of a `Cube` block that represents a cube and holds information about the length of each edge in inches: ```python theme={null} from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float Cube.register_type_and_schema() ``` ### Register custom blocks In addition to the `register_type_and_schema` method shown above, you can register blocks from a Python module with a CLI command: ```bash theme={null} prefect block register --module prefect_aws.credentials ``` This command is useful for registering all blocks found within a module in a [Prefect Integration library](/integrations/). Alternatively, if a custom block was created in a `.py` file, you can register the block with the CLI command: ```bash theme={null} prefect block register --file my_block.py ``` Block documents can now be created with the registered block schema. ### Secret fields All block values are encrypted before being stored. If you have values that you would not like visible in the UI or in logs, use the `SecretStr` field type provided by Pydantic to automatically obfuscate those values. You can use this capability for fields that store credentials such as passwords and API tokens. 
Here's an example of an `AwsCredentials` block that uses `SecretStr`: ```python theme={null} from typing import Optional from prefect.blocks.core import Block from pydantic import SecretStr class AwsCredentials(Block): aws_access_key_id: Optional[str] = None aws_secret_access_key: Optional[SecretStr] = None aws_session_token: Optional[str] = None profile_name: Optional[str] = None region_name: Optional[str] = None ``` Since `aws_secret_access_key` has the `SecretStr` type hint assigned to it, the value of that field is not exposed if the object is logged: ```python theme={null} aws_credentials_block = AwsCredentials( aws_access_key_id="AKIAJKLJKLJKLJKLJKLJK", aws_secret_access_key="secret_access_key" ) print(aws_credentials_block) # aws_access_key_id='AKIAJKLJKLJKLJKLJKLJK' aws_secret_access_key=SecretStr('**********') aws_session_token=None profile_name=None region_name=None ``` Prefect's `SecretDict` field type allows you to add a dictionary field to your block that automatically obfuscates values at all levels in the UI or in logs. This capability is useful for blocks where typing or structure of secret fields is not known until configuration time. 
Here's an example of a block that uses `SecretDict`:

```python theme={null}
from prefect.blocks.core import Block
from prefect.blocks.fields import SecretDict


class SystemConfiguration(Block):
    system_secrets: SecretDict
    system_variables: dict


system_configuration_block = SystemConfiguration(
    system_secrets={
        "password": "p@ssw0rd",
        "api_token": "token_123456789",
        "private_key": "",
    },
    system_variables={
        "self_destruct_countdown_seconds": 60,
        "self_destruct_countdown_stop_time": 7,
    },
)
```

`system_secrets` is obfuscated when `system_configuration_block` is displayed, but `system_variables` show up in plain-text:

```python theme={null}
print(system_configuration_block)
# SystemConfiguration(
#   system_secrets=SecretDict('{'password': '**********', 'api_token': '**********', 'private_key': '**********'}'),
#   system_variables={'self_destruct_countdown_seconds': 60, 'self_destruct_countdown_stop_time': 7}
# )
```

### Customize a block's display

You can set metadata fields on a block type's subclass to control how a block displays. Available metadata fields include:

| Property           | Description                                                                                                                                     |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `_block_type_name` | Display name of the block in the UI. Defaults to the class name.                                                                                |
| `_block_type_slug` | Unique slug used to reference the block type in the API. Defaults to a lowercase, dash-delimited version of the block type name.                |
| `_logo_url`        | URL pointing to an image that should be displayed for the block type in the UI. Defaults to `None`.                                             |
| `_description`     | Short description of block type. Defaults to docstring, if provided.                                                                            |
| `_code_example`    | Short code snippet shown in UI for how to load/use block type. Defaults to first example provided in the docstring of the class, if provided.   |

### Update custom `Block` types

Here's an example of how to add a `bucket_folder` field to your custom `S3Bucket` block; it represents the default path to read and write objects from (this field exists on [our implementation](https://github.com/PrefectHQ/prefect/blob/main/src/integrations/prefect-aws/prefect_aws/s3.py)).

Add the new field to the class definition:

```python theme={null}
from typing import Optional

from prefect.blocks.core import Block


class S3Bucket(Block):
    bucket_name: str
    credentials: AwsCredentials
    bucket_folder: Optional[str] = None
    ...
```

Then [register the updated block type](#register-blocks) with either your Prefect Cloud account or your self-hosted Prefect server instance.

If you have any existing blocks of this type that were created before the update and you'd prefer to not re-create them, migrate them to the new version of your block type by adding the missing values:

```python theme={null}
# Bypass Pydantic validation to allow your local Block class to load the old block version
my_s3_bucket_block = S3Bucket.load("my-s3-bucket", validate=False)

# Set the new field to an appropriate value
my_s3_bucket_block.bucket_folder = "my-default-bucket-path"

# Overwrite the old block values and update the expected fields on the block
my_s3_bucket_block.save("my-s3-bucket", overwrite=True)
```

# Customizing Base Job Templates

Source: https://docs.prefect.io/v3/advanced/customize-base-job-templates

Learn how to customize Kubernetes base job templates for work pools

This guide provides comprehensive examples for customizing the Kubernetes base job template. These examples demonstrate common configuration patterns for environment variables, secrets, resource limits, and image pull secrets.

## Understanding the Base Job Template Structure

The base job template uses a two-part structure:

1. **variables**: Define configurable parameters with defaults and descriptions
2.
**job\_configuration**: Reference variables using `{{ variable_name }}` syntax to apply them to the Kubernetes job manifest Variables defined in the `variables` section must be explicitly referenced in `job_configuration` using `{{ variable_name }}` syntax to take effect. If you customize the template and remove a variable reference from `job_configuration`, that variable's value will not be passed to the worker, even if it's defined in `variables`. ## Accessing the Base Job Template You can customize the base job template in two ways: 1. **Through the UI**: Navigate to your work pool → **Advanced** tab → Edit the JSON representation 2. **Through the CLI**: Get the default template to use as a starting point: ```bash theme={null} prefect work-pool get-default-base-job-template --type kubernetes ``` ## Common Configuration Patterns ### Environment Variables Configure environment variables to pass configuration to your flow runs: ```json theme={null} { "variables": { "env": { "title": "Environment Variables", "description": "Environment variables to set in the container", "default": {}, "type": "object", "additionalProperties": {"type": "string"} } }, "job_configuration": { "job_manifest": { "spec": { "template": { "spec": { "containers": [ { "name": "prefect-job", "env": "{{ env }}" } ] } } } } } } ``` ### Secret References Reference Kubernetes secrets to inject sensitive data: ```json theme={null} { "variables": { "secret_name": { "title": "Secret Name", "description": "Name of the Kubernetes secret containing credentials", "default": null, "type": "string" } }, "job_configuration": { "job_manifest": { "spec": { "template": { "spec": { "containers": [ { "name": "prefect-job", "envFrom": [ { "secretRef": { "name": "{{ secret_name }}" } } ] } ] } } } } } } ``` ### Image Pull Secrets Configure authentication for private container registries: ```json theme={null} { "variables": { "image_pull_secrets": { "title": "Image Pull Secrets", "description": "Names of Kubernetes 
secrets for pulling images from private registries", "default": [], "type": "array", "items": {"type": "string"} } }, "job_configuration": { "job_manifest": { "spec": { "template": { "spec": { "imagePullSecrets": "{{ image_pull_secrets }}" } } } } } } ``` ### Resource Limits and Requests Set CPU and memory resource constraints: ```json theme={null} { "variables": { "cpu_request": { "title": "CPU Request", "description": "CPU allocation to request for this pod", "default": "100m", "type": "string" }, "cpu_limit": { "title": "CPU Limit", "description": "Maximum CPU allocation for this pod", "default": "1000m", "type": "string" }, "memory_request": { "title": "Memory Request", "description": "Memory allocation to request for this pod", "default": "256Mi", "type": "string" }, "memory_limit": { "title": "Memory Limit", "description": "Maximum memory allocation for this pod", "default": "1Gi", "type": "string" } }, "job_configuration": { "job_manifest": { "spec": { "template": { "spec": { "containers": [ { "name": "prefect-job", "resources": { "requests": { "cpu": "{{ cpu_request }}", "memory": "{{ memory_request }}" }, "limits": { "cpu": "{{ cpu_limit }}", "memory": "{{ memory_limit }}" } } } ] } } } } } } ``` ## Combining Multiple Configurations These examples show individual configurations. In practice, you'll combine multiple configurations in a single base job template. Remember that any modifications replace the entire default configuration, so include all necessary fields when customizing. When combining configurations, merge the `variables` and `job_configuration` sections. 
For example, to combine environment variables with resource limits:

```json theme={null}
{
  "variables": {
    "env": {
      "title": "Environment Variables",
      "description": "Environment variables to set in the container",
      "default": {},
      "type": "object",
      "additionalProperties": {"type": "string"}
    },
    "cpu_request": {
      "title": "CPU Request",
      "description": "CPU allocation to request for this pod",
      "default": "100m",
      "type": "string"
    },
    "memory_request": {
      "title": "Memory Request",
      "description": "Memory allocation to request for this pod",
      "default": "256Mi",
      "type": "string"
    }
  },
  "job_configuration": {
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
            "containers": [
              {
                "name": "prefect-job",
                "env": "{{ env }}",
                "resources": {
                  "requests": {
                    "cpu": "{{ cpu_request }}",
                    "memory": "{{ memory_request }}"
                  }
                }
              }
            ]
          }
        }
      }
    }
  }
}
```

## Next Steps

* Learn more about [Kubernetes work pools](/v3/deploy/infrastructure-concepts/work-pools/)
* See [how to run flows on Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes/)
* Explore [overriding job variables](/v3/deploy/infrastructure-concepts/customize/)

# How to daemonize worker processes

Source: https://docs.prefect.io/v3/advanced/daemonize-processes

Learn how Prefect flow deployments enable configuring flows for scheduled and remote execution with workers.

When running workflow applications, it's helpful to create long-running processes that run at startup and are resilient to failure. This guide shows you how to set up a systemd service to create long-running Prefect processes that poll for scheduled flow runs, including how to:

* create a Linux user
* install and configure Prefect
* set up a systemd service for the Prefect worker or `.serve` process

## Prerequisites

* An environment with a Linux operating system with [systemd](https://systemd.io/) and Python 3.10 or later.
* A superuser account that can run `sudo` commands.
* A Prefect Cloud account, or an instance of a Prefect server running on your network.
If using an [AWS t2.micro EC2 instance](https://aws.amazon.com/ec2/instance-types/t2/) with an AWS Linux image, you can install Python and pip with `sudo yum install -y python3 python3-pip`.

## Steps

A systemd service is ideal for running a long-lived process on a Linux VM or physical Linux server. You will use systemd and learn how to automatically start a [Prefect worker](/v3/deploy/infrastructure-concepts/workers/) or long-lived [`serve` process](/v3/how-to-guides/deployment_infra/run-flows-in-local-processes) when Linux starts. This approach provides resilience by automatically restarting the process if it crashes.

### Step 1: Add a user

Create a user account on your Linux system for the Prefect process. You can run a worker or serve process as root, but it's best practice to create a dedicated user.

In a terminal, run:

```bash theme={null}
sudo useradd -m prefect
sudo passwd prefect
```

When prompted, enter a password for the `prefect` account.

Next, log in to the `prefect` account by running:

```bash theme={null}
sudo su prefect
```

### Step 2: Install Prefect

Run:

```bash theme={null}
pip3 install prefect
```

This guide assumes you are installing Prefect globally, rather than in a virtual environment. If running a systemd service in a virtual environment, change the executable path in the `ExecStart` line accordingly. For example, if using [venv](https://docs.python.org/3/library/venv.html), point `ExecStart` at the `prefect` application in the `bin` subdirectory of your virtual environment.

Next, set up your environment so the Prefect client knows which server to connect to.

If connecting to Prefect Cloud, follow [the instructions](/v3/how-to-guides/cloud/connect-to-cloud) to obtain an API key, and then run the following:

```bash theme={null}
prefect cloud login -k YOUR_API_KEY
```

When prompted, choose the Prefect workspace to log in to.
If connecting to a self-hosted Prefect server instance instead of a Prefect Cloud account, run the following command, substituting the IP address of your server: ```bash theme={null} prefect config set PREFECT_API_URL=http://your-prefect-server-IP:4200 ``` Run the `exit` command to sign out of the `prefect` Linux account. This command switches you back to your sudo-enabled account where you can run the commands in the next section. ### Step 3: Set up a systemd service See the section below if you are setting up a Prefect worker. Skip to the [next section](#setting-up-a-systemd-service-for-serve) if you are setting up a Prefect `.serve` process. #### Setting up a systemd service for a Prefect worker Move into the `/etc/systemd/system` folder and open a file for editing. We use the Vim text editor below. ```bash theme={null} cd /etc/systemd/system sudo vim my-prefect-service.service ``` ```txt my-prefect-service.service theme={null} [Unit] Description=Prefect worker [Service] User=prefect WorkingDirectory=/home ExecStart=prefect worker start --pool YOUR_WORK_POOL_NAME Restart=always [Install] WantedBy=multi-user.target ``` Make sure you substitute your own work pool name. #### Setting up a systemd service for `.serve` Copy your flow entrypoint Python file and any other files needed for your flow to run into the `/home` directory (or the directory of your choice). Here's a basic example flow: ```python my_file.py theme={null} from prefect import flow @flow(log_prints=True) def say_hi(): print("Hello!") if __name__=="__main__": say_hi.serve(name="served and daemonized deployment") ``` To make changes to your flow code without restarting your process, push your code to git-based cloud storage (GitHub, BitBucket, GitLab) and use `flow.from_source().serve()`, as in the example below. 
```python my_remote_flow_code_file.py theme={null}
from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/org/repo.git",
        entrypoint="path/to/my_remote_flow_code_file.py:say_hi",
    ).serve(name="deployment-with-github-storage")
```

Make sure you substitute your own flow code entrypoint path. If you change the flow entrypoint parameters, you must restart the process.

Move into the `/etc/systemd/system` folder and open a file for editing. The example below uses Vim.

```bash theme={null}
cd /etc/systemd/system
sudo vim my-prefect-service.service
```

```txt my-prefect-service.service theme={null}
[Unit]
Description=Prefect serve

[Service]
User=prefect
WorkingDirectory=/home
ExecStart=python3 my_file.py
Restart=always

[Install]
WantedBy=multi-user.target
```

### Step 4: Save, enable, and start the service

To save the file and exit Vim, hit the escape key, type `:wq!`, then press the return key.

Next, make systemd aware of your new service by running:

```bash theme={null}
sudo systemctl daemon-reload
```

Then, enable the service by running:

```bash theme={null}
sudo systemctl enable my-prefect-service
```

This command ensures it runs when your system boots.

Next, start the service:

```bash theme={null}
sudo systemctl start my-prefect-service
```

Run your deployment from the UI and check the logs on the **Flow Runs** page.

You can check whether your daemonized Prefect worker or serve process is running, and view its Prefect logs, with `systemctl status my-prefect-service`.

You now have a systemd service that starts when your system boots, and that restarts if it ever crashes.

# How to maintain your Prefect database

Source: https://docs.prefect.io/v3/advanced/database-maintenance

Monitor and maintain your PostgreSQL database for self-hosted Prefect deployments

Self-hosted Prefect deployments require database maintenance to ensure optimal performance and manage disk usage. This guide provides monitoring queries and maintenance strategies for PostgreSQL databases.
This guide is for advanced users managing production deployments. Always test maintenance operations in a non-production environment first, if possible. Exact numbers included in this guide will vary based on your workload and installation. ## Quick reference **Daily tasks:** * Check disk space and table sizes * Monitor bloat levels (> 50% requires action) * Verify the [database vacuum service](#database-vacuum-service) is running (if enabled) **Weekly tasks:** * Review autovacuum performance * Check index usage and bloat * Analyze high-traffic tables **Red flags requiring immediate action:** * Disk usage > 80% * Table bloat > 100% * Connection count approaching limit * Autovacuum hasn't run in 24+ hours ## Database growth monitoring Prefect stores entities like events, flow runs, task runs, and logs that accumulate over time. Monitor your database regularly to understand growth patterns specific to your usage. ### Check table sizes ```sql theme={null} -- Total database size SELECT pg_size_pretty(pg_database_size('prefect')) AS database_size; -- Table sizes with row counts SELECT schemaname, relname AS tablename, pg_size_pretty(pg_total_relation_size(schemaname||'.'||relname)) AS total_size, to_char(n_live_tup, 'FM999,999,999') AS row_count FROM pg_stat_user_tables WHERE schemaname = 'public' ORDER BY pg_total_relation_size(schemaname||'.'||relname) DESC LIMIT 20; ``` ### Monitor disk space Track overall disk usage to prevent outages: ```sql theme={null} -- Check database disk usage SELECT current_setting('data_directory') AS data_directory, pg_size_pretty(pg_database_size('prefect')) AS database_size, pg_size_pretty(pg_total_relation_size('public.events')) AS events_table_size, pg_size_pretty(pg_total_relation_size('public.log')) AS log_table_size; -- Check available disk space (requires pg_stat_disk extension or shell access) -- Run from shell: df -h /path/to/postgresql/data ``` Common large tables in Prefect databases: * `events` - Automatically generated for 
all state changes (often the largest table) * `log` - Flow and task run logs * `flow_run` and `task_run` - Execution records * `flow_run_state` and `task_run_state` - State history ### Monitor table bloat PostgreSQL tables can accumulate "dead tuples" from updates and deletes. Monitor bloat percentage to identify tables needing maintenance: ```sql theme={null} SELECT schemaname, relname AS tablename, n_live_tup AS live_tuples, n_dead_tup AS dead_tuples, CASE WHEN n_live_tup > 0 THEN round(100.0 * n_dead_tup / n_live_tup, 2) ELSE 0 END AS bloat_percent, last_vacuum, last_autovacuum FROM pg_stat_user_tables WHERE schemaname = 'public' AND n_dead_tup > 1000 ORDER BY bloat_percent DESC; ``` ### Monitor index bloat Indexes can also bloat and impact performance: ```sql theme={null} -- Check index sizes and bloat SELECT schemaname, relname AS tablename, indexrelname AS indexname, pg_size_pretty(pg_relation_size(indexrelid)) AS index_size, idx_scan AS index_scans, idx_tup_read AS tuples_read, idx_tup_fetch AS tuples_fetched FROM pg_stat_user_indexes WHERE schemaname = 'public' ORDER BY pg_relation_size(indexrelid) DESC LIMIT 20; ``` ## PostgreSQL VACUUM VACUUM reclaims storage occupied by dead tuples. While PostgreSQL runs autovacuum automatically, you may need manual intervention for heavily updated tables. ### Manual VACUUM For tables with high bloat percentages: ```sql theme={null} -- Standard VACUUM (doesn't lock table) VACUUM ANALYZE flow_run; VACUUM ANALYZE task_run; VACUUM ANALYZE log; -- VACUUM FULL (rebuilds table, requires exclusive lock) -- WARNING: This COMPLETELY LOCKS the table - no reads or writes! -- Can take HOURS on large tables. Only use as last resort. 
VACUUM FULL flow_run; -- Better alternative: pg_repack (if installed) -- Rebuilds tables online without blocking -- pg_repack -t flow_run -d prefect ``` ### Monitor autovacuum Check if autovacuum is keeping up with your workload: ```sql theme={null} -- Show autovacuum settings SHOW autovacuum; SHOW autovacuum_vacuum_scale_factor; SHOW autovacuum_vacuum_threshold; -- Check when tables were last vacuumed SELECT schemaname, relname AS tablename, last_vacuum, last_autovacuum, vacuum_count, autovacuum_count FROM pg_stat_user_tables WHERE schemaname = 'public' ORDER BY last_autovacuum NULLS FIRST; ``` ### Tune autovacuum for Prefect workloads Depending on your workload, your write patterns may require more aggressive autovacuum settings than defaults: ```sql theme={null} -- For high-volume events table (INSERT/DELETE heavy) ALTER TABLE events SET ( autovacuum_vacuum_scale_factor = 0.05, -- Default is 0.2 autovacuum_vacuum_threshold = 1000, autovacuum_analyze_scale_factor = 0.02 -- Keep stats current ); -- For state tables (INSERT-heavy) ALTER TABLE flow_run_state SET ( autovacuum_vacuum_scale_factor = 0.1, autovacuum_analyze_scale_factor = 0.05 ); -- For frequently updated tables ALTER TABLE flow_run SET ( autovacuum_vacuum_scale_factor = 0.1, autovacuum_vacuum_threshold = 500 ); ``` ### When to take action **Bloat thresholds:** * **\< 20% bloat**: Normal, autovacuum should handle * **20-50% bloat**: Monitor closely, consider manual VACUUM * **> 50% bloat**: Manual VACUUM recommended * **> 100% bloat**: Significant performance impact, urgent action needed **Warning signs:** * Autovacuum hasn't run in > 24 hours on active tables * Query performance degrading over time * Disk space usage growing faster than data volume ## Database vacuum service Prefect server includes a built-in database vacuum service that automatically cleans up old data. 
The service runs as a background process alongside Prefect server and handles deletion of: * Old top-level flow runs that have reached a terminal state (completed, failed, cancelled, or crashed) * Orphaned logs (logs referencing flow runs that no longer exist) * Orphaned artifacts (artifacts referencing flow runs that no longer exist) * Stale artifact collections (collections whose latest artifact has been deleted) * Old events and event resources past the event retention period The flow run vacuum permanently deletes data. Back up your database before enabling it, and test in a non-production environment first. ### Enable the vacuum service The vacuum service has two independent components controlled by `PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED`: * **Event vacuum** (`events`): Cleans up old events and event resources. Enabled by default. * **Flow run vacuum** (`flow_runs`): Cleans up old flow runs, orphaned logs, orphaned artifacts, and stale artifact collections. Disabled by default. To enable flow run cleanup in addition to the default event cleanup: ```bash theme={null} # Enable both event and flow run vacuum export PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED="events,flow_runs" # Or use prefect config prefect config set PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED="events,flow_runs" ``` The vacuum service runs as part of Prefect server's background services. With `prefect server start` (single-process mode), background services run automatically. If you use `--no-services` or `--workers > 1`, or run a [scaled deployment](/v3/advanced/self-hosted), start background services separately with `prefect server services start` to ensure the vacuum service runs. 
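For example, in a containerized scaled deployment, the split between the API and background services might look like the following hypothetical compose fragment (the service names and image tag are illustrative; the commands are the ones described above):

```yaml theme={null}
services:
  prefect-api:
    image: prefecthq/prefect:3-latest
    command: prefect server start --no-services
  prefect-background-services:
    image: prefecthq/prefect:3-latest
    command: prefect server services start
    environment:
      # Enable flow run cleanup in addition to the default event cleanup
      PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED: "events,flow_runs"
```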
### Configure the vacuum service

The following settings control vacuum behavior:

| Setting | Default | Description |
| --- | --- | --- |
| `PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED` | `events` | Comma-separated set of vacuum types to enable. Valid values: `events`, `flow_runs`. |
| `PREFECT_SERVER_SERVICES_DB_VACUUM_LOOP_SECONDS` | `3600` (1 hour) | How often the vacuum cycle runs, in seconds. |
| `PREFECT_SERVER_SERVICES_DB_VACUUM_RETENTION_PERIOD` | `7776000` (90 days) | How old a flow run must be (based on end time) before it is eligible for deletion. Accepts seconds. Must be greater than 1 hour. |
| `PREFECT_SERVER_SERVICES_DB_VACUUM_BATCH_SIZE` | `200` | Number of records to delete per database transaction. |
| `PREFECT_SERVER_SERVICES_DB_VACUUM_EVENT_RETENTION_OVERRIDES` | `{"prefect.flow-run.heartbeat": 604800}` | Per-event-type retention period overrides in seconds. Event types not listed fall back to `PREFECT_EVENTS_RETENTION_PERIOD`. Each override is capped by the global events retention period. |

Example configuration for a weekly vacuum cycle with a 30-day retention period:

```bash theme={null}
export PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED="events,flow_runs"
export PREFECT_SERVER_SERVICES_DB_VACUUM_LOOP_SECONDS=604800
export PREFECT_SERVER_SERVICES_DB_VACUUM_RETENTION_PERIOD=2592000
export PREFECT_SERVER_SERVICES_DB_VACUUM_BATCH_SIZE=200
```

### How the vacuum service works

Each vacuum cycle schedules independent cleanup tasks:

### Flow run vacuum (when `flow_runs` is enabled)

1. **Delete orphaned logs**: Removes log entries whose associated flow run no longer exists in the database.
2.
**Delete orphaned artifacts**: Removes artifacts whose associated flow run no longer exists. 3. **Reconcile stale artifact collections**: For collections pointing to a deleted artifact, re-points to the next most recent artifact version. If no versions remain, deletes the collection. 4. **Delete old flow runs**: Removes top-level flow runs (not subflows) that are in a terminal state and whose end time is older than the configured retention period. Subflows are not cascade-deleted when their parent is removed; they are cleaned up independently by later vacuum cycles once they also meet the top-level, terminal state, and age criteria. ### Event vacuum (when `events` is enabled) 1. **Delete events with retention overrides**: For each event type listed in `PREFECT_SERVER_SERVICES_DB_VACUUM_EVENT_RETENTION_OVERRIDES`, deletes events and their associated resources older than the configured per-type retention period. 2. **Delete old events**: Removes all events and event resources older than `PREFECT_EVENTS_RETENTION_PERIOD`. The event vacuum only runs when the event persister service is also enabled (`PREFECT_SERVER_SERVICES_EVENT_PERSISTER_ENABLED=true`, which is the default). This prevents unexpected data deletion for deployments that have disabled event processing. All deletions happen in batches (controlled by `PREFECT_SERVER_SERVICES_DB_VACUUM_BATCH_SIZE`) to avoid long-running transactions that could impact database performance. ### Tune the vacuum for your workload * **High-volume deployments** (thousands of runs per day): Consider a shorter retention period (for example, 7-14 days) and a more frequent vacuum cycle (every few hours) to prevent data accumulation. * **Low-volume deployments**: The defaults (90-day retention, hourly cycle) are appropriate for most cases. * **Large batch size**: Increasing `PREFECT_SERVER_SERVICES_DB_VACUUM_BATCH_SIZE` speeds up cleanup but may hold database locks longer. 
Decrease the batch size if you observe performance impacts during vacuum cycles.
* **Scaled deployments**: If you run [separate API and background service processes](/v3/advanced/self-hosted), ensure the background services pod is running to enable the vacuum service.

## Data retention with a custom flow

As an alternative to the built-in vacuum service, you can implement custom retention logic using a Prefect flow. This approach gives you more control over which flow runs to delete and allows you to add custom logic such as notifications or conditional retention.

Using the Prefect API ensures proper cleanup of all related data, including logs and artifacts. The API handles cascade deletions and triggers necessary background tasks.

```python theme={null}
import asyncio
from datetime import datetime, timedelta, timezone

from prefect import flow, task, get_run_logger
from prefect.client.orchestration import get_client
from prefect.client.schemas.filters import (
    FlowRunFilter,
    FlowRunFilterState,
    FlowRunFilterStateType,
    FlowRunFilterStartTime,
)
from prefect.client.schemas.objects import StateType
from prefect.exceptions import ObjectNotFound


@task
async def delete_old_flow_runs(days_to_keep: int = 30, batch_size: int = 100):
    """Delete completed flow runs older than specified days."""
    logger = get_run_logger()

    async with get_client() as client:
        cutoff = datetime.now(timezone.utc) - timedelta(days=days_to_keep)

        # Create filter for old completed flow runs
        # Note: Using start_time because created time filtering is not available
        flow_run_filter = FlowRunFilter(
            start_time=FlowRunFilterStartTime(before_=cutoff),
            state=FlowRunFilterState(
                type=FlowRunFilterStateType(
                    any_=[StateType.COMPLETED, StateType.FAILED, StateType.CANCELLED]
                )
            ),
        )

        # Get flow runs to delete
        flow_runs = await client.read_flow_runs(
            flow_run_filter=flow_run_filter, limit=batch_size
        )

        deleted_total = 0
        while flow_runs:
            batch_deleted = 0
            failed_deletes = []

            # Delete each flow run through the API
            for flow_run in flow_runs:
                try:
                    await client.delete_flow_run(flow_run.id)
                    deleted_total += 1
                    batch_deleted += 1
                except ObjectNotFound:
                    # Already deleted (e.g., by concurrent cleanup) - treat as success
                    deleted_total += 1
                    batch_deleted += 1
                except Exception as e:
                    logger.warning(f"Failed to delete flow run {flow_run.id}: {e}")
                    failed_deletes.append(flow_run.id)

                # Rate limiting - adjust based on your API capacity
                if batch_deleted % 10 == 0:
                    await asyncio.sleep(0.5)

            logger.info(
                f"Deleted {batch_deleted}/{len(flow_runs)} flow runs (total: {deleted_total})"
            )
            if failed_deletes:
                logger.warning(f"Failed to delete {len(failed_deletes)} flow runs")

            # Get next batch
            flow_runs = await client.read_flow_runs(
                flow_run_filter=flow_run_filter, limit=batch_size
            )

            # Delay between batches to avoid overwhelming the API
            await asyncio.sleep(1.0)

        logger.info(f"Retention complete. Total deleted: {deleted_total}")


@flow(name="database-retention")
async def retention_flow():
    """Run database retention tasks."""
    await delete_old_flow_runs(days_to_keep=30, batch_size=100)
```

### Direct SQL approach

In some cases, you may need to use direct SQL for performance reasons or when the API is unavailable. Be aware that direct deletion bypasses application-level cascade logic and may leave orphaned logs and artifacts:

```python theme={null}
# Direct SQL only deletes what's defined by database foreign keys
# Logs and artifacts may be orphaned without proper cleanup
# (asyncpg connections are not async context managers - close explicitly;
# connection_url and cutoff are assumed to be defined as above)
conn = await asyncpg.connect(connection_url)
try:
    await conn.execute("""
        DELETE FROM flow_run
        WHERE created < $1
        AND state_type IN ('COMPLETED', 'FAILED', 'CANCELLED')
    """, cutoff)
finally:
    await conn.close()
```

If you use direct SQL for flow run deletion, enable the built-in vacuum service with `PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED="events,flow_runs"` to automatically clean up any orphaned logs and artifacts left behind.

### Important considerations

1.
**Filtering limitation**: The custom flow example above filters by `start_time` (when the flow run began execution), not `created` time (when the flow run was created in the database). This means flow runs that were created but never started are not deleted by this approach. The built-in vacuum service uses `end_time` instead, so it can clean up runs that reached a terminal state without ever entering a running state.
2. **Test first**: Run with `SELECT` instead of `DELETE` to preview what will be removed.
3. **Start conservative**: Begin with longer retention periods and adjust based on needs.
4. **Monitor performance**: Large deletes can impact database performance.
5. **Backup**: Always back up before major cleanup operations.

## Event retention

Events are automatically generated for all state changes in Prefect and can quickly become the largest table in your database. Prefect includes built-in event retention that automatically removes old events.

### Configure event retention

The default retention period is 7 days. For high-volume deployments running many flow runs per minute, this default can lead to rapid database growth. Consider your workload when setting retention:

| Workload | Suggested retention | Rationale |
| --- | --- | --- |
| Low volume (\< 100 runs/day) | 7 days (default) | Default is appropriate |
| Medium volume (100-1000 runs/day) | 3-5 days | Balance history with growth |
| High volume (1000+ runs/day) | 1-2 days | Prioritize database performance |

```bash theme={null}
# Set retention to 2 days (as environment variable)
export PREFECT_EVENTS_RETENTION_PERIOD="2d"

# Or in your prefect configuration
prefect config set PREFECT_EVENTS_RETENTION_PERIOD="2d"
```

The event trimmer runs automatically as part of the background services.
If you're running in a [scaled deployment](/v3/advanced/self-hosted) with separate API servers and background services, ensure the background services pod is running to enable automatic trimming. ### Check event table size Monitor your event table growth: ```sql theme={null} -- Event table size and row count SELECT pg_size_pretty(pg_total_relation_size('public.events')) AS total_size, to_char(count(*), 'FM999,999,999') AS row_count, min(occurred) AS oldest_event, max(occurred) AS newest_event FROM events; ``` Events are used for automations and triggers. Ensure your retention period keeps events long enough for your automation needs. ## Connection monitoring Monitor connection usage to prevent exhaustion: ```sql theme={null} SELECT count(*) AS total_connections, count(*) FILTER (WHERE state = 'active') AS active, count(*) FILTER (WHERE state = 'idle') AS idle, (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_connections FROM pg_stat_activity; ``` ## Automating database maintenance ### Schedule maintenance tasks Schedule the retention flow to run automatically. See [how to create deployments](/v3/how-to-guides/deployments/create-deployments) for creating scheduled deployments. For example, you could run the retention flow daily at 2 AM to clean up old flow runs. 
### Recommended maintenance schedule

* **Hourly**: Monitor disk space and connection count
* **Daily**: Run retention policies, check bloat levels
* **Weekly**: Analyze tables, review autovacuum performance
* **Monthly**: REINDEX heavily used indexes, full database backup

## Troubleshooting common issues

### "VACUUM is taking forever"

* Check for long-running transactions blocking VACUUM:

```sql theme={null}
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE state <> 'idle'
AND query NOT ILIKE '%vacuum%'
ORDER BY age DESC;
```

* Consider using `pg_repack` instead of `VACUUM FULL`
* Run during low-traffic periods

### "Database is growing despite retention policies"

* Verify event retention is configured: `prefect config view | grep EVENTS_RETENTION`
* Verify the vacuum service is enabled: `prefect config view | grep DB_VACUUM`
* Check if autovacuum is running on the events table
* Ensure the vacuum service or retention flow is actually executing (check server logs for "Database vacuum" messages)

### "Queries are getting slower over time"

* Update table statistics: `ANALYZE;`
* Check for missing indexes using `pg_stat_user_tables`
* Review query plans with `EXPLAIN ANALYZE`

### "Connection limit reached"

* Implement connection pooling immediately
* Check for connection leaks: connections in 'idle' state for hours
* Reduce Prefect worker connection counts

## Further reading

* [PostgreSQL documentation on VACUUM](https://www.postgresql.org/docs/current/sql-vacuum.html)
* [PostgreSQL routine maintenance](https://www.postgresql.org/docs/current/routine-vacuuming.html)
* [Monitoring PostgreSQL](https://www.postgresql.org/docs/current/monitoring-stats.html)
* [pg\_repack extension](https://github.com/reorg/pg_repack)

# How to debounce events

Source: https://docs.prefect.io/v3/advanced/debouncing-events

Learn how to use the within parameter to handle multiple events in quick succession without creating a flow run for each event.
Prefect allows you to trigger deployment runs based on events. However, when multiple events occur in quick succession, you may want a single flow run to handle all of them rather than spinning up a separate run for each event. For example, when multiple files are uploaded to an S3 bucket simultaneously, or a flurry of [webhook](/v3/concepts/webhooks#webhooks) events are received by Prefect, you typically want to process all files together in one run. This pattern is called **debouncing**, and you can implement it using reactive triggers with the `within` parameter of the trigger and the `schedule_after` parameter of the `run-deployment` action. ## Why debounce events? Automations fire in response to a single event and can only pass that event's context to the triggered deployment. This creates a challenge when multiple events arrive rapidly: * Each event would trigger a separate flow run * Each run would only have context from one event * You'd have multiple runs processing related work simultaneously Debouncing solves this by: * Preventing multiple flow runs from being created for rapid events * Scheduling a single run after a time window * Enabling a single run to process the work from all events in the burst **Key limitation** Automations can only pass the context from the triggering event to your deployment. Design your flows to query the source system directly (like listing S3 objects) rather than relying on individual event data. ## Use case: Processing S3 file uploads Consider a scenario where you have a webhook configured to receive S3 `ObjectCreated` events. When users upload five files in quick succession: **Without debouncing**: Five separate flow runs are triggered, one for each file. **With debouncing**: One flow run is triggered after all uploads complete, processing all five files together. 
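This burst-to-single-run behavior can be sketched in plain Python to build intuition. The helper below is an illustration of debouncing semantics, not Prefect internals; the function name and timestamps are hypothetical.

```python
from datetime import datetime, timedelta


def debounce_eager(event_times, window):
    """Fire on the first event of a burst; suppress events inside the window."""
    fired = []
    window_end = None
    for t in sorted(event_times):
        if window_end is None or t >= window_end:
            fired.append(t)           # first event of a burst triggers a run
            window_end = t + window   # later events in the window are suppressed
    return fired


# Five uploads within a minute produce a single trigger firing:
start = datetime(2024, 1, 1, 12, 0, 0)
uploads = [start + timedelta(seconds=s) for s in (0, 5, 12, 20, 45)]
print(len(debounce_eager(uploads, timedelta(seconds=60))))  # prints 1
```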
## Implementing debouncing

Use a reactive trigger with matching `within` and `schedule_after` values:

### Define in `prefect.yaml`

```yaml theme={null}
deployments:
  - name: process-s3-uploads
    entrypoint: flows/s3_processor.py:process_files
    work_pool:
      name: my-work-pool
    triggers:
      - type: event
        enabled: true
        match:
          prefect.resource.id:
            - "s3-bucket-name/*"
        expect:
          - "aws:s3:ObjectCreated:*"
        for_each:
          - "prefect.resource.id"
        posture: Reactive
        threshold: 1
        within: 60 # 60 seconds
        schedule_after: "PT1M" # Wait 1 minute before running
```

### Define in Python with `.serve`

```python theme={null}
from datetime import timedelta

from prefect import flow
from prefect.events import DeploymentEventTrigger


@flow(log_prints=True)
def process_files():
    """Process all files in the S3 bucket"""
    # Query S3 directly to find all files
    # Your flow logic here to list and process all files
    print("Processing all pending files...")


if __name__ == "__main__":
    process_files.serve(
        name="process-s3-uploads",
        triggers=[
            DeploymentEventTrigger(
                enabled=True,
                match={"prefect.resource.id": "s3-bucket-name/*"},
                expect=["aws:s3:ObjectCreated:*"],
                for_each=["prefect.resource.id"],
                posture="Reactive",
                threshold=1,
                within=60,  # 60 seconds
                schedule_after=timedelta(seconds=60),  # Wait 1 minute
            )
        ],
    )
```

## How it works

When you configure a reactive trigger with both `within` and `schedule_after`:

1. **First event arrives**: The automation fires and schedules a deployment run
2. **Additional events within the window**: These events are recorded but don't trigger additional runs
3. **Deployment runs after delay**: By the time the run starts (after `schedule_after`), all events from the burst have occurred
4. **Flow processes everything**: Your flow queries the source system and processes all available items

The `within` parameter implements eager debouncing: it fires immediately on the first event, then ignores subsequent events for the specified duration.
The `schedule_after` parameter delays the actual flow run, ensuring all events in the burst have completed before processing begins. This implements late debouncing. Using both parameters together prevents duplicate runs while ensuring your flow has access to all events from the burst. **Matching time windows** Set `within` and `schedule_after` to the same value. This ensures the deployment run is scheduled after the debounce window closes, so all related work is visible to your flow when it runs. ## Choosing the right time window The appropriate time window depends on your use case: * **Rapid API events**: 30-60 seconds * **Batch file uploads**: 2-5 minutes * **Large file transfers**: 15-30 minutes Test with your actual event patterns to find the optimal window. **Time format requirements** The `schedule_after` parameter accepts: * **ISO 8601 duration format**: `"PT1M"` (1 minute), `"PT30S"` (30 seconds), `"PT2H"` (2 hours) * **Integer seconds**: `60` (60 seconds) * **Python timedelta objects**: `timedelta(minutes=1)` (in code) The `within` parameter accepts integer seconds only. ## Design flows for batch processing Since automations can only pass one event's context, design your flows to discover and process all available work: ```python theme={null} import boto3 from prefect import flow @flow(log_prints=True) def process_s3_files(bucket_name: str = "my-bucket"): """Process all files in the pending prefix of an S3 bucket""" s3 = boto3.client('s3') # List all objects in the pending prefix response = s3.list_objects_v2( Bucket=bucket_name, Prefix='pending/' ) files = response.get('Contents', []) print(f"Found {len(files)} files to process") # Process each file for file in files: key = file['Key'] print(f"Processing {key}") # Your processing logic here # ... 
# Move to completed prefix s3.copy_object( Bucket=bucket_name, CopySource={'Bucket': bucket_name, 'Key': key}, Key=key.replace('pending/', 'completed/') ) s3.delete_object(Bucket=bucket_name, Key=key) print(f"Processed {len(files)} files") ``` **Key design principles**: * Query the source system directly rather than relying on event data * Process all available items, not just one * Use idempotent operations that can safely handle re-processing ## Combining with concurrency limits For additional control, combine debouncing with deployment concurrency limits to prevent overlapping runs: ```yaml theme={null} deployments: - name: process-s3-uploads entrypoint: flows/s3_processor.py:process_files work_pool: name: my-work-pool concurrency_limit: limit: 1 collision_strategy: CANCEL_NEW triggers: - type: event enabled: true match: prefect.resource.id: - "s3-bucket-name/*" expect: - "aws:s3:ObjectCreated:*" posture: Reactive threshold: 1 within: 60 schedule_after: "PT1M" ``` This ensures: * Only one run executes at a time * New runs are cancelled if one is already running * Events are debounced to prevent excessive run creation ## What happens to subsequent events? Events that arrive during the `within` window are still recorded in Prefect's event system: * You can view them in the Event Feed * They can be queried at the start of the flow run * They're tracked for audit and debugging purposes * They don't trigger additional automation actions The automation system recognizes these as part of the same event burst and doesn't create additional runs. 
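The idempotent-design principle above — operations that can safely handle re-processing — can be illustrated with a self-contained sketch. Plain dicts stand in for bucket prefixes here, and the helper name is hypothetical: because items are moved out of `pending/` once handled, a re-run simply finds nothing left to do.

```python
def process_pending(bucket):
    """Move every item from 'pending/' to 'completed/'; safe to re-run."""
    processed = 0
    for key in [k for k in bucket if k.startswith("pending/")]:
        payload = bucket.pop(key)
        # ... real processing of `payload` would happen here ...
        bucket[key.replace("pending/", "completed/", 1)] = payload
        processed += 1
    return processed


bucket = {"pending/a.csv": b"1", "pending/b.csv": b"2"}
print(process_pending(bucket))  # prints 2: both files processed
print(process_pending(bucket))  # prints 0: nothing left, safe to re-run
```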
## Further reading * To learn more about reactive triggers, see the [Events documentation](/v3/concepts/events/) * For details on deployment triggers, see the [Creating deployment triggers guide](/v3/how-to-guides/automations/creating-deployment-triggers/) * For webhook configuration, see the [Webhooks guide](/v3/how-to-guides/cloud/create-a-webhook/) # How to build deployments via CI/CD Source: https://docs.prefect.io/v3/advanced/deploy-ci-cd CI/CD resources for working with Prefect. Many organizations deploy Prefect workflows through their CI/CD process. Each organization has their own unique CI/CD setup, but a common pattern is to use CI/CD to manage Prefect [deployments](/v3/concepts/deployments). Combining Prefect's deployment features with CI/CD tools enables efficient management of flow code updates, scheduling changes, and container builds. This guide uses [GitHub Actions](https://docs.github.com/en/actions) to implement a CI/CD process, but these concepts are generally applicable across many CI/CD tools. Note that Prefect's primary ways for creating deployments, a `.deploy` flow method or a `prefect.yaml` configuration file, are both designed for building and pushing images to a Docker registry. ## Get started with GitHub Actions and Prefect In this example, you'll write a GitHub Actions workflow that runs each time you push to your repository's `main` branch. This workflow builds and pushes a Docker image containing your flow code to Docker Hub, then deploys the flow to Prefect Cloud. ### Repository secrets Your CI/CD process must be able to authenticate with Prefect to deploy flows. Deploy flows securely and non-interactively in your CI/CD process by saving your `PREFECT_API_URL` and `PREFECT_API_KEY` [as secrets in your repository's settings](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions). This allows them to be accessed in your CI/CD runner's environment without exposing them in any scripts or configuration files. 
In this scenario, deploying flows involves building and pushing Docker images, so add `DOCKER_USERNAME` and `DOCKER_PASSWORD` as secrets to your repository as well.

Create secrets for GitHub Actions in your repository under **Settings -> Secrets and variables -> Actions -> New repository secret**:

Creating a GitHub Actions secret

### Write a GitHub workflow

To deploy your flow through GitHub Actions, you need a workflow YAML file. GitHub looks for workflow YAML files in the `.github/workflows/` directory in the root of your repository.

In their simplest form, GitHub workflow files are made up of triggers and jobs. The `on:` trigger is set to run the workflow each time a push occurs on the `main` branch of the repository.

The `deploy` job comprises four `steps`:

* **`Checkout`** clones your repository into the GitHub Actions runner so you can reference files or run scripts from your repository in later steps.
* **`Log in to Docker Hub`** authenticates to DockerHub so your image can be pushed to the Docker registry in your DockerHub account. [docker/login-action](https://github.com/docker/login-action) is an existing GitHub action maintained by Docker. `with:` passes values into the Action, similar to passing parameters to a function.
* **`Setup Python`** installs your selected version of Python.
* **`Prefect Deploy`** installs the dependencies used in your flow, then deploys your flow. `env:` makes the `PREFECT_API_KEY` and `PREFECT_API_URL` secrets from your repository available as environment variables during this step's execution.

For reference, the examples below live in their respective branches of [this repository](https://github.com/prefecthq/cicd-example).
`flow.py` ```python theme={null} from prefect import flow @flow(log_prints=True) def hello(): print("Hello!") if __name__ == "__main__": hello.deploy( name="my-deployment", work_pool_name="my-work-pool", image="my_registry/my_image:my_image_tag", ) ``` `.github/workflows/deploy-prefect-flow.yaml` ```yaml theme={null} name: Deploy Prefect flow on: push: branches: - main jobs: deploy: name: Deploy runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Log in to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} - name: Setup Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Prefect Deploy env: PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }} PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }} run: | pip install -r requirements.txt python flow.py ``` `flow.py` ```python theme={null} from prefect import flow @flow(log_prints=True) def hello(): print("Hello!") ``` `prefect.yaml` ```yaml theme={null} name: cicd-example prefect-version: 3.0.0 build: - prefect_docker.deployments.steps.build_docker_image: id: build-image requires: prefect-docker>=0.7.1 image_name: my_registry/my_image tag: my_image_tag dockerfile: auto push: - prefect_docker.deployments.steps.push_docker_image: requires: prefect-docker>=0.7.1 image_name: "{{ build-image.image_name }}" tag: "{{ build-image.tag }}" pull: null deployments: - name: my-deployment entrypoint: flow.py:hello work_pool: name: my-work-pool work_queue_name: default job_variables: image: "{{ build-image.image }}" ``` `.github/workflows/deploy-prefect-flow.yaml` ```yaml theme={null} name: Deploy Prefect flow on: push: branches: - main jobs: deploy: name: Deploy runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Log in to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} - name: Setup Python uses: 
actions/setup-python@v5 with: python-version: "3.12" - name: Prefect Deploy env: PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }} PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }} run: | pip install -r requirements.txt prefect deploy -n my-deployment ``` ### Run a GitHub workflow After pushing commits to your repository, GitHub automatically triggers a run of your workflow. Monitor the status of running and completed workflows from the **Actions** tab of your repository. A GitHub Action triggered via push View the logs from each workflow step as they run. The `Prefect Deploy` step includes output about your image build and push, and the creation/update of your deployment. ```bash theme={null} Successfully built image '***/cicd-example:latest' Successfully pushed image '***/cicd-example:latest' Successfully created/updated all deployments! Deployments |-----------------------------------------| | Name | Status Details | |---------------------|---------|---------| | hello/my-deployment | applied | | |-----------------------------------------| ``` ## Advanced example In more complex scenarios, CI/CD processes often need to accommodate several additional considerations to enable a smooth development workflow: * Making code available in different environments as it advances through stages of development * Handling independent deployment of distinct groupings of work, as in a monorepo * Efficiently using build time to avoid repeated work This [example repository](https://github.com/prefecthq/cicd-example-workspaces) addresses each of these considerations with a combination of Prefect's and GitHub's capabilities. ### Deploy to multiple workspaces The deployment processes to run are automatically selected when changes are pushed, depending on two conditions: ```yaml theme={null} on: push: branches: - stg - main paths: - "project_1/**" ``` * **`branches:`** - which branch has changed. This ultimately selects which Prefect workspace a deployment is created or updated in. 
In this example, changes on the `stg` branch deploy flows to a staging workspace, and changes on the `main` branch deploy flows to a production workspace. * **`paths:`** - which project folders' files have changed. Since each project folder contains its own flows, dependencies, and `prefect.yaml`, it represents a complete set of logic and configuration that can deploy independently. Each project in this repository gets its own GitHub Actions workflow YAML file. The `prefect.yaml` file in each project folder depends on environment variables dictated by the selected job in each CI/CD workflow; enabling external code storage for Prefect deployments that is clearly separated across projects and environments. Deployments in this example use S3 for code storage. So it's important that push steps place flow files in separate locations depending upon their respective environment and project—so no deployment overwrites another deployment's files. ### Caching build dependencies Since building Docker images and installing Python dependencies are essential parts of the deployment process, it's useful to rely on caching to skip repeated build steps. The `setup-python` action offers [caching options](https://github.com/actions/setup-python#caching-packages-dependencies) so Python packages do not have to be downloaded on repeat workflow runs. ```yaml theme={null} - name: Setup Python uses: actions/setup-python@v5 with: python-version: "3.12" cache: "pip" ``` The `build-push-action` for building Docker images also offers [caching options for GitHub Actions](https://docs.docker.com/build/cache/backends/gha/). If you are not using GitHub, other remote [cache backends](https://docs.docker.com/build/cache/backends/) are available as well. 
```yaml theme={null} - name: Build and push id: build-docker-image env: GITHUB_SHA: ${{ steps.get-commit-hash.outputs.COMMIT_HASH }} uses: docker/build-push-action@v5 with: context: ${{ env.PROJECT_NAME }}/ push: true tags: ${{ secrets.DOCKER_USERNAME }}/${{ env.PROJECT_NAME }}:${{ env.GITHUB_SHA }}-stg cache-from: type=gha cache-to: type=gha,mode=max ``` ``` importing cache manifest from gha:*** DONE 0.1s [internal] load build context transferring context: 70B done DONE 0.0s [2/3] COPY requirements.txt requirements.txt CACHED [3/3] RUN pip install -r requirements.txt CACHED ``` ## Prefect GitHub Actions Prefect provides its own GitHub Action for [deployment creation](https://github.com/PrefectHQ/actions-prefect-deploy). This action simplifies deploying with CI/CD when using `prefect.yaml`, especially in cases where a repository contains flows used in multiple deployments across multiple Prefect Cloud workspaces. Here's an example of integrating these actions into the workflow above: ```yaml theme={null} name: Deploy Prefect flow on: push: branches: - main jobs: deploy: name: Deploy runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v4 - name: Log in to Docker Hub uses: docker/login-action@v3 with: username: ${{ secrets.DOCKER_USERNAME }} password: ${{ secrets.DOCKER_PASSWORD }} - name: Setup Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Run Prefect Deploy uses: PrefectHQ/actions-prefect-deploy@v4 with: deployment-names: my-deployment requirements-file-paths: requirements.txt deployment-file-path: prefect.yaml env: PREFECT_API_KEY: ${{ secrets.PREFECT_API_KEY }} PREFECT_API_URL: ${{ secrets.PREFECT_API_URL }} ``` ## Authenticate to other Docker image registries The `docker/login-action` GitHub Action supports pushing images to a wide variety of image registries. 
For example, if you are storing Docker images in AWS Elastic Container Registry, you can add your ECR registry URL to the `registry` key in the `with:` part of the action and use an `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as your `username` and `password`. ```yaml theme={null} - name: Login to ECR uses: docker/login-action@v3 with: registry: <account-id>.dkr.ecr.<region>.amazonaws.com username: ${{ secrets.AWS_ACCESS_KEY_ID }} password: ${{ secrets.AWS_SECRET_ACCESS_KEY }} ``` ## Further reading # How to detect and respond to zombie flows Source: https://docs.prefect.io/v3/advanced/detect-zombie-flows Learn how to detect and respond to zombie flows. Sudden infrastructure failures (like machine crashes or container evictions) can cause flow runs to become unresponsive and appear stuck in a `Running` state. To mitigate this, flow runs triggered by deployments can emit heartbeats to drive Automations that detect and respond to these "zombie" flow runs, ensuring they are marked as `Crashed` if they stop reporting heartbeats. ### Enable flow run heartbeat events You will need to ensure you're running Prefect version 3.1.8 or greater and set `PREFECT_FLOWS_HEARTBEAT_FREQUENCY` to an integer greater than 30 to emit flow run heartbeat events.
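For example, you could enable heartbeats every 60 seconds either for the current shell session or persistently in your Prefect profile. This is a sketch; choose a frequency that balances detection latency against event volume:

```bash theme={null}
# Enable flow run heartbeat events for the current shell session
export PREFECT_FLOWS_HEARTBEAT_FREQUENCY=60

# Or persist the setting in the active Prefect profile
prefect config set PREFECT_FLOWS_HEARTBEAT_FREQUENCY=60
```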
### Create the automation To create an automation that marks zombie flow runs as crashed, run this script: ```python theme={null} from datetime import timedelta from prefect.automations import Automation from prefect.client.schemas.objects import StateType from prefect.events.actions import ChangeFlowRunState from prefect.events.schemas.automations import EventTrigger, Posture from prefect.events.schemas.events import ResourceSpecification my_automation = Automation( name="Crash zombie flows", trigger=EventTrigger( after={"prefect.flow-run.heartbeat"}, expect={ "prefect.flow-run.heartbeat", "prefect.flow-run.Completed", "prefect.flow-run.Failed", "prefect.flow-run.Cancelled", "prefect.flow-run.Crashed", }, match=ResourceSpecification({"prefect.resource.id": ["prefect.flow-run.*"]}), for_each={"prefect.resource.id"}, posture=Posture.Proactive, threshold=1, within=timedelta(seconds=90), ), actions=[ ChangeFlowRunState( state=StateType.CRASHED, message="Flow run marked as crashed due to missing heartbeats.", ) ], ) if __name__ == "__main__": my_automation.create() ``` The trigger definition says that `after` each heartbeat event for a flow run we `expect` to see another heartbeat event or a terminal state event for that same flow run `within` 90 seconds of a heartbeat event. ### Adjusting behavior with settings If `PREFECT_FLOWS_HEARTBEAT_FREQUENCY` is set to `30`, the automation will trigger only after 3 heartbeats have been missed. You can adjust `within` in the trigger definition and `PREFECT_FLOWS_HEARTBEAT_FREQUENCY` to change how quickly the automation will fire after the server stops receiving flow run heartbeats. You can also add additional actions to your automation to send a notification when zombie runs are detected. # How to develop a custom worker Source: https://docs.prefect.io/v3/advanced/developing-a-custom-worker Learn how to create a Prefect worker to run your flows. 
Prefect workers are responsible for setting up execution infrastructure and starting flow runs on that infrastructure. A list of available workers can be found in the [workers documentation](/v3/concepts/workers#worker-types). What if you want to execute your flow runs on infrastructure that doesn't have an available worker type? This tutorial will walk you through creating a custom worker that can run your flows on your chosen infrastructure. ## Worker Configuration When setting up an execution environment for a flow run, a worker receives configuration for the infrastructure it is designed to work with. Examples of configuration values include memory allocation, CPU allocation, credentials, image name, etc. The worker then uses this configuration to create the execution environment and start the flow run. !!! tip "How are the configuration values populated?" The work pool that a worker polls for flow runs has a [base job template](/v3/how-to-guides/deployment_infra/manage-work-pools#base-job-template) associated with it. The template is the contract for how configuration values populate for each flow run. The keys in the `job_configuration` section of this base job template match the worker's configuration class attributes. The values in the `job_configuration` section of the base job template are used to populate the attributes of the worker's configuration class. The work pool creator gets to decide how they want to populate the values in the `job_configuration` section of the base job template. The values can be hard-coded, templated using placeholders, or a mix of these two approaches. Because you, as the worker developer, don't know how the work pool creator will populate the values, you should set sensible defaults for your configuration class attributes as a matter of best practice. 
### Implementing a `BaseJobConfiguration` Subclass A worker developer defines their worker's configuration to function with a class extending [`BaseJobConfiguration`](/v3/api-ref/python/prefect-workers-base#basejobconfiguration). `BaseJobConfiguration` has attributes that are common to all workers: | Attribute | Description | | --------- | ------------------------------------------------------------------------------- | | `name` | The name to assign to the created execution environment. | | `env` | Environment variables to set in the created execution environment. | | `labels` | The labels assigned to the created execution environment for metadata purposes. | | `command` | The command to use when starting a flow run. | Prefect sets values for each attribute before giving the configuration to the worker. If you want to customize the values of these attributes, use the [`prepare_for_flow_run`](/v3/api-ref/python/prefect-workers-base#prepare-for-flow-run) method. Here's an example `prepare_for_flow_run` method that adds a label to the execution environment: ```python theme={null} def prepare_for_flow_run( self, flow_run, deployment = None, flow = None, work_pool = None, worker_name = None ): super().prepare_for_flow_run(flow_run, deployment, flow, work_pool, worker_name) self.labels.append("my-custom-label") ``` A worker configuration class is a [Pydantic model](https://docs.pydantic.dev/usage/models/), so you can add additional attributes to your configuration class as Pydantic fields. For example, if you want to allow memory and CPU requests for your worker, you can do so like this: ```python theme={null} from pydantic import Field from prefect.workers.base import BaseJobConfiguration class MyWorkerConfiguration(BaseJobConfiguration): memory: int = Field( default=1024, description="Memory allocation for the execution environment." ) cpu: int = Field( default=500, description="CPU allocation for the execution environment." 
) ``` This configuration class will populate the `job_configuration` section of the resulting base job template. For this example, the base job template would look like this: ```yaml theme={null} job_configuration: name: "{{ name }}" env: "{{ env }}" labels: "{{ labels }}" command: "{{ command }}" memory: "{{ memory }}" cpu: "{{ cpu }}" variables: type: object properties: name: title: Name description: Name given to infrastructure created by a worker. type: string env: title: Environment Variables description: Environment variables to set when starting a flow run. type: object additionalProperties: type: string labels: title: Labels description: Labels applied to infrastructure created by a worker. type: object additionalProperties: type: string command: title: Command description: The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker. type: string memory: title: Memory description: Memory allocation for the execution environment. type: integer default: 1024 cpu: title: CPU description: CPU allocation for the execution environment. type: integer default: 500 ``` This base job template defines what values can be provided by deployment creators on a per-deployment basis and how those provided values will be translated into the configuration values that the worker will use to create the execution environment. Notice that each attribute for the class was added in the `job_configuration` section with placeholders whose name matches the attribute name. The `variables` section was also populated with the OpenAPI schema for each attribute. If a configuration class is used without explicitly declaring any template variables, the template variables will be inferred from the configuration class attributes. 
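To build intuition for how that inference works, here is a small illustrative sketch (not Prefect's actual implementation; the class is a simplified stand-in using a dataclass instead of a Pydantic model) that derives placeholder templates and a `variables` schema from the configuration class attributes:

```python theme={null}
from dataclasses import dataclass, fields


@dataclass
class MyWorkerConfiguration:
    # Simplified stand-in for the Pydantic configuration class above
    name: str = ""
    memory: int = 1024
    cpu: int = 500


def infer_job_configuration(config_cls) -> dict:
    """Emit one '{{ attribute }}' placeholder per configuration attribute."""
    return {f.name: f"{{{{ {f.name} }}}}" for f in fields(config_cls)}


def infer_variables(config_cls) -> dict:
    """Build a minimal JSON-schema-like variables section from the attributes."""
    type_names = {str: "string", int: "integer"}
    return {
        "type": "object",
        "properties": {
            f.name: {"type": type_names[f.type], "default": f.default}
            for f in fields(config_cls)
        },
    }


print(infer_job_configuration(MyWorkerConfiguration))
# {'name': '{{ name }}', 'memory': '{{ memory }}', 'cpu': '{{ cpu }}'}
```

Each attribute name becomes both a placeholder in the `job_configuration` section and an entry in the `variables` schema, which is why no separate template variable declaration is needed in the simple case.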
### Customizing Configuration Attribute Templates You can customize the template for each attribute for situations where the configuration values should use more sophisticated templating. For example, if you want to add units for the `memory` attribute, you can do so like this: ```python theme={null} from pydantic import Field from prefect.workers.base import BaseJobConfiguration class MyWorkerConfiguration(BaseJobConfiguration): memory: str = Field( default="1024Mi", description="Memory allocation for the execution environment.", json_schema_extra=dict(template="{{ memory_request }}Mi") ) cpu: str = Field( default="500m", description="CPU allocation for the execution environment.", json_schema_extra=dict(template="{{ cpu_request }}m") ) ``` Notice that we changed the type of each attribute to `str` to accommodate the units, and we added a new `json_schema_extra` attribute to each attribute. The `template` key in `json_schema_extra` is used to populate the `job_configuration` section of the resulting base job template. For this example, the `job_configuration` section of the resulting base job template would look like this: ```yaml theme={null} job_configuration: name: "{{ name }}" env: "{{ env }}" labels: "{{ labels }}" command: "{{ command }}" memory: "{{ memory_request }}Mi" cpu: "{{ cpu_request }}m" ``` Note that to use custom templates, you will need to declare the template variables used in the template because the names of those variables can no longer be inferred from the configuration class attributes. We will cover how to declare the default variable schema in the [Worker Template Variables](#worker-template-variables) section. ### Rules for Template Variable Interpolation When defining a job configuration model, it's useful to understand how template variables are interpolated into the job configuration. The templating engine follows a few simple rules: 1. 
If a template variable is the only value for a key in the `job_configuration` section, the key's value will be replaced with the value of the template variable. 2. If a template variable is part of a string (i.e., there is text before or after the template variable), the value of the template variable will be interpolated into the string. 3. If a template variable is the only value for a key in the `job_configuration` section and no value is provided for the template variable, the key will be removed from the `job_configuration` section. These rules allow worker developers and work pool maintainers to define template variables that can be complex types like dictionaries and lists. These rules also mean that worker developers should give reasonable default values to job configuration fields whenever possible because values are not guaranteed to be provided if template variables are unset. ### Template Variable Usage Strategies Template variables define the interface that deployment creators interact with to configure the execution environments of their deployments. The complexity of this interface can be controlled via the template variables that are defined for a base job template. This control allows work pool maintainers to find a point along the spectrum of flexibility and simplicity appropriate for their organization. There are two patterns that are represented in current worker implementations: #### Pass-Through In the pass-through pattern, template variables are passed through to the job configuration with little change. This pattern exposes complete control to deployment creators but also requires them to understand the details of the execution environment. This pattern is useful when the execution environment is simple, and the deployment creators are expected to have high technical knowledge. The [Docker worker](https://prefecthq.github.io/prefect-docker/worker/) is an example of a worker that uses this pattern.
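To make the interpolation rules described above concrete, here is a minimal stand-in templating function (a sketch that models the behavior, not Prefect's actual engine; real behavior for missing values embedded in strings may differ):

```python theme={null}
import re

PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")


def interpolate(job_configuration: dict, values: dict) -> dict:
    """Apply the three template interpolation rules to a job_configuration."""
    result = {}
    for key, template in job_configuration.items():
        match = PLACEHOLDER.fullmatch(template) if isinstance(template, str) else None
        if match:
            # Rule 1: a lone placeholder is replaced by the raw value,
            # which may be a complex type like a dict or list ...
            if match.group(1) in values:
                result[key] = values[match.group(1)]
            # Rule 3: ... and the key is dropped entirely when no value
            # is provided for the placeholder.
            continue
        if isinstance(template, str):
            # Rule 2: placeholders embedded in a string are interpolated as text.
            result[key] = PLACEHOLDER.sub(
                lambda m: str(values.get(m.group(1), "")), template
            )
        else:
            result[key] = template
    return result


config = interpolate(
    {"memory": "{{ memory_request }}Mi", "env": "{{ env }}", "region": "{{ region }}"},
    {"memory_request": 1024, "env": {"DEBUG": "1"}},
)
print(config)  # {'memory': '1024Mi', 'env': {'DEBUG': '1'}}
```

Note how `env` passes a dictionary through unchanged (rule 1), `memory` is interpolated into the surrounding string (rule 2), and `region` is removed because no value was provided for its lone placeholder (rule 3).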
#### Infrastructure as Code Templating Depending on the infrastructure they interact with, workers can sometimes employ a declarative infrastructure syntax (i.e., infrastructure as code) to create execution environments (e.g., a Kubernetes manifest or an ECS task definition). In the IaC pattern, it's often useful to use template variables to template portions of the declarative syntax which then can be used to generate the declarative syntax into a final form. This approach allows work pool creators to provide a simpler interface to deployment creators while also controlling which portions of infrastructure are configurable by deployment creators. The [Kubernetes worker](https://prefecthq.github.io/prefect-kubernetes/worker/) is an example of a worker that uses this pattern. ### Configuring Credentials When executing flow runs within cloud services, workers will often need credentials to authenticate with those services. For example, a worker that executes flow runs in AWS Fargate will need AWS credentials. As a worker developer, you can use blocks to accept credentials configuration from the user. For example, if you want to allow the user to configure AWS credentials, you can do so like this: ```python theme={null} from prefect.workers.base import BaseJobConfiguration from prefect_aws import AwsCredentials from pydantic import Field class MyWorkerConfiguration(BaseJobConfiguration): aws_credentials: AwsCredentials | None = Field( default=None, description="AWS credentials to use when creating AWS resources." ) ``` Users can create and assign a block to the `aws_credentials` attribute in the UI and the worker will use these credentials when interacting with AWS resources. ## Worker Template Variables Providing template variables for a base job template defines the fields that deployment creators can override per deployment. 
The work pool creator ultimately defines the template variables for a base job template, but the worker developer is able to define default template variables for the worker to make it easier to use. Default template variables for a worker are defined by implementing the `BaseVariables` class. Like the `BaseJobConfiguration` class, the `BaseVariables` class has attributes that are common to all workers: | Attribute | Description | | --------- | ---------------------------------------------------------------------------- | | `name` | The name to assign to the created execution environment. | | `env` | Environment variables to set in the created execution environment. | | `labels` | The labels assigned to the created execution environment for metadata purposes. | | `command` | The command to use when starting a flow run. | Additional attributes can be added to the `BaseVariables` class to define additional template variables. For example, if you want to allow memory and CPU requests for your worker, you can do so like this: ```python theme={null} from pydantic import Field from prefect.workers.base import BaseVariables class MyWorkerTemplateVariables(BaseVariables): memory_request: int = Field( default=1024, description="Memory allocation for the execution environment." ) cpu_request: int = Field( default=500, description="CPU allocation for the execution environment." ) ``` When `MyWorkerTemplateVariables` is used in conjunction with `MyWorkerConfiguration` from the [Customizing Configuration Attribute Templates](#customizing-configuration-attribute-templates) section, the resulting base job template will look like this: ```yaml theme={null} job_configuration: name: "{{ name }}" env: "{{ env }}" labels: "{{ labels }}" command: "{{ command }}" memory: "{{ memory_request }}Mi" cpu: "{{ cpu_request }}m" variables: type: object properties: name: title: Name description: Name given to infrastructure created by a worker.
type: string env: title: Environment Variables description: Environment variables to set when starting a flow run. type: object additionalProperties: type: string labels: title: Labels description: Labels applied to infrastructure created by a worker. type: object additionalProperties: type: string command: title: Command description: The command to use when starting a flow run. In most cases, this should be left blank and the command will be automatically generated by the worker. type: string memory_request: title: Memory Request description: Memory allocation for the execution environment. type: integer default: 1024 cpu_request: title: CPU Request description: CPU allocation for the execution environment. type: integer default: 500 ``` Note that template variable classes are never used directly. Instead, they are used to generate a schema that is used to populate the `variables` section of a base job template and validate the template variables provided by the user. We don't recommend using template variable classes within your worker implementation for validation purposes because the work pool creator ultimately defines the template variables. The configuration class should handle any necessary run-time validation. ## Worker Implementation Workers set up execution environments using provided configuration. Workers also observe the execution environment as the flow run executes and report any crashes to the Prefect API. ### Attributes To implement a worker, you must implement the `BaseWorker` class and provide it with the following attributes: | Attribute | Description | Required | | ----------------------------- | -------------------------------------------- | -------- | | `type` | The type of the worker. | Yes | | `job_configuration` | The configuration class for the worker. | Yes | | `job_configuration_variables` | The template variables class for the worker. | No | | `_documentation_url` | Link to documentation for the worker. 
| No | | `_logo_url` | Link to a logo for the worker. | No | | `_description` | A description of the worker. | No | ### Methods #### `run` In addition to the attributes above, you must also implement a `run` method. The `run` method is called for each flow run the worker receives for execution from the work pool. The `run` method has the following signature: ```python theme={null} import anyio import anyio.abc from prefect.client.schemas.objects import FlowRun from prefect.workers.base import BaseWorkerResult, BaseJobConfiguration async def run( self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: anyio.abc.TaskStatus = anyio.TASK_STATUS_IGNORED, ) -> BaseWorkerResult: ... ``` The `run` method is passed: the flow run to execute, the execution environment configuration for the flow run, and a task status object that allows the worker to track whether the flow run was submitted successfully. The `run` method must also return a `BaseWorkerResult` object. The `BaseWorkerResult` object returned contains information about the flow run execution. For the most part, you can implement the `BaseWorkerResult` with no modifications like so: ```python theme={null} from prefect.workers.base import BaseWorkerResult class MyWorkerResult(BaseWorkerResult): """Result returned by the MyWorker.""" ``` If you would like to return more information about a flow run, then additional attributes can be added to the `BaseWorkerResult` class. ### Worker Implementation Example Below is an example of a worker implementation. This example is not intended to be a complete implementation but to illustrate the aforementioned concepts. 
```python theme={null} import anyio import anyio.abc from prefect.client.schemas.objects import FlowRun from prefect.workers.base import BaseWorker, BaseWorkerResult, BaseJobConfiguration, BaseVariables from pydantic import Field class MyWorkerConfiguration(BaseJobConfiguration): memory: str = Field( default="1024Mi", description="Memory allocation for the execution environment.", json_schema_extra=dict(template="{{ memory_request }}Mi") ) cpu: str = Field( default="500m", description="CPU allocation for the execution environment.", json_schema_extra=dict(template="{{ cpu_request }}m") ) class MyWorkerTemplateVariables(BaseVariables): memory_request: int = Field( default=1024, description="Memory allocation for the execution environment." ) cpu_request: int = Field( default=500, description="CPU allocation for the execution environment." ) class MyWorkerResult(BaseWorkerResult): """Result returned by the MyWorker.""" class MyWorker(BaseWorker): type = "my-worker" job_configuration = MyWorkerConfiguration job_configuration_variables = MyWorkerTemplateVariables _documentation_url = "https://example.com/docs" _logo_url = "https://example.com/logo" _description = "My worker description." async def run( self, flow_run: FlowRun, configuration: BaseJobConfiguration, task_status: anyio.abc.TaskStatus = anyio.TASK_STATUS_IGNORED, ) -> BaseWorkerResult: # Create the execution environment and start execution job = await self._create_and_start_job(configuration) # Use a unique ID to mark the run as started. This ID is later used to tear down infrastructure # if the flow run is cancelled. 
task_status.started(job.id) # Monitor the execution job_status = await self._watch_job(job, configuration) exit_code = job_status.exit_code if job_status else -1 # Get result of execution for reporting return MyWorkerResult( status_code=exit_code, identifier=job.id, ) ``` Most of the execution logic is omitted from the example above, but it shows that the typical order of operations in the `run` method is: 1. Create the execution environment and start the flow run execution 2. Mark the flow run as started via the passed `task_status` object 3. Monitor the execution 4. Get the execution's final status from the infrastructure and return a `BaseWorkerResult` object To see other examples of worker implementations, see the [`ProcessWorker`](/v3/api-ref/python/prefect-workers-process) and [`KubernetesWorker`](https://prefecthq.github.io/prefect-kubernetes/worker/) implementations. ### Integrating with the Prefect CLI Workers can be started via the Prefect CLI by providing the `--type` option to the `prefect worker start` CLI command. To make your worker type available via the CLI, it must be available at import time. If your worker is in a package, you can add an entry point to your setup file in the following format: ```python theme={null} entry_points={ "prefect.collections": [ "my_package_name = my_worker_module", ] }, ``` Prefect will discover this entry point and load your worker module. The entry point will allow the worker to be available via the CLI. # How to write a Prefect plugin Source: https://docs.prefect.io/v3/advanced/experimental-plugins **Experimental Feature** The plugin system is an **experimental feature** under active development. The API is subject to change without notice, and features may be modified or removed in future releases. The Prefect plugin system allows third-party packages to run hooks when Prefect is imported.
This enables plugins to configure environment variables, authenticate with external services, or perform other initialization tasks automatically - whether you're running CLI commands, Python scripts, workers, or agents. ## Use Cases The plugin system is designed for scenarios where you need to: * **Obtain short-lived credentials**: Automatically fetch temporary AWS credentials, service tokens, or API keys before workflow execution * **Configure environment variables**: Set up environment-specific configuration based on the execution context * **Initialize external services**: Connect to secret managers, credential stores, or authentication providers * **Prepare execution environments**: Ensure required resources or configurations are available before workflows run ## Quick Start ### Enabling the Plugin System The plugin system is opt-in and disabled by default. Enable it by setting the environment variable: ```bash theme={null} export PREFECT_EXPERIMENTS_PLUGINS_ENABLED=1 ``` Once enabled, plugins will automatically run whenever Prefect is imported - this includes: * Python scripts that import Prefect (`import prefect`) * CLI commands (`prefect deploy`, `prefect server start`, etc.) * Workers and agents starting up * Any process that uses Prefect This ensures your environment is properly configured before any Prefect code executes. 
### Example Plugin Here's a minimal example plugin that sets environment variables: ```python theme={null} # my_plugin/__init__.py from prefect._experimental.plugins import register_hook, HookContext, SetupResult PREFECT_PLUGIN_API_REQUIRES = ">=0.1,<1" @register_hook def setup_environment(*, ctx: HookContext) -> SetupResult: """Configure environment before Prefect starts.""" logger = ctx.logger_factory("my-plugin") # Perform authentication or configuration credentials = fetch_credentials() logger.info("Configured credentials") return SetupResult( env={ "MY_SERVICE_TOKEN": credentials.token, "MY_SERVICE_URL": "https://api.example.com", }, note="Configured my-service credentials", required=True # Abort if this plugin fails in strict mode ) def fetch_credentials(): # Your authentication logic here pass ``` Register the plugin in your `pyproject.toml`: ```toml theme={null} [project] name = "my-plugin" version = "0.1.0" dependencies = ["prefect>=3.4"] [project.entry-points."prefect.plugins"] my_plugin = "my_plugin" ``` **Install the plugin:** ```bash theme={null} pip install -e . 
``` ## Configuration ### Environment Variables Configure the plugin system with these environment variables: | Variable | Description | Default | | --------------------------------------------------- | -------------------------------------------- | ------------------ | | `PREFECT_EXPERIMENTS_PLUGINS_ENABLED` | Enable/disable the plugin system | `0` (disabled) | | `PREFECT_EXPERIMENTS_PLUGINS_ALLOW` | Comma-separated list of allowed plugin names | None (all allowed) | | `PREFECT_EXPERIMENTS_PLUGINS_DENY` | Comma-separated list of denied plugin names | None (none denied) | | `PREFECT_EXPERIMENTS_PLUGINS_SETUP_TIMEOUT_SECONDS` | Maximum time for all plugins to complete | `20` | | `PREFECT_EXPERIMENTS_PLUGINS_STRICT` | Exit if a required plugin fails | `0` (disabled) | | `PREFECT_EXPERIMENTS_PLUGINS_SAFE_MODE` | Load plugins without executing hooks | `0` (disabled) | ### Examples **Allow only specific plugins:** ```bash theme={null} export PREFECT_EXPERIMENTS_PLUGINS_ALLOW="aws-plugin,gcp-plugin" ``` **Deny problematic plugins:** ```bash theme={null} export PREFECT_EXPERIMENTS_PLUGINS_DENY="legacy-plugin" ``` **Enable strict mode for production:** ```bash theme={null} export PREFECT_EXPERIMENTS_PLUGINS_STRICT=1 ``` **Debug plugins without execution:** ```bash theme={null} export PREFECT_EXPERIMENTS_PLUGINS_SAFE_MODE=1 ``` ## Plugin API ### Hook Specification Plugins implement the `setup_environment` hook to configure the environment or run code on startup. The simplest approach is to use a decorated function: ```python theme={null} from prefect._experimental.plugins import register_hook, HookContext, SetupResult PREFECT_PLUGIN_API_REQUIRES = ">=0.1,<1" @register_hook def setup_environment(*, ctx: HookContext) -> SetupResult | None: """ Prepare process environment for Prefect. 
Args: ctx: Context with Prefect version, API URL, and logger factory Returns: SetupResult with environment variables, or None for no changes """ logger = ctx.logger_factory("my-plugin") logger.info(f"Running with Prefect {ctx.prefect_version}") # Return None if no setup needed if not should_configure(): return None # Return SetupResult with configuration return SetupResult( env={"KEY": "value"}, note="Short description", required=False # Optional ) ``` **Required Decorator**: The `@register_hook` decorator is **required** to mark your function as a plugin hook implementation. Without it, your plugin will not be discovered by the plugin system. **Entry Point**: When using the function-based pattern, your entry point should reference the module (e.g., `"my_plugin"`) rather than a specific class or instance. **Alternative: Class-Based Plugins** For plugins that need to maintain state, you can also use a class-based approach: ```python theme={null} from prefect._experimental.plugins import register_hook, HookContext, SetupResult class MyPlugin: def __init__(self): self.state = {} @register_hook def setup_environment(self, *, ctx: HookContext) -> SetupResult | None: # Access instance state return SetupResult(env={"KEY": "value"}) # Create instance at module level Plugin = MyPlugin() ``` Entry point: `"my_plugin:Plugin"` ### HookContext The context object passed to plugins: ```python theme={null} from dataclasses import dataclass from typing import Callable import logging @dataclass class HookContext: prefect_version: str # e.g., "3.0.0" api_url: str | None # Configured Prefect API URL logger_factory: Callable[[str], logging.Logger] # Create loggers ``` ### SetupResult The result returned by plugins: ```python theme={null} from dataclasses import dataclass from typing import Mapping from datetime import datetime @dataclass class SetupResult: env: Mapping[str, str] # Environment variables to set note: str | None = None # Human-readable description required: bool = False # 
Abort in strict mode if fails ``` ## Example: AWS Credentials Plugin Here's a complete example of a plugin that assumes an AWS IAM role and provides temporary credentials: ```python theme={null} # prefect_aws_setup/__init__.py from __future__ import annotations import os from datetime import timezone import botocore.session from prefect._experimental.plugins import register_hook, HookContext, SetupResult PREFECT_PLUGIN_API_REQUIRES = ">=0.1,<1" @register_hook def setup_environment(*, ctx: HookContext) -> SetupResult | None: """Assume AWS IAM role and provide temporary credentials.""" logger = ctx.logger_factory("prefect-aws-setup") role_arn = os.getenv("PREFECT_AWS_SETUP_ROLE_ARN") if not role_arn: logger.debug("PREFECT_AWS_SETUP_ROLE_ARN not set, skipping") return None profile = os.getenv("PREFECT_AWS_SETUP_PROFILE") region = os.getenv("AWS_REGION", "us-east-1") duration = int(os.getenv("PREFECT_AWS_SETUP_DURATION", "3600")) try: # Create AWS session and assume role session = botocore.session.Session(profile=profile) sts = session.create_client("sts", region_name=region) response = sts.assume_role( RoleArn=role_arn, RoleSessionName="prefect-plugin", DurationSeconds=duration ) credentials = response["Credentials"] return SetupResult( env={ "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"], "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"], "AWS_SESSION_TOKEN": credentials["SessionToken"], "AWS_REGION": region, }, note=f"Assumed role {role_arn.split('/')[-1]}", required=bool(os.getenv("PREFECT_AWS_SETUP_REQUIRED")), ) except Exception: logger.exception("Failed to assume AWS role") if os.getenv("PREFECT_AWS_SETUP_REQUIRED"): raise return None ``` **Package configuration:** ```toml theme={null} # pyproject.toml [project] name = "prefect-aws-setup" version = "0.1.0" dependencies = ["prefect>=3.4", "botocore>=1.34"] [project.entry-points."prefect.plugins"] aws_setup = "prefect_aws_setup" ``` **Installation:** ```bash theme={null} pip install -e . 
``` **Usage:** ```bash theme={null} # Configure the plugin export PREFECT_AWS_SETUP_ROLE_ARN="arn:aws:iam::123456789012:role/PrefectRole" export PREFECT_AWS_SETUP_PROFILE="my-aws-profile" export AWS_REGION="us-west-2" export PREFECT_AWS_SETUP_REQUIRED=1 # Enable plugins export PREFECT_EXPERIMENTS_PLUGINS_ENABLED=1 # Run Prefect commands - credentials are automatically configured prefect deploy --all ``` ## Diagnostics Use the diagnostic command to troubleshoot plugin issues: ```bash theme={null} prefect experimental plugins diagnose ``` **Example output:** ``` Prefect Experimental Plugin System Diagnostics Enabled: True Timeout: 20.0s Strict mode: False Safe mode: False Allow list: None Deny list: None Discoverable Plugins (entry point group: prefect.plugins) • aws-setup: active Module: prefect_aws_setup:Plugin API requirement: >=0.1,<1 Running Startup Hooks • aws-setup: success Environment variables: 4 AWS_ACCESS_KEY_ID=•••••• AWS_SECRET_ACCESS_KEY=•••••• AWS_SESSION_TOKEN=•••••• AWS_REGION=us-west-2 Note: Assumed role PrefectRole Expires: 2024-01-15 18:30:00+00:00 ``` ## Security Considerations **Security Best Practices** * Only install plugins from trusted sources * Review plugin source code before installation * Use `PREFECT_EXPERIMENTS_PLUGINS_DENY` to quarantine known-bad plugins * Sensitive values are automatically redacted in logs and diagnostics * Plugins run with the same permissions as Prefect ### Secret Redaction The plugin system automatically redacts sensitive environment variables in logs and diagnostics. Variables containing these keywords are redacted: * `SECRET` * `TOKEN` * `PASSWORD` * `KEY` Example: ```python theme={null} # This environment variable will be redacted {"AWS_SECRET_ACCESS_KEY": "••••••"} # This will not {"AWS_REGION": "us-east-1"} ``` ### Permissions Plugins execute with the same permissions as the Prefect process. 
Be aware that:

* Environment variables set by a plugin can affect all child processes
* File system access matches the Prefect process user
* Network calls are unrestricted
* Plugins can import any installed package

## Behavior & Guarantees

### Import-Time Execution

Plugins run **automatically when Prefect is imported**, before any other Prefect code executes. This means:

* Environment variables are available to all Prefect operations
* Credentials are configured before connecting to APIs
* Setup happens once per process, not per CLI command

If plugin initialization fails (and strict mode is disabled), Prefect will log the error and continue loading.

### Execution Order

* Plugins are discovered via entry points in the `prefect.plugins` group
* Execution order is not guaranteed
* Multiple plugins may modify the same environment variables (last write wins)

### Error Handling

* Plugin errors are isolated and logged
* One plugin's failure doesn't affect others
* In strict mode with `required=True`, plugin failures abort startup
* Timeouts apply globally to all plugins combined

### Async Support

Plugins can be either synchronous or asynchronous:

```python theme={null}
from prefect._experimental.plugins import register_hook, HookContext, SetupResult

# Synchronous plugin
@register_hook
def setup_environment(*, ctx: HookContext) -> SetupResult:
    # Synchronous implementation
    return SetupResult(env={"KEY": "value"})

# Asynchronous plugin
@register_hook
async def setup_environment(*, ctx: HookContext) -> SetupResult:
    # Asynchronous implementation
    data = await fetch_data()
    return SetupResult(env={"KEY": data})
```

### Idempotence

Plugins should be idempotent: multiple invocations should produce the same result. Prefect may call plugins multiple times during initialization or in different processes.
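The idempotence guidance can be sketched without any Prefect imports. This is a minimal illustration, not the plugin API itself, and the environment variable names are hypothetical: the hook reuses values an earlier invocation already exported instead of recomputing them.

```python
def setup_environment(env: dict[str, str]) -> dict[str, str]:
    """An idempotent setup hook: repeated calls converge on the same env vars."""
    # Reuse credentials exported by an earlier invocation (hypothetical names).
    if "MY_PLUGIN_TOKEN" in env:
        return {"MY_PLUGIN_TOKEN": env["MY_PLUGIN_TOKEN"]}
    # Otherwise derive them deterministically from stable inputs.
    return {"MY_PLUGIN_TOKEN": f"token-for-{env.get('MY_PLUGIN_ROLE', 'default')}"}

base = {"MY_PLUGIN_ROLE": "etl"}
first = setup_environment(base)
# A second call, seeing the merged environment, returns the same result.
second = setup_environment({**base, **first})
assert first == second
```

Hooks written this way are safe regardless of how many times Prefect invokes them, or in how many processes.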
## Version Compatibility ### Specifying Version Requirements Plugins should declare their API version compatibility using the `PREFECT_PLUGIN_API_REQUIRES` attribute: ```python theme={null} from prefect._experimental.plugins import register_hook, HookContext # At module level in your plugin PREFECT_PLUGIN_API_REQUIRES = ">=0.1,<1" # Supports API version 0.1.x class Plugin: @register_hook def setup_environment(self, *, ctx: HookContext): # Your plugin implementation pass ``` The version specifier follows [PEP 440](https://peps.python.org/pep-0440/) syntax: * `">=0.1,<1"` - Compatible with 0.1.x releases, incompatible with 1.0+ * `">=0.1"` - Compatible with 0.1 and all future versions * `"==0.1"` - Only compatible with exactly version 0.1 The current plugin API version is `0.1`. This will be incremented when breaking changes are made to the plugin interface. ### Version Validation When Prefect loads plugins, it automatically validates version compatibility: 1. **Compatible versions**: Plugin loads normally and executes 2. **Incompatible versions**: Plugin is skipped with a warning log message 3. **Invalid specifiers**: Warning is logged, but plugin loads anyway (best-effort) 4. **Missing attribute**: Defaults to `">=0.1,<1"` if not specified **Example logs:** ``` # Incompatible version WARNING: Skipping plugin my-plugin: requires API version >=1.0, current version is 0.1 # Invalid specifier DEBUG: Plugin my-plugin has invalid version specifier 'not-valid', ignoring version check ``` ### Checking Compatibility Use the diagnostics command to see plugin version requirements: ```bash theme={null} prefect experimental plugins diagnose ``` Output shows API requirements for each plugin: ``` • my-plugin: active Module: my_plugin:Plugin API requirement: >=0.1,<1 ``` ## Troubleshooting ### Plugin Not Running 1. Verify plugins are enabled: ```bash theme={null} echo $PREFECT_EXPERIMENTS_PLUGINS_ENABLED ``` 2. 
Check plugin is discoverable: ```bash theme={null} prefect experimental plugins diagnose ``` 3. Verify entry point is registered: ```bash theme={null} pip show your-plugin-package ``` ### Plugin Errors 1. Enable safe mode to test loading without execution: ```bash theme={null} PREFECT_EXPERIMENTS_PLUGINS_SAFE_MODE=1 prefect experimental plugins diagnose ``` 2. Check plugin logs: ```bash theme={null} PREFECT_LOGGING_LEVEL=DEBUG prefect --version ``` 3. Test plugin in isolation: ```bash theme={null} PREFECT_EXPERIMENTS_PLUGINS_ALLOW="your-plugin" prefect experimental plugins diagnose ``` ### Version Compatibility Issues If a plugin isn't loading due to version mismatch: 1. Check the plugin's API requirement: ```bash theme={null} prefect experimental plugins diagnose ``` Look for warnings about incompatible API versions 2. Update the plugin to a version compatible with your Prefect installation, or update Prefect to match the plugin's requirements 3. If you're developing a plugin, adjust `PREFECT_PLUGIN_API_REQUIRES` to match the current API version: ```python theme={null} PREFECT_PLUGIN_API_REQUIRES = ">=0.1,<1" # Current API version is 0.1 ``` 4. Enable debug logging to see detailed version validation: ```bash theme={null} PREFECT_LOGGING_LEVEL=DEBUG prefect --version ``` ### Timeout Issues Increase the timeout for slow operations: ```bash theme={null} export PREFECT_EXPERIMENTS_PLUGINS_SETUP_TIMEOUT_SECONDS=60 ``` Or optimize plugin startup time by deferring expensive operations. # Configure UI forms for validating workflow inputs Source: https://docs.prefect.io/v3/advanced/form-building Learn how to craft validated and user-friendly input forms for workflows. Parameterizing workflows is a critical part of orchestration. It allows you to create contracts between modular workflows in your organization and empower less-technical users to interact with your workflows intuitively. 
[Pydantic](https://docs.pydantic.dev/) is a powerful library for data validation using Python type annotations, which is used by Prefect to build a parameter schema for your workflow.

This allows you to:

* check runtime parameter values against the schema (from the UI or the SDK)
* build a user-friendly form in the Prefect UI
* easily reuse parameter types in similar workflows

In this tutorial, we'll craft a workflow signature that the Prefect UI will render as a self-documenting form.

## Motivation

Let's say you have a workflow that triggers a marketing email blast which looks like:

```python theme={null}
@flow
def send_marketing_email(
    mailing_lists: list[str],
    subject: str,
    body: str,
    test_mode: bool = False,
    attachments: list[str] | None = None
):
    """
    Send a marketing email blast to the given lists.

    Args:
        mailing_lists: A list of lists to email.
        subject: The subject of the email.
        body: The body of the email.
        test_mode: Whether to send a test email.
        attachments: A list of attachments to include in the email.
    """
    ...
```

When you deploy this flow, Prefect will automatically inspect your function signature and generate a form for you:

initial form

This is good enough for many cases, but consider these additional constraints that could arise from business needs or tech stack restrictions:

* there are only a few valid values for `mailing_lists`
* the `subject` must not exceed 30 characters
* no more than 5 `attachments` are allowed

You *can* simply check these constraints in the body of your flow function:

```python theme={null}
@flow
def send_marketing_email(...):
    if len(subject) > 30:
        raise ValueError("Subject must be less than 30 characters")
    if any(ml not in ["newsletter", "customers", "beta-testers"] for ml in mailing_lists):
        raise ValueError("Invalid list to email")
    if attachments and len(attachments) > 5:
        raise ValueError("Too many attachments")
    # etc...
``` but there are several downsides to this: * you have to spin up the infrastructure associated with your flow in order to check the constraints, which is wasteful if it turns out that bad parameters were provided * this might get duplicative, especially if you have similarly constrained parameters for different workflows To improve on this, we will use `pydantic` to build a convenient, self-documenting, and reusable flow signature that the Prefect UI can build a better form from. ## Building a convenient flow signature Let's address the constraints on `mailing_lists`, `subject`, and `attachments`. ### Using `Literal` to restrict valid values > there are only a few valid values for `mailing_lists` Say our valid mailing lists are: `["newsletter", "customers", "beta-testers"]` We can define a `Literal` to specify the valid values for the `mailing_lists` parameter. ```python theme={null} from typing import Literal MailingList = Literal["newsletter", "customers", "beta-testers"] ``` You can use an `Enum` to achieve the same effect. ```python theme={null} from enum import Enum class MailingList(Enum): NEWSLETTER = "newsletter" CUSTOMERS = "customers" BETA_TESTERS = "beta-testers" ``` ### Using a `BaseModel` subclass to group and constrain parameters Both the `subject` and `attachments` parameters have constraints that we want to enforce. > the `subject` must not exceed 30 characters > the `attachments` must not exceed 5 items Additionally, the `subject`, `body`, and `attachments` parameters are all related to the same thing: the content of the email. We can define a `BaseModel` subclass to group these parameters together and apply these constraints. ```python theme={null} from pydantic import BaseModel, Field class EmailContent(BaseModel): subject: str = Field(max_length=30) body: str = Field(default=...) 
    attachments: list[str] = Field(default_factory=list, max_length=5)
```

`pydantic.Field` accepts a `description` kwarg that is displayed in the form above the field input.

```python theme={null}
subject: str = Field(description="The subject of the email", max_length=30)
```

field description

Similarly, you can:

* pass `title` to `Field` to override the field name in the form
* define a docstring for `EmailContent` to add a description to this group of parameters in the form

### Rewriting the flow signature

Now that we have defined the `MailingList` and `EmailContent` types, we can use them in our flow signature:

```python theme={null}
@flow
def send_marketing_email(
    mailing_lists: list[MailingList],
    content: EmailContent,
    test_mode: bool = False,
):
    ...
```

The resulting form looks like this:

improved form

where the `mailing_lists` parameter renders as a multi-select dropdown that only allows the `Literal` values from our `MailingList` type.

multi-select

and any constraints you've defined on the `EmailContent` fields will be enforced before the run is submitted.

early validation failure toast

```python theme={null}
from typing import Literal

from prefect import flow
from pydantic import BaseModel, Field

MailingList = Literal["newsletter", "customers", "beta-testers"]

class EmailContent(BaseModel):
    subject: str = Field(max_length=30)
    body: str = Field(default=...)
    attachments: list[str] = Field(default_factory=list, max_length=5)

@flow
def send_marketing_email(
    mailing_lists: list[MailingList],
    content: EmailContent,
    test_mode: bool = False,
):
    pass

if __name__ == "__main__":
    send_marketing_email.serve()
```

### Using `json_schema_extra` to order fields in the form

By default, your flow parameters are rendered in the order defined by your `@flow` function signature.
Within a given `BaseModel` subclass, parameters are rendered in the following order:

* parameters with a `default` value are rendered first, alphabetically
* parameters without a `default` value are rendered next, alphabetically

You can control the order of the parameters within a `BaseModel` subclass by passing `json_schema_extra` to the `Field` constructor with a `position` key.

Taking our `EmailContent` model from the previous example, let's enforce that `subject` should be displayed first, then `body`, then `attachments`.

```python theme={null}
class EmailContent(BaseModel):
    subject: str = Field(
        max_length=30,
        description="The subject of the email",
        json_schema_extra=dict(position=0),
    )
    body: str = Field(default=..., json_schema_extra=dict(position=1))
    attachments: list[str] = Field(
        default_factory=list,
        max_length=5,
        json_schema_extra=dict(position=2),
    )
```

The resulting form looks like this:

custom form layout

## Using callable and class parameters

If your parameter model includes `Callable` or `Type` fields, Prefect can't serialize them to JSON. The UI shows an opaque placeholder instead of the actual values, and automation templates can't access individual fields.

Pydantic's [`ImportString`](https://docs.pydantic.dev/latest/api/types/#pydantic.types.ImportString) type solves this. It accepts a dotted import path as a string (e.g. `"mymodule.my_func"`), resolves it to the real Python object at validation time, and serializes back to a string for JSON.
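As a minimal illustration of the resolution step, using a standard-library function rather than a real workflow helper, `ImportString` turns a dotted path into the object it names at validation time:

```python
from pydantic import BaseModel, ImportString

class HandlerConfig(BaseModel):
    # The string "math.sqrt" is resolved to the actual function on validation.
    handler: ImportString

cfg = HandlerConfig(handler="math.sqrt")
assert cfg.handler(9) == 3.0  # the field now holds a real callable
```

The same mechanism works for classes and any other object reachable by dotted import path.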
For example, an order ingestion flow that needs a different normalizer per vendor: ```python vendors.py theme={null} from datetime import datetime from typing import Any from pydantic import BaseModel class Order(BaseModel): order_id: str customer_email: str total_cents: int currency: str placed_at: datetime class StripeCharge(BaseModel): id: str receipt_email: str amount: int currency: str created: int class ShopifyOrder(BaseModel): name: str email: str total_price: str currency: str created_at: str def normalize_stripe(records: list[dict[str, Any]]) -> list[dict[str, Any]]: return [ { "order_id": r["id"], "customer_email": r["receipt_email"], "total_cents": r["amount"], "currency": r["currency"], "placed_at": datetime.fromtimestamp(r["created"]).isoformat(), } for r in records ] def normalize_shopify(records: list[dict[str, Any]]) -> list[dict[str, Any]]: return [ { "order_id": r["name"], "customer_email": r["email"], "total_cents": int(float(r["total_price"]) * 100), "currency": r["currency"], "placed_at": r["created_at"], } for r in records ] ``` Use `ImportString` in the parameter model so the normalizer and raw schema are editable strings in the UI, but resolve to real Python objects at runtime: ```python pipeline.py theme={null} from typing import Any, Callable, Type from pydantic import BaseModel, ImportString, TypeAdapter from prefect import flow, task class IngestConfig(BaseModel): vendor: str normalizer: ImportString[Callable[[list[dict[str, Any]]], list[dict[str, Any]]]] raw_schema: ImportString[Type[BaseModel]] @task def fetch_raw_records(vendor: str) -> list[dict[str, Any]]: ... 
@task def validate_raw( records: list[dict[str, Any]], schema: type[BaseModel] ) -> list[BaseModel]: adapter = TypeAdapter(list[schema]) return adapter.validate_python(records) @task def normalize( records: list[dict[str, Any]], normalizer: Callable[[list[dict[str, Any]]], list[dict[str, Any]]], ) -> list[dict[str, Any]]: return normalizer(records) @flow(flow_run_name="ingest-{config.vendor}", log_prints=True) def ingest_orders(config: IngestConfig): raw = fetch_raw_records(config.vendor) validated_raw = validate_raw(raw, config.raw_schema) orders = normalize(raw, config.normalizer) print(f"vendor: {config.vendor}, validated: {len(validated_raw)}, produced: {len(orders)}") if __name__ == "__main__": ingest_orders.serve( name="order-ingestion", parameters={ "config": { "vendor": "stripe", "normalizer": "vendors.normalize_stripe", "raw_schema": "vendors.StripeCharge", } }, ) ``` Anyone can override the vendor config when triggering a run: ```bash theme={null} prefect deployment run 'ingest-orders/order-ingestion' \ -p 'config={ "vendor": "shopify", "normalizer": "vendors.normalize_shopify", "raw_schema": "vendors.ShopifyOrder" }' ``` The server stores clean JSON that the UI and automations can read: ```json theme={null} { "config": { "vendor": "shopify", "normalizer": "vendors.normalize_shopify", "raw_schema": "vendors.ShopifyOrder" } } ``` `ImportString` requires that the referenced object is importable by dotted path. Lambdas, closures, and objects defined in `__main__` won't work — move them to a named module instead. 
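That importability requirement is enforced at validation time, so a bad path is rejected before any flow code runs. A quick sketch (the failing module name below is deliberately nonexistent):

```python
from pydantic import BaseModel, ImportString, ValidationError

class Config(BaseModel):
    normalizer: ImportString

# A path that cannot be imported fails validation immediately...
try:
    Config(normalizer="no_such_module.missing_function")
except ValidationError:
    print("rejected before any flow code ran")

# ...while a valid dotted path resolves to the real object.
assert Config(normalizer="json.loads").normalizer('{"ok": true}') == {"ok": True}
```

This is the same early-validation behavior the Prefect UI relies on when a run is submitted with an `ImportString` parameter.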
## Recap We have now embedded the constraints on our parameters in the types that describe our flow signature, which means: * the UI can enforce these constraints before the run is submitted - **less wasted infra cycles** * workflow inputs are **self-documenting**, both in the UI and in the code defining your workflow * the types used in this signature can be **easily reused** for other similar workflows ## Debugging and related resources As you craft a schema for your flow signature, you may want to inspect the raw OpenAPI schema that `pydantic` generates, as it is what the Prefect UI uses to build the form. Call `model_json_schema()` on your `BaseModel` subclass to inspect the raw schema. ```python theme={null} from rich import print as pprint from pydantic import BaseModel, Field class EmailContent(BaseModel): subject: str = Field(max_length=30) body: str = Field(default=...) attachments: list[str] = Field(default_factory=list, max_length=5) pprint(EmailContent.model_json_schema()) ``` ``` { 'properties': { 'subject': {'maxLength': 30, 'title': 'Subject', 'type': 'string'}, 'body': {'title': 'Body', 'type': 'string'}, 'attachments': {'items': {'type': 'string'}, 'maxItems': 5, 'title': 'Attachments', 'type': 'array'} }, 'required': ['subject', 'body'], 'title': 'EmailContent', 'type': 'object' } ``` For more on constrained types and validation features available in `pydantic`, see their documentation on [models](https://docs.pydantic.dev/latest/concepts/models/) and [types](https://docs.pydantic.dev/latest/concepts/types/). # How to generate a custom SDK for your deployments Source: https://docs.prefect.io/v3/advanced/generate-custom-sdk Generate a custom Python SDK from your deployments for IDE autocomplete and type checking. The `prefect sdk generate` command creates a typed Python file from your [deployments](/v3/concepts/deployments). This gives you IDE autocomplete and static type checking when triggering deployment runs programmatically. 
This feature is in **beta**. APIs may change in future releases. ## Prerequisites * An active Prefect API connection (Prefect Cloud or self-hosted server) * At least one [deployment](/v3/how-to-guides/deployments/create-deployments) in your workspace ## Generate an SDK from the CLI Generate a typed SDK for all deployments in your workspace: ```bash theme={null} prefect sdk generate --output ./my_sdk.py ``` ### Filter to specific flows or deployments Generate an SDK for specific flows: ```bash theme={null} prefect sdk generate --output ./my_sdk.py --flow my-etl-flow ``` Generate an SDK for specific deployments: ```bash theme={null} prefect sdk generate --output ./my_sdk.py --deployment my-flow/production ``` Combine multiple filters: ```bash theme={null} prefect sdk generate --output ./my_sdk.py \ --flow etl-flow \ --flow data-sync \ --deployment analytics/daily ``` ## Run deployments with the generated SDK The generated SDK provides a `deployments.from_name()` method that returns a typed deployment object: ```python theme={null} from my_sdk import deployments # Get a deployment by name deployment = deployments.from_name("my-etl-flow/production") # Run with parameters future = deployment.run( source="s3://my-bucket/data", batch_size=100, ) # Get the flow run ID immediately print(f"Started flow run: {future.flow_run_id}") # Wait for completion and get result result = future.result() ``` ### Configure run options Use `with_options()` to set tags, scheduling, and other run configuration: ```python theme={null} from my_sdk import deployments from datetime import datetime, timedelta future = deployments.from_name("my-etl-flow/production").with_options( tags=["manual", "production"], idempotency_key="daily-run-2024-01-15", scheduled_time=datetime.now() + timedelta(hours=1), flow_run_name="custom-run-name", ).run( source="s3://bucket", ) ``` Available options: * `tags`: Tags to apply to the flow run * `idempotency_key`: Unique key to prevent duplicate runs * 
`work_queue_name`: Override the work queue * `as_subflow`: Run as a subflow of the current flow * `scheduled_time`: Schedule the run for a future time * `flow_run_name`: Custom name for the flow run ### Override job variables Use `with_infra()` to override work pool job variables: ```python theme={null} from my_sdk import deployments future = deployments.from_name("my-etl-flow/production").with_infra( image="my-registry/my-image:latest", cpu_request="2", memory="8Gi", ).run( source="s3://bucket", ) ``` The available job variables depend on your work pool type. The generated SDK provides type hints for the options available on each deployment's work pool. ### Async usage In an async context, use `run_async()`: ```python theme={null} import asyncio from my_sdk import deployments async def trigger_deployment(): future = await deployments.from_name("my-etl-flow/production").run_async( source="s3://bucket", ) result = await future.result() return result # Run it result = asyncio.run(trigger_deployment()) ``` ### Chain methods together ```python theme={null} from my_sdk import deployments future = ( deployments.from_name("my-etl-flow/production") .with_options(tags=["production"]) .with_infra(memory="8Gi") .run(source="s3://bucket", batch_size=100) ) ``` ## Regenerate the SDK after changes The SDK is generated from server-side metadata. Regenerate it when: * Deployments are added, removed, or renamed * Flow parameter schemas change * Work pool job variable schemas change The `generate` command overwrites the existing file: ```bash theme={null} prefect sdk generate --output ./my_sdk.py ``` Add SDK regeneration to your CI/CD pipeline to keep it in sync with your deployments. 
## Further reading

* [Create deployments](/v3/how-to-guides/deployments/create-deployments)
* [Trigger ad-hoc deployment runs](/v3/how-to-guides/deployments/run-deployments)
* [Override job configuration](/v3/how-to-guides/deployments/customize-job-variables)

# Advanced

Source: https://docs.prefect.io/v3/advanced/index

## Sections

* Learn advanced workflow patterns and optimization techniques.
* Learn advanced patterns for working with events, triggers, and automations.
* Learn advanced infrastructure management and deployment strategies.
* Learn advanced strategies for managing your data platform.
* Learn how to scale self-hosted Prefect deployments for high availability.
* Learn how to extend Prefect with custom blocks and API integrations.

# How to manage Prefect resources using Infrastructure as Code

Source: https://docs.prefect.io/v3/advanced/infrastructure-as-code

Declaratively manage Prefect resources with additional tools

You can manage many Prefect resources with tools like [Terraform](https://www.terraform.io/) and [Helm](https://helm.sh/). These tools are a viable alternative to Prefect's CLI and UI.

## Terraform

The Prefect Terraform provider is maintained by the Prefect team and is undergoing active development to reach [parity with the Prefect API](https://github.com/PrefectHQ/terraform-provider-prefect/milestone/1). Its documentation covers all Prefect resources the provider supports. The Prefect team welcomes contributions, feature requests, and bug reports via our [issue tracker](https://github.com/PrefectHQ/terraform-provider-prefect/issues).
### Terraform Modules

Prefect maintains several Terraform modules to help you get started with common infrastructure patterns:

* [Bucket Sensors for AWS, Azure, and GCP](https://github.com/PrefectHQ/terraform-prefect-bucket-sensor)
* [ECS Worker on AWS Fargate](https://github.com/PrefectHQ/terraform-prefect-ecs-worker)
* [ACI Worker on Azure Container Instances](https://github.com/PrefectHQ/terraform-prefect-aci-worker)

## Pulumi

Prefect does not maintain an official Pulumi package. However, you can use Pulumi's terraform-provider to automatically generate a Pulumi SDK from the Prefect Terraform provider. For details, refer to the [Pulumi documentation on Terraform providers](https://www.pulumi.com/registry/packages/terraform-provider/). You will need Pulumi version >= 3.147.0.

In this example, we will use Pulumi to deploy a flow to Prefect. Prefect recommends using `uv` for managing Python dependencies, so this example shows how to set up a Pulumi project using the `uv` toolchain.

To create a new Python Pulumi project using the `uv` toolchain, run the following command:

```bash theme={null}
pulumi new python \
  --yes \
  --generate-only \
  --name "my-prefect-pulumi-project" \
  --description "A Pulumi project to manage Prefect resources" \
  --runtime-options toolchain=uv \
  --runtime-options virtualenv=.venv
```

Don't name your project any of the following, otherwise you will have package name conflicts:

* `pulumi`
* `prefect`
* `pulumi-prefect`

An explanation of the [flags](https://www.pulumi.com/docs/iac/cli/commands/pulumi_new/#options) used:

* The `--yes` flag skips the interactive prompts and accepts the defaults. You can omit this flag, or edit the generated `Pulumi.yaml` file later to customize your project settings.
* The `--generate-only` flag just creates a new Pulumi project. It does not create a stack, save config, or install dependencies.
* The `--name` and `--description` flags set the name and description of your Pulumi project.
* The `--runtime-options toolchain=uv` and `--runtime-options virtualenv=.venv` flags configure the Pulumi project to use the `uv` toolchain instead of the default, `pip`.

To finish setting up your new Pulumi project, navigate to the project directory and install the dependencies:

```bash theme={null}
pulumi install
```

If you already have a Pulumi project, you can switch to the `uv` toolchain by updating the `runtime` settings in your `Pulumi.yaml` file as shown below:

```yaml Pulumi.yaml theme={null}
# other project settings...
runtime:
  name: python
  options:
    toolchain: uv
    virtualenv: .venv
```

This configures your Pulumi project to use the `uv` toolchain and the virtual environment located at `.venv`. Run the following to update Pulumi to use the `uv` toolchain:

```bash theme={null}
pulumi install
```

### Managing Resources with Pulumi

To manage resources with Pulumi, add the Prefect Terraform provider to your Pulumi project:

```bash theme={null}
pulumi package add terraform-provider prefecthq/prefect
```

Optionally, you can specify a specific version, e.g.:

```bash theme={null}
pulumi package add terraform-provider prefecthq/prefect 2.90.0
```

This will auto-generate a `pulumi-prefect` Python package. The code will be placed in the `sdks/prefect` directory inside the Pulumi project.

### Example: Deploying a Flow with Pulumi

This simple example shows you how to deploy a flow to Prefect.

```python theme={null}
import json

import pulumi
import pulumi_prefect as prefect
from prefect.utilities.callables import parameter_schema
from typing import Callable, Any

# Import your flow here
from my_flow import test as example_flow

def generate_openapi_schema_for_flow(flow_obj: Callable[..., Any]) -> str:
    """
    Utility function to generate an OpenAPI schema for a flow's parameters.
    This is used to provide type information for deployments created via Pulumi.
    See also:

    * `parameter_schema`
    * `model_dump_for_openapi`
    """
    return json.dumps(parameter_schema(flow_obj).model_dump_for_openapi())

# Configure the Provider
provider = prefect.Provider(
    "prefect",
    # endpoint="https://api.prefect.cloud/api/account//workspace/",
    # api_key="",  # or use pulumi.Config to manage secrets
)

# Register the Flow
flow = prefect.Flow(
    "example-flow",
    name="example-flow",
    tags=["example", "pulumi"],
    opts=pulumi.ResourceOptions(provider=provider)
)

# Create a Deployment resource
deployment = prefect.Deployment(
    "example-deployment",
    name="example-deployment",
    flow_id=flow.id,
    work_pool_name="example-work-pool",
    work_queue_name="default",
    parameters=json.dumps({"foo": "bar"}),
    tags=["example", "pulumi"],
    enforce_parameter_schema=True,
    parameter_openapi_schema=generate_openapi_schema_for_flow(example_flow),
    opts=pulumi.ResourceOptions(provider=provider),
    # specify how to get the flow code to the worker
    # the Deployment resource does not have the same support as `prefect.yaml` for automatically packaging flow code
    # see: https://registry.terraform.io/providers/PrefectHQ/prefect/latest/docs/resources/deployment#deployment-actions
    # option 1: clone the repo at runtime
    # pull_steps = [
    #     {
    #         "type": "git_clone",
    #         "repository": "https://github.com/some/repo",
    #         "branch": "main",
    #         "include_submodules": True,
    #     }
    # ],
    # entrypoint="flow.py:hello_flow",
    # option 2: use a pre-built container image
    # note: you will need to build this image yourself and push it to a registry
    # job_variables = json.dumps({
    #     "image": "example.registry.com/example-repo/example-image:v1"
    # })
)

# Add a schedule to the deployment to run at the start of every hour
schedule = prefect.DeploymentSchedule(
    "example-schedule",
    deployment_id=deployment.id,
    active=True,
    cron="0 * * * *",
    timezone="UTC",
    opts=pulumi.ResourceOptions(provider=provider),
)
```

Now you can run `pulumi up` to create the resources in your Prefect workspace.
## Helm

Each Helm chart subdirectory contains usage documentation. There are two main charts:

* The `prefect-server` chart is used to deploy a Prefect server. This is an alternative to using [Prefect Cloud](https://app.prefect.cloud/).
* The `prefect-worker` chart is used to deploy a [Prefect worker](/v3/deploy/infrastructure-concepts/workers).

Finally, there is a `prefect-prometheus-exporter` chart that is used to deploy a Prometheus exporter, exposing Prefect metrics for monitoring and alerting.

# How to write interactive workflows

Source: https://docs.prefect.io/v3/advanced/interactive

Flows can pause or suspend execution and automatically resume when they receive type-checked input in Prefect's UI. Flows can also send and receive type-checked input at any time while running—without pausing or suspending. This guide explains how to use these features to build *interactive workflows*.

## Pause or suspend a flow until it receives input

You can pause or suspend a flow until it receives input from a user in Prefect's UI. This is useful when you need to ask for additional information or feedback before resuming a flow. These workflows are often called [human-in-the-loop](https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems) (HITL) systems.

**Human-in-the-loop interactivity**

Approval workflows that pause to ask a human to confirm whether a workflow should continue are very common in the business world. Certain types of [machine learning training](https://link.springer.com/article/10.1007/s10462-022-10246-w) and artificial intelligence workflows benefit from incorporating HITL design.

### Wait for input

To receive input while paused or suspended, use the `wait_for_input` parameter in the `pause_flow_run` or `suspend_flow_run` functions.
This parameter accepts one of the following:

* A built-in type like `int` or `str`, or a built-in collection like `List[int]`
* A `pydantic.BaseModel` subclass
* A subclass of `prefect.input.RunInput`

**When to use a `RunInput` or `BaseModel` instead of a built-in type**

There are a few reasons to use a `RunInput` or `BaseModel`. The first is that when you let Prefect automatically create one of these classes for your input type, the field that users see in Prefect's UI when they click "Resume" on a flow run is named `value` and has no help text to suggest what the field is. If you create a `RunInput` or `BaseModel`, you can change details like the field name, help text, and default value, and users see those reflected in the "Resume" form.

The simplest way to pause or suspend and wait for input is to pass a built-in type:

```python theme={null}
from prefect import flow
from prefect.flow_runs import pause_flow_run
from prefect.logging import get_run_logger

@flow
def greet_user():
    logger = get_run_logger()

    user = pause_flow_run(wait_for_input=str)

    logger.info(f"Hello, {user}!")
```

In this example, the flow run pauses until a user clicks the Resume button in the Prefect UI, enters a name, and submits the form.

**Types you can pass for `wait_for_input`**

When you pass a built-in type such as `int` as an argument for the `wait_for_input` parameter to `pause_flow_run` or `suspend_flow_run`, Prefect automatically creates a Pydantic model containing one field annotated with the type you specified. This means you can use [any type annotation that Pydantic accepts for model fields](https://docs.pydantic.dev/1.10/usage/types/) with these functions.

The auto-generated field is always named `value`.
This matters when resuming a paused flow run programmatically with `resume_flow_run()`—you must provide the input as a dictionary with that field name: ```python theme={null} from prefect.flow_runs import resume_flow_run # When the flow was paused with pause_flow_run(wait_for_input=str), resume with: resume_flow_run(flow_run_id, run_input={"value": "Alice"}) ``` To use a different field name, pass a `RunInput` or `BaseModel` class to `wait_for_input` instead of a built-in type. Instead of a built-in type, you can pass in a `pydantic.BaseModel` class. This is useful if you already have a `BaseModel` you want to use: ```python theme={null} from prefect import flow from prefect.flow_runs import pause_flow_run from prefect.logging import get_run_logger from pydantic import BaseModel class User(BaseModel): name: str age: int @flow async def greet_user(): logger = get_run_logger() user = await pause_flow_run(wait_for_input=User) logger.info(f"Hello, {user.name}!") ``` **`BaseModel` classes are upgraded to `RunInput` classes automatically** When you pass a `pydantic.BaseModel` class as the `wait_for_input` argument to `pause_flow_run` or `suspend_flow_run`, Prefect automatically creates a `RunInput` class with the same behavior as your `BaseModel` and uses that instead. `RunInput` classes contain extra logic that allows flows to send and receive them at runtime. You shouldn't notice any difference. For advanced use cases such as overriding how Prefect stores flow run inputs, create a `RunInput` class: ```python theme={null} from prefect import flow from prefect.logging import get_run_logger from prefect.input import RunInput class UserInput(RunInput): name: str age: int # Imagine overridden methods here. 
def override_something(self, *args, **kwargs): super().override_something(*args, **kwargs) @flow async def greet_user(): logger = get_run_logger() user = await pause_flow_run(wait_for_input=UserInput) logger.info(f"Hello, {user.name}!") ``` ### Provide initial data Set default values for fields in your model with the `with_initial_data` method. This is useful for providing default values for the fields in your own `RunInput` class. Expanding on the example above, you can make the `name` field default to "anonymous": ```python theme={null} from prefect import flow from prefect.logging import get_run_logger from prefect.input import RunInput class UserInput(RunInput): name: str age: int @flow async def greet_user(): logger = get_run_logger() user_input = await pause_flow_run( wait_for_input=UserInput.with_initial_data(name="anonymous") ) if user_input.name == "anonymous": logger.info("Hello, stranger!") else: logger.info(f"Hello, {user_input.name}!") ``` When a user sees the form for this input, the name field contains "anonymous" as the default. ### Provide a description with runtime data You can provide a dynamic, Markdown description that appears in the Prefect UI when the flow run pauses. This feature enables context-specific prompts, enhancing clarity and user interaction. Building on the example above: ```python theme={null} from datetime import datetime from prefect import flow from prefect.flow_runs import pause_flow_run from prefect.logging import get_run_logger from prefect.input import RunInput class UserInput(RunInput): name: str age: int @flow async def greet_user(): logger = get_run_logger() current_date = datetime.now().strftime("%B %d, %Y") description_md = f""" **Welcome to the User Greeting Flow!** Today's Date: {current_date} Please enter your details below: - **Name**: What should we call you? - **Age**: Just a number, nothing more. 
""" user_input = await pause_flow_run( wait_for_input=UserInput.with_initial_data( description=description_md, name="anonymous" ) ) if user_input.name == "anonymous": logger.info("Hello, stranger!") else: logger.info(f"Hello, {user_input.name}!") ``` When a user sees the form for this input, the given Markdown appears above the input fields. ### Handle custom validation Prefect uses the fields and type hints on your `RunInput` or `BaseModel` class to validate the general structure of input your flow receives. If you require more complex validation, use Pydantic [model\_validators](https://docs.pydantic.dev/latest/concepts/validators/#model-validators). **Calling custom validation runs after the flow resumes** Prefect transforms the type annotations in your `RunInput` or `BaseModel` class to a JSON schema and uses that schema in the UI for client-side validation. However, custom validation requires running *Python* logic defined in your `RunInput` class. Because of this, validation happens *after the flow resumes*, so you should handle it explicitly in your flow. Continue reading for an example best practice. The following is an example `RunInput` class that uses a custom `model_validator`: ```python theme={null} from typing import Literal import pydantic from prefect.input import RunInput class ShirtOrder(RunInput): size: Literal["small", "medium", "large", "xlarge"] color: Literal["red", "green", "black"] @pydantic.model_validator(mode="after") def validate_age(self): if self.color == "green" and self.size == "small": raise ValueError( "Green is only in-stock for medium, large, and XL sizes." ) return self ``` In the example, we use Pydantic's `model_validator` decorator to define custom validation for our `ShirtOrder` class. 
You can use it in a flow like this: ```python theme={null} from typing import Literal import pydantic from prefect import flow, pause_flow_run from prefect.input import RunInput class ShirtOrder(RunInput): size: Literal["small", "medium", "large", "xlarge"] color: Literal["red", "green", "black"] @pydantic.model_validator(mode="after") def validate_size_color(self): if self.color == "green" and self.size == "small": raise ValueError( "Green is only in-stock for medium, large, and XL sizes." ) return self @flow def get_shirt_order(): shirt_order = pause_flow_run(wait_for_input=ShirtOrder) ``` If a user chooses any size and color combination other than `small` and `green`, the flow run resumes successfully. However, if the user chooses size `small` and color `green`, the flow run will resume, and `pause_flow_run` raises a `ValidationError` exception. This causes the flow run to fail and log the error. To avoid a flow run failure, use a `while` loop and pause again if the `ValidationError` exception is raised: ```python theme={null} from typing import Literal import pydantic from prefect import flow from prefect.flow_runs import pause_flow_run from prefect.logging import get_run_logger from prefect.input import RunInput class ShirtOrder(RunInput): size: Literal["small", "medium", "large", "xlarge"] color: Literal["red", "green", "black"] @pydantic.model_validator(mode="after") def validate_size_color(self): if self.color == "green" and self.size == "small": raise ValueError( "Green is only in-stock for medium, large, and XL sizes." ) return self @flow def get_shirt_order(): logger = get_run_logger() shirt_order = None while shirt_order is None: try: shirt_order = pause_flow_run(wait_for_input=ShirtOrder) except pydantic.ValidationError as exc: logger.error(f"Invalid size and color combination: {exc}") logger.info( f"Shirt order: {shirt_order.size}, {shirt_order.color}" ) ``` This code causes the flow run to continually pause until the user enters a valid size and color combination.
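Because the validator is plain Pydantic, you can also exercise the same logic on its own, outside any flow. Here is a minimal sketch using pure `pydantic` (no Prefect required; Prefect upgrades a `BaseModel` like this to a `RunInput` automatically) that checks both an accepted and a rejected combination:

```python
from typing import Literal

import pydantic


# A standalone sketch of the same validator logic using plain pydantic.
# Handy for unit-testing the model before wiring it into a flow.
class ShirtOrder(pydantic.BaseModel):
    size: Literal["small", "medium", "large", "xlarge"]
    color: Literal["red", "green", "black"]

    @pydantic.model_validator(mode="after")
    def validate_size_color(self):
        if self.color == "green" and self.size == "small":
            raise ValueError(
                "Green is only in-stock for medium, large, and XL sizes."
            )
        return self


# Any combination other than small + green passes validation.
valid_order = ShirtOrder(size="large", color="green")

# small + green fails the custom validator with a ValidationError.
try:
    ShirtOrder(size="small", color="green")
    rejected = False
except pydantic.ValidationError:
    rejected = True
```

Testing the model in isolation like this makes it easier to iterate on validation rules without repeatedly pausing and resuming flow runs.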
As an additional step, you can use an [automation](/v3/automate/events/automations-triggers) to alert the user to the error. ## Send and receive input at runtime Use the `send_input` and `receive_input` functions to send input to a flow or receive input from a flow at runtime. You don't need to pause or suspend the flow to send or receive input. **Reasons to send or receive input without pausing or suspending** You might want to send or receive input without pausing or suspending in scenarios where the flow run is designed to handle real-time data. For example, in a live monitoring system, you might need to update certain parameters based on the incoming data without interrupting the flow. Another example is having a long-running flow that continually responds to runtime input with low latency. For example, if you're building a chatbot, you could have a flow that starts a GPT Assistant and manages a conversation thread. The most important parameter to the `send_input` and `receive_input` functions is `run_type`, which should be one of the following: * A built-in type such as `int` or `str` * A `pydantic.BaseModel` class * A `prefect.input.RunInput` class **When to use a `BaseModel` or `RunInput` instead of a built-in type** Most built-in types and collections of built-in types should work with `send_input` and `receive_input`, but there is a caveat with nested collection types, such as lists of tuples. For example, `List[Tuple[str, float]]`. In this case, validation may happen after your flow receives the data, so calling `receive_input` may raise a `ValidationError`. Plan to catch this exception, and consider placing the field in an explicit `BaseModel` or `RunInput` so your flow only receives exact type matches. See examples below of `receive_input`, `send_input`, and the two functions working together.
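As a sketch of that workaround, the hypothetical `Readings` model below (plain `pydantic`; Prefect would upgrade such a class to a `RunInput` when passed to `send_input`/`receive_input`) wraps a `List[Tuple[str, float]]` in an explicit, named field so validation is predictable:

```python
from typing import List, Tuple

from pydantic import BaseModel, ValidationError


# Hypothetical model wrapping the nested collection in an explicit,
# named field. A receiving flow using a model like this only gets
# values that validate as this model, rather than raw collections.
class Readings(BaseModel):
    samples: List[Tuple[str, float]]


# Well-formed input validates and coerces cleanly.
ok = Readings(samples=[("temperature", 21.5), ("humidity", 0.4)])

# A bad inner value is caught at validation time rather than
# surfacing as a surprise ValidationError from receive_input.
try:
    Readings(samples=[("temperature", "not-a-number")])
    rejected = False
except ValidationError:
    rejected = True
```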
### Receiving input The following flow uses `receive_input` to continually receive names and print a personalized greeting for each name it receives: ```python theme={null} from prefect import flow from prefect.input.run_input import receive_input @flow async def greeter_flow(): async for name_input in receive_input(str, timeout=None): # Prints "Hello, andrew!" if another flow sent "andrew" print(f"Hello, {name_input}!") ``` When you pass a type such as `str` into `receive_input`, Prefect creates a `RunInput` class to manage your input automatically. When a flow sends input of this type, Prefect uses the `RunInput` class to validate the input. If the validation succeeds, your flow receives the input in the type you specified. In this example, if the flow received a valid string as input, the variable `name_input` contains the string value. If, instead, you pass a `BaseModel`, Prefect upgrades your `BaseModel` to a `RunInput` class, and the variable your flow sees (in this case, `name_input`) is a `RunInput` instance that behaves like a `BaseModel`. If you pass in a `RunInput` class, no upgrade is needed and you'll get a `RunInput` instance. A simpler approach is to pass types such as `str` into `receive_input`. If you need access to the generated `RunInput` that contains the received value, pass `with_metadata=True` to `receive_input`: ```python theme={null} from prefect import flow from prefect.input.run_input import receive_input @flow async def greeter_flow(): async for name_input in receive_input( str, timeout=None, with_metadata=True ): # Input will always be in the field "value" on this object. print(f"Hello, {name_input.value}!") ``` **When to use `with_metadata=True`** The primary reasons to access the `RunInput` object for a received input are to respond to the sender with the `RunInput.respond()` function, or to access the unique key for an input. Note the use of `name_input.value` in the example above.
When Prefect generates a `RunInput` for you from a built-in type, the `RunInput` class has a single field, `value`, that uses a type annotation matching the type you specified. So if you call `receive_input` like this: `receive_input(str, with_metadata=True)`, it's equivalent to manually creating the following `RunInput` class and `receive_input` call: ```python theme={null} from prefect import flow from prefect.input.run_input import RunInput class GreeterInput(RunInput): value: str @flow async def greeter_flow(): async for name_input in receive_input(GreeterInput, timeout=None): print(f"Hello, {name_input.value}!") ``` **The type used in `receive_input` and `send_input` must match** For a flow to receive input, the sender must use the same type that the receiver is receiving. This means that if the receiver is receiving `GreeterInput`, the sender must send `GreeterInput`. If the receiver is receiving `GreeterInput` and the sender sends the `str` input that Prefect automatically upgrades to a `RunInput` class, the types won't match, which means the receiving flow run won't receive the input. However, the input will wait in case the flow ever calls `receive_input(str)`. ### Keep track of inputs you've already seen By default, each time you call `receive_input`, you get an iterator that iterates over all known inputs to a specific flow run, starting with the first received. The iterator keeps track of your current position as you iterate over it, or you can call `next()` to explicitly get the next input.
If you're using the iterator in a loop, you should assign it to a variable: ```python theme={null} from prefect import flow, get_client from prefect.deployments import run_deployment from prefect.input.run_input import receive_input, send_input EXIT_SIGNAL = "__EXIT__" @flow async def sender(): greeter_flow_run = await run_deployment( "greeter/send-receive", timeout=0, as_subflow=False ) client = get_client() # Assigning the `receive_input` iterator to a variable # outside of the `while True` loop allows us to continue # iterating over inputs in subsequent passes through the # while loop without losing our position. receiver = receive_input( str, with_metadata=True, timeout=None, poll_interval=0.1 ) while True: name = input("What is your name? ") if not name: continue if name == "q" or name == "quit": await send_input( EXIT_SIGNAL, flow_run_id=greeter_flow_run.id ) print("Goodbye!") break await send_input(name, flow_run_id=greeter_flow_run.id) # Saving the iterator outside of the while loop and # calling next() on each iteration of the loop ensures # that we're always getting the newest greeting. If we # had instead called `receive_input` here, we would # always get the _first_ greeting this flow received, # print it, and then ask for a new name. greeting = await receiver.next() print(greeting) ``` An iterator helps keep track of the inputs your flow has already received. If you want your flow to suspend and then resume later, save the keys of the inputs you've seen so the flow can read them back out when it resumes. Consider using a [Variable](/v3/concepts/variables/).
The following flow receives input for 30 seconds then suspends itself, which exits the flow and tears down infrastructure: ```python theme={null} from prefect import flow from prefect.logging import get_run_logger from prefect.flow_runs import suspend_flow_run from prefect.variables import Variable from prefect.context import get_run_context from prefect.input.run_input import receive_input EXIT_SIGNAL = "__EXIT__" @flow async def greeter(): logger = get_run_logger() run_context = get_run_context() assert run_context.flow_run, "Could not see my flow run ID" variable_name = f"{run_context.flow_run.id}-seen-ids" try: seen_keys = await Variable.get(variable_name) except (ValueError, TypeError): seen_keys = [] try: async for name_input in receive_input( str, with_metadata=True, poll_interval=0.1, timeout=30, exclude_keys=seen_keys ): if name_input.value == EXIT_SIGNAL: print("Goodbye!") return await name_input.respond(f"Hello, {name_input.value}!") seen_keys.append(name_input.metadata.key) await Variable.set( variable_name, seen_keys, overwrite=True ) except TimeoutError: logger.info("Suspending greeter after 30 seconds of idle time") await suspend_flow_run(timeout=10000) ``` As this flow processes name input, it adds the *key* of the flow run input to the list of seen keys. When the flow later suspends and then resumes, it reads the keys it has already seen from the variable and passes them as the `exclude_keys` parameter to `receive_input`. ### Respond to the input's sender When your flow receives input from another flow, Prefect knows the sending flow run ID, so the receiving flow can respond by calling the `respond` method on the `RunInput` instance the flow received. There are a couple of requirements: * Pass in a `BaseModel` or `RunInput`, or use `with_metadata=True`. * The flow you are responding to must receive the same type of input you send to see it. The `respond` method is equivalent to calling `send_input(..., flow_run_id=sending_flow_run.id)`.
But with `respond`, your flow doesn't need to know the sending flow run's ID. Next, make the `greeter_flow` respond to name inputs instead of printing them: ```python theme={null} from prefect import flow from prefect.input.run_input import receive_input @flow async def greeter(): async for name_input in receive_input( str, with_metadata=True, timeout=None ): await name_input.respond(f"Hello, {name_input.value}!") ``` However, this flow runs forever unless there's a signal that it should exit. Here's how to make it look for a special string: ```python theme={null} from prefect import flow from prefect.input.run_input import receive_input EXIT_SIGNAL = "__EXIT__" @flow async def greeter(): async for name_input in receive_input( str, with_metadata=True, poll_interval=0.1, timeout=None ): if name_input.value == EXIT_SIGNAL: print("Goodbye!") return await name_input.respond(f"Hello, {name_input.value}!") ``` With a `greeter` flow in place, create the flow that sends `greeter` names. ### Send input Send input to a flow with the `send_input` function. This works similarly to `receive_input` and, like that function, accepts the same `run_input` argument. This can be a built-in type such as `str`, or else a `BaseModel` or `RunInput` subclass. **When to send input to a flow run** Send input to a flow run as soon as you have the flow run's ID. The flow does not have to be receiving input for you to send input. If you send a flow input before it is receiving, it will see your input when it calls `receive_input` (as long as the types in the `send_input` and `receive_input` calls match).
Next, create a `sender` flow that starts a `greeter` flow run and then enters a loop—continuously getting input from the terminal and sending it to the greeter flow: ```python theme={null} from prefect import flow, get_client from prefect.deployments import run_deployment from prefect.input.run_input import receive_input, send_input EXIT_SIGNAL = "__EXIT__" @flow async def sender(): greeter_flow_run = await run_deployment( "greeter/send-receive", timeout=0, as_subflow=False ) receiver = receive_input(str, timeout=None, poll_interval=0.1) client = get_client() while True: flow_run = await client.read_flow_run(greeter_flow_run.id) if not flow_run.state or not flow_run.state.is_running(): continue name = input("What is your name? ") if not name: continue if name == "q" or name == "quit": await send_input( EXIT_SIGNAL, flow_run_id=greeter_flow_run.id ) print("Goodbye!") break await send_input(name, flow_run_id=greeter_flow_run.id) greeting = await receiver.next() print(greeting) ``` First, `run_deployment` starts a `greeter` flow run. This requires a deployed flow running in a process. That process begins running `greeter` while `sender` continues to execute. Calling `run_deployment(..., timeout=0)` ensures that `sender` won't wait for the `greeter` flow run to complete, because it's running a loop and only exits when sending `EXIT_SIGNAL`. Next, the flow captures the iterator returned by `receive_input` as `receiver`. This flow works by entering a loop. On each iteration of the loop, the flow asks for terminal input, sends that to the `greeter` flow, and then runs `receiver.next()` to wait until it receives the response from `greeter`. Next, the terminal user who ran this flow is allowed to exit by entering the string `q` or `quit`. When that happens, the `greeter` flow is sent an exit signal to shut down, too. Finally, the new name is sent to `greeter`. `greeter` sends back a greeting as a string. When you receive the greeting, print it and continue the loop that gets terminal input.
### A complete example For a complete example of using `send_input` and `receive_input`, here is what the `greeter` and `sender` flows look like together: ```python theme={null} import asyncio import sys from prefect import flow, get_client from prefect.variables import Variable from prefect.context import get_run_context from prefect.deployments import run_deployment from prefect.input.run_input import receive_input, send_input EXIT_SIGNAL = "__EXIT__" @flow async def greeter(): run_context = get_run_context() assert run_context.flow_run, "Could not see my flow run ID" variable_name = f"{run_context.flow_run.id}-seen-ids" try: seen_keys = await Variable.get(variable_name) except (ValueError, TypeError): seen_keys = [] async for name_input in receive_input( str, with_metadata=True, poll_interval=0.1, timeout=None ): if name_input.value == EXIT_SIGNAL: print("Goodbye!") return await name_input.respond(f"Hello, {name_input.value}!") seen_keys.append(name_input.metadata.key) await Variable.set( variable_name, seen_keys, overwrite=True ) @flow async def sender(): greeter_flow_run = await run_deployment( "greeter/send-receive", timeout=0, as_subflow=False ) receiver = receive_input(str, timeout=None, poll_interval=0.1) client = get_client() while True: flow_run = await client.read_flow_run(greeter_flow_run.id) if not flow_run.state or not flow_run.state.is_running(): continue name = input("What is your name? ") if not name: continue if name == "q" or name == "quit": await send_input( EXIT_SIGNAL, flow_run_id=greeter_flow_run.id ) print("Goodbye!") break await send_input(name, flow_run_id=greeter_flow_run.id) greeting = await receiver.next() print(greeting) if __name__ == "__main__": if sys.argv[1] == "greeter": asyncio.run(greeter.serve(name="send-receive")) elif sys.argv[1] == "sender": asyncio.run(sender()) ``` To run the example, you need a Python environment with Prefect installed, pointed at either a Prefect Cloud account or a self-hosted Prefect server instance. 
With your environment set up, start a flow runner in one terminal with the following command: ```bash theme={null} python my_file_name greeter ``` For example, with Prefect Cloud, you should see output like this: ```bash theme={null} ______________________________________________________________________ | Your flow 'greeter' is being served and polling for scheduled runs | | | | To trigger a run for this flow, use the following command: | | | | $ prefect deployment run 'greeter/send-receive' | | | | You can also run your flow via the Prefect UI: | | https://app.prefect.cloud/account/...(a URL for your account) | | | ______________________________________________________________________ ``` Then start the sender process in another terminal: ```bash theme={null} python my_file_name sender ``` You should see output like this: ```bash theme={null} 11:38:41.800 | INFO | prefect.engine - Created flow run 'gregarious-owl' for flow 'sender' 11:38:41.802 | INFO | Flow run 'gregarious-owl' - View at https://app.prefect.cloud/account/... What is your name? ``` Type a name and press the enter key to see sending and receiving in action: ```bash theme={null} What is your name? andrew Hello, andrew! ``` # How to customize Prefect's logging configuration Source: https://docs.prefect.io/v3/advanced/logging-customization Prefect relies on [the standard Python implementation of logging configuration](https://docs.python.org/3/library/logging.config.html). The full specification of the default logging configuration for any version of Prefect can always be inspected [here](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/logging/logging.yml). The default logging level is `INFO`. ### Customize logging configuration Prefect provides several settings to configure the logging level and individual loggers.
Any value in [Prefect's logging configuration file](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/logging/logging.yml) can be overridden through a Prefect setting of the form `PREFECT_LOGGING_[PATH]_[TO]_[KEY]=value` corresponding to the nested address of the field you are configuring. For example, to change the default logging level for flow runs but not task runs, update your profile with: ```bash theme={null} prefect config set PREFECT_LOGGING_LOGGERS_PREFECT_FLOW_RUNS_LEVEL="ERROR" ``` or set the corresponding environment variable: ```bash theme={null} export PREFECT_LOGGING_LOGGERS_PREFECT_FLOW_RUNS_LEVEL="ERROR" ``` You can also configure the "root" Python logger. The root logger receives logs from all loggers unless they explicitly opt out by disabling propagation. By default, the root logger is configured to output `WARNING` level logs to the console. As with other logging settings, you can override this from the environment or in the logging configuration file. For example, you can change the level with the `PREFECT_LOGGING_ROOT_LEVEL` environment variable. In some situations you may want to completely overhaul the Prefect logging configuration by providing your own `logging.yml` file. You can create your own version of `logging.yml` in one of two ways: 1. Create a `logging.yml` file in your `PREFECT_HOME` directory (default is `~/.prefect`). 2. Specify a custom path to your `logging.yml` file using the `PREFECT_LOGGING_SETTINGS_PATH` setting. If Prefect cannot find the `logging.yml` file at the specified location, it will fall back to using the default logging configuration. See the Python [Logging configuration](https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig) documentation for more information about the configuration options and syntax used by `logging.yml`. As with all Prefect settings, logging settings are loaded at runtime. 
This means that customizing Prefect logging in a remote environment requires setting the appropriate environment variables and/or profile in that environment. ### Formatters Prefect log formatters specify the format of log messages. The default formatting for task and flow run records is `"%(asctime)s.%(msecs)03d | %(levelname)-7s | Task run %(task_run_name)r - %(message)s"` for tasks and similarly `"%(asctime)s.%(msecs)03d | %(levelname)-7s | Flow run %(flow_run_name)r - %(message)s"` for flows. The variables available to interpolate in log messages vary by logger. In addition to the run context, message string, and any keyword arguments, flow and task run loggers have access to additional variables. The flow run logger has the following variables available for formatting: * `flow_run_name` * `flow_run_id` * `flow_name` The task run logger has the following variables available for formatting: * `task_run_id` * `flow_run_id` * `task_run_name` * `task_name` * `flow_run_name` * `flow_name` You can specify custom formatting by setting the relevant environment variable or by modifying the formatter in a custom `logging.yml` file as described earlier. For example, the following changes the formatting for the flow runs formatter: ```bash theme={null} PREFECT_LOGGING_FORMATTERS_STANDARD_FLOW_RUN_FMT="%(asctime)s.%(msecs)03d | %(levelname)-7s | %(flow_run_id)s - %(message)s" ``` The resulting messages, using the flow run ID instead of name, look like this: ```bash theme={null} 10:40:01.211 | INFO | e43a5a80-417a-41c4-a39e-2ef7421ee1fc - Created task run 'othertask-1c085beb-3' for task 'othertask' ``` ### Styles By default, Prefect highlights specific keywords in the console logs with a variety of colors.
You can toggle highlighting on/off with the `PREFECT_LOGGING_COLORS` setting: ```bash theme={null} PREFECT_LOGGING_COLORS=False ``` You can also change what gets highlighted and even adjust the colors by updating the styles - see the `styles` section of [the Prefect logging configuration file](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/logging/logging.yml) for available keys. Note that these style settings only impact the display within a terminal, not the Prefect UI. You can even build your own handler with a [custom highlighter](https://rich.readthedocs.io/en/stable/highlighting.html#custom-highlighters). For example, to additionally highlight emails: 1. Copy and paste the following code into `my_package_or_module.py` (rename as needed) in the same directory as the flow run script; or ideally as part of a Python package so it's available in `site-packages` and accessible anywhere within your environment. ```python theme={null} import logging from typing import Dict, Optional, Union from rich.highlighter import Highlighter from prefect.logging.handlers import PrefectConsoleHandler from prefect.logging.highlighters import PrefectConsoleHighlighter class CustomConsoleHighlighter(PrefectConsoleHighlighter): base_style = "log." highlights = PrefectConsoleHighlighter.highlights + [ # ?P<email> names this expression `email` r"(?P<email>[\w-]+@([\w-]+\.)+[\w-]+)", ] class CustomConsoleHandler(PrefectConsoleHandler): def __init__( self, highlighter: Highlighter = CustomConsoleHighlighter, styles: Optional[Dict[str, str]] = None, level: Union[int, str] = logging.NOTSET, ): super().__init__(highlighter=highlighter, styles=styles, level=level) ``` 2. Update `~/.prefect/logging.yml` to use `my_package_or_module.CustomConsoleHandler` and additionally reference the base\_style and named expression: `log.email`.
```yaml theme={null} console_flow_runs: level: 0 class: my_package_or_module.CustomConsoleHandler formatter: flow_runs styles: log.email: magenta # other styles can be appended here, e.g. # log.completed_state: green ``` 3. On your next flow run, text that looks like an email is highlighted. For example, `my@email.com` is colored in magenta below: ```python theme={null} from prefect import flow from prefect.logging import get_run_logger @flow def log_email_flow(): logger = get_run_logger() logger.info("my@email.com") log_email_flow() ``` ### Apply markup in logs To use [Rich's markup](https://rich.readthedocs.io/en/stable/markup.html#console-markup) in Prefect logs, first configure `PREFECT_LOGGING_MARKUP`: ```bash theme={null} PREFECT_LOGGING_MARKUP=True ``` The following will highlight "fancy" in red: ```python theme={null} from prefect import flow from prefect.logging import get_run_logger @flow def my_flow(): logger = get_run_logger() logger.info("This is [bold red]fancy[/]") my_flow() ``` **Inaccurate logs could result** If enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output. For example, `DROP TABLE [dbo].[SomeTable];` outputs `DROP TABLE .[SomeTable];`. ## Include logs from other libraries By default, Prefect won't capture log statements from libraries that your flows and tasks use. You can tell Prefect to include logs from these libraries with the `PREFECT_LOGGING_EXTRA_LOGGERS` setting. To use this setting, specify one or more Python library names to include, separated by commas. For example, if you want Prefect to capture Dask and SciPy logging statements with your flow and task run logs, use: `PREFECT_LOGGING_EXTRA_LOGGERS=dask,scipy` Configure this setting as an environment variable or in a profile. See [Settings](/v3/develop/settings-and-profiles/) for more details about how to use settings.
# How to persist and retrieve workflow results Source: https://docs.prefect.io/v3/advanced/results Results represent the data returned by a flow or a task and enable features such as caching. Results are the bedrock of many Prefect features - most notably [transactions](/v3/develop/transactions) and [caching](/v3/concepts/caching) - and are foundational to the resilient execution paradigm that Prefect enables. Any return value from a task or a flow is a result. By default these results are not persisted and no reference to them is maintained in the API. Enabling result persistence allows you to fully benefit from Prefect's orchestration features. **Turn on persistence globally by default** The simplest way to turn on result persistence globally is through the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting: ```bash theme={null} prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true ``` See [settings](/v3/develop/settings-and-profiles) for more information on how settings are managed. ## Configuring result persistence There are four categories of configuration for result persistence: * [whether to persist results at all](#enabling-result-persistence): this is configured through various keyword arguments, the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting, and the `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` setting for tasks specifically. * [what filesystem to persist results to](#result-storage): this is configured through the `result_storage` keyword and the `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK` setting. * [how to serialize and deserialize results](#result-serialization): this is configured through the `result_serializer` keyword and the `PREFECT_RESULTS_DEFAULT_SERIALIZER` setting. * [what filename to use](#result-filenames): this is configured through one of `result_storage_key`, `cache_policy`, or `cache_key_fn`. 
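As a sketch of how these knobs combine in practice, the hypothetical task below opts into persistence, picks a serializer, and names its result file. The function names, the `"json"` serializer string, and the storage key are illustrative placeholders, not requirements:

```python
from prefect import flow, task


# Illustrative configuration sketch: "json" selects the built-in JSON
# result serializer, and result_storage_key names the file written
# inside the configured result storage (local by default).
@task(
    persist_result=True,
    result_serializer="json",
    result_storage_key="my-task-output.json",
)
def compute() -> dict:
    return {"answer": 42}


@flow(persist_result=True)
def pipeline() -> dict:
    return compute()
```

Running `pipeline()` against a Prefect API would persist the task's return value under the given storage key in the configured result storage.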
### Default persistence configuration

Once result persistence is enabled - whether through the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting or through any of the mechanisms [described below](#enabling-result-persistence) - Prefect's default result storage configuration is activated.

If you enable result persistence and don't specify a filesystem block, your results will be stored locally. By default, results are persisted to `~/.prefect/storage/`. You can configure the location of these results through the `PREFECT_LOCAL_STORAGE_PATH` setting.

```bash theme={null}
prefect config set PREFECT_LOCAL_STORAGE_PATH='~/.my-results/'
```

With ephemeral infrastructure such as Kubernetes or Docker, the default local storage location works within a single flow run but does not persist results across runs. When a flow run is retried through the UI, a new pod or container is created and cannot access results saved to the local filesystem of the original container. To persist results across runs on ephemeral infrastructure, configure a remote storage block (such as S3, GCS, or Azure Blob Storage) as your `result_storage`, or use a shared volume such as a Kubernetes `PersistentVolumeClaim`. See [Result storage](#result-storage) for configuration details.

### Enabling result persistence

In addition to the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` and `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` settings, result persistence can also be enabled or disabled on both individual flows and individual tasks. Specifying a non-null value for any of the following keywords on the task decorator will enable result persistence for that task:

* `persist_result`: a boolean that allows you to explicitly enable or disable result persistence.
* `result_storage`: accepts either a string reference to a storage block or a storage block class that specifies where results should be stored.
* `result_storage_key`: a string that specifies the filename of the result within the task's result storage.
* `result_serializer`: a string or serializer that configures how the data should be serialized and deserialized.
* `cache_policy`: a [cache policy](/v3/concepts/caching#cache-policies) specifying the behavior of the task's cache.
* `cache_key_fn`: [a function](/v3/concepts/caching#cache-key-functions) that configures a custom cache policy.

Similarly, setting `persist_result=True`, `result_storage`, or `result_serializer` on a flow will enable persistence for that flow.

**Enabling persistence on a flow enables persistence by default for its tasks**

Enabling result persistence on a flow through any of the above keywords will also enable it for all tasks called within that flow by default. Any settings *explicitly* set on a task take precedence over the flow settings.

Additionally, the `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` environment variable can be used to globally control the default persistence behavior for tasks, overriding the default behavior set by a parent flow or task.

### Result storage

You can configure the system of record for your results through the `result_storage` keyword argument. This keyword accepts an instantiated [filesystem block](/v3/develop/blocks/), or a block slug. Find your blocks' slugs with `prefect block ls`.

Note that if you want your tasks to share a common cache, your result storage should be accessible by the infrastructure in which those tasks run. [Integrations](/integrations/integrations) have cloud-specific storage blocks. For example, a common distributed filesystem for result storage is AWS S3.

Additionally, you can control the default persistence behavior for task results using the `default_persist_result` setting. This setting allows you to specify whether results should be persisted by default for all tasks. You can set this to `True` to enable persistence by default, or `False` to disable it. This setting can be overridden at the individual task or flow level.
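The precedence described above - an explicit task setting beats the enclosing flow's setting, which beats the global default - can be sketched as a small resolver. This is an illustration of the rules only, not Prefect's actual internals:

```python theme={null}
# Illustrative sketch of persistence-setting precedence; not Prefect internals.
def resolve_persist_result(task_setting, flow_setting, global_default=False):
    """Return whether a task should persist its result."""
    if task_setting is not None:   # an explicit task setting wins
        return task_setting
    if flow_setting is not None:   # then the enclosing flow's setting
        return flow_setting
    return global_default          # finally the global default

# A task with persist_result=False inside a persisting flow stays unpersisted:
print(resolve_persist_result(task_setting=False, flow_setting=True))  # False

# An unconfigured task inherits the flow's setting:
print(resolve_persist_result(task_setting=None, flow_setting=True))   # True
```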
```python theme={null}
from prefect import flow, task
from prefect_aws.s3 import S3Bucket

test_block = S3Bucket(bucket_name='test-bucket')
test_block.save('test-block', overwrite=True)

# define three tasks
# with different result persistence configuration
@task
def my_task():
    return 42

unpersisted_task = my_task.with_options(persist_result=False)
other_storage_task = my_task.with_options(result_storage=test_block)

@flow(result_storage='s3-bucket/my-dev-block')
def my_flow():
    # this task will use the flow's result storage
    my_task()

    # this task will not persist results at all
    unpersisted_task()

    # this task will persist results to its own bucket using a different S3 block
    other_storage_task()
```

**Using result storage with decorators**

When specifying `result_storage` in `@flow` or `@task` decorators, you have two options:

* **Block instances**: the block instance must be saved server-side or loaded from a saved block instance before it is provided to the `@task` or `@flow` decorator.
* **String references**: use the format `"block-type-slug/block-name"` for deferred resolution at runtime.

For testing scenarios, string references are recommended since they don't require server connectivity at import time.

```python theme={null}
from prefect import flow
from prefect.filesystems import LocalFileSystem

# Option 1: Save block first (requires server connection at import time)
storage = LocalFileSystem(basepath="/tmp/results")
storage.save("my-storage", overwrite=True)

@flow(result_storage=storage)  # Works because block is saved
def my_flow():
    return "result"

# Option 2: Use string reference (recommended for testing)
@flow(result_storage="local-file-system/my-storage")  # Resolved at runtime
def my_other_flow():
    return "result"
```

#### Specifying a default filesystem

Alternatively, you can specify a different filesystem through the `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK` setting.
Specifying a block document slug here will enable result persistence using that filesystem as the default. For example:

```bash theme={null}
prefect config set PREFECT_DEFAULT_RESULT_STORAGE_BLOCK='s3-bucket/my-prod-block'
```

Note that any explicit configuration of `result_storage` on either a flow or task will override this default.

#### Result filenames

By default, the filename of a task's result is computed based on the task's cache policy, which is typically a hash of various pieces of data and metadata. For flows, the filename is a random UUID.

You can configure the filename of the result file within result storage using either:

* `result_storage_key`: a templated string that can use any of the fields within `prefect.runtime` and the task's individual parameter values. These templated values will be populated at runtime.
* `cache_key_fn`: a function that accepts the task run context and its runtime parameters and returns a string. See [task caching documentation](/v3/concepts/caching#cache-key-functions) for more information.

If both `result_storage_key` and `cache_key_fn` are provided, only the `result_storage_key` will be used.

The following example writes three different result files based on the `name` parameter passed to the task:

```python theme={null}
from prefect import flow, task

@task(result_storage_key="hello-{parameters[name]}.pickle")
def hello_world(name: str = "world"):
    return f"hello {name}"

@flow
def my_flow():
    hello_world()
    hello_world(name="foo")
    hello_world(name="bar")
```

If a result exists at a given storage key in the storage location, the task will load it without running. To learn more about caching mechanics in Prefect, see the [caching documentation](/v3/concepts/caching).

### Result serialization

You can configure how results are serialized to storage using result serializers. These can be set using the `result_serializer` keyword on both tasks and flows.
A default value can be set using the `PREFECT_RESULTS_DEFAULT_SERIALIZER` setting, which defaults to `pickle`. Current built-in options include `"pickle"`, `"json"`, `"compressed/pickle"` and `"compressed/json"`.

The `result_serializer` accepts either a string identifier or an instance of a `ResultSerializer` class, allowing you to customize serialization behavior.

## Caching results in memory

When running workflows, Prefect keeps the results of all tasks and flows in memory so they can be passed downstream. In some cases, it is desirable to override this behavior. For example, if you are returning a large amount of data from a task, it can be costly to keep it in memory for the entire duration of the flow run.

Flows and tasks both include an option to drop the result from memory once the result has been committed with `cache_result_in_memory`:

```python theme={null}
from prefect import flow, task

@flow(cache_result_in_memory=False)
def foo():
    return "pretend this is large data"

@task(cache_result_in_memory=False)
def bar():
    return "pretend this is biiiig data"
```

## Reading persisted results

After a flow or task run completes, you can read the persisted result back using `ResultStore`. This is useful when you need to access a task's return value outside of the flow that produced it, for example in a separate script, a notebook, or a downstream pipeline.

### Read a result with `ResultStore`

Use `ResultStore` to read a result record by its storage key. The storage key is the filename of the result in your result storage location. When you use `result_storage_key` on a task, the key is the formatted string you provided. Otherwise, it is a hash derived from the task's cache policy.
```python theme={null}
from prefect.results import ResultStore
from prefect.filesystems import LocalFileSystem

storage = LocalFileSystem(basepath="~/.prefect/storage")
store = ResultStore(result_storage=storage)
record = store.read(key="hello-world.pickle")
print(record.result)
```

The `read` method returns a `ResultRecord` object. Access the deserialized return value through the `.result` attribute.

Always pass an explicit `result_storage` when constructing `ResultStore`. If you omit it, `ResultStore` attempts to resolve the default storage from Prefect settings, which requires a running Prefect server or Prefect Cloud connection.

### End-to-end example: persist and then read a result

The following example persists a task result with a known storage key and then reads it back in a separate step:

```python theme={null}
from prefect import flow, task

@task(persist_result=True, result_storage_key="my-result")
def compute_value():
    return {"answer": 42}

@flow
def my_flow():
    compute_value()

# Run the flow to persist the result
my_flow()
```

After the flow completes, read the result:

```python theme={null}
from prefect.results import ResultStore
from prefect.filesystems import LocalFileSystem

storage = LocalFileSystem(basepath="~/.prefect/storage")
store = ResultStore(result_storage=storage)
record = store.read(key="my-result")
print(record.result)  # {"answer": 42}
```

### Read a result from a file directly

If you need to read a result file without using Prefect's `ResultStore` - for example, in an environment where Prefect is not installed - you can deserialize the file manually. Result files are JSON documents that contain a `result` field with the serialized data, and a `metadata` field that describes the serializer used.

The encoding of the `result` field depends on the serializer:

* **pickle** (default): the value is base64-encoded pickled bytes.
* **json**: the value is a raw JSON string (not base64-encoded).
The following example handles both cases:

```python theme={null}
import json
import base64

import cloudpickle

with open("/path/to/.prefect/storage/my-result", "r") as f:
    result_data = json.load(f)

serializer_type = result_data["metadata"]["serializer"]["type"]
raw_result = result_data["result"]

if serializer_type == "pickle":
    value = cloudpickle.loads(base64.b64decode(raw_result))
elif serializer_type == "json":
    value = json.loads(raw_result)
else:
    raise ValueError(f"Unsupported serializer: {serializer_type}")

print(value)
```

Manually deserializing results bypasses Prefect's built-in expiration checks and lock management. Use `ResultStore` whenever Prefect is available in your environment.

### Inspect result metadata

Each `ResultRecord` includes a `metadata` attribute with information about the serializer, the storage key, and an optional expiration timestamp:

```python theme={null}
from prefect.results import ResultStore
from prefect.filesystems import LocalFileSystem

storage = LocalFileSystem(basepath="~/.prefect/storage")
store = ResultStore(result_storage=storage)
record = store.read(key="my-result")

print(record.metadata.serializer)       # PickleSerializer(type='pickle', ...)
print(record.metadata.storage_key)      # "my-result"
print(record.metadata.expiration)       # None or a datetime
print(record.metadata.prefect_version)  # e.g. "3.4.0"
```

**Related pages**

* [Caching](/v3/concepts/caching): configure when tasks reuse persisted results
* [Transactions](/v3/advanced/transactions): group multiple task results into atomic units

# How to secure a self-hosted Prefect server

Source: https://docs.prefect.io/v3/advanced/security-settings

Learn about the Prefect settings that add security to your self-hosted server.

Prefect provides a number of [settings](/v3/concepts/settings-and-profiles) that help secure a self-hosted Prefect server.
## Basic Authentication

Self-hosted Prefect servers can be equipped with Basic Authentication through two settings:

* **`server.api.auth_string="admin:pass"`**: set this to a username and password combination, separated by a colon, on any process that hosts the Prefect webserver (for example `prefect server start`).
* **`api.auth_string="admin:pass"`**: set this to the same username and password combination as the server on any client process that needs to communicate with the Prefect API (for example, any process that runs a workflow).

With these settings, the UI will prompt for the full authentication string `admin:pass` (no quotes) upon first load. It is recommended to store this information in a secure way, such as a Kubernetes Secret or in a private `.env` file.

**Note on API keys**

API keys are only used for [authenticating with Prefect Cloud](/v3/how-to-guides/cloud/manage-users/api-keys). If both `PREFECT_API_KEY` and `PREFECT_API_AUTH_STRING` are set on the client, `PREFECT_API_KEY` will take precedence. If you plan to use a self-hosted Prefect server, make sure `PREFECT_API_KEY` is not set in your active profile or as an environment variable, otherwise authentication will fail (`HTTP 401 Unauthorized`).

Example `.env` file:

```bash .env theme={null}
PREFECT_SERVER_API_AUTH_STRING="admin:pass"
PREFECT_API_AUTH_STRING="admin:pass"
```

## Host the UI behind a reverse proxy

When using a reverse proxy (such as [Nginx](https://nginx.org) or [Traefik](https://traefik.io)) to proxy traffic to a hosted Prefect UI instance, you must also configure the self-hosted Prefect server instance to connect to the API. The [`ui.api_url`](/v3/develop/settings-ref/#api_url) setting should be set to the external proxy URL.
For example, if your external URL is `https://prefect-server.example.com` then you can configure a `prefect.toml` file for your server like this:

```toml prefect.toml theme={null}
[ui]
api_url = "https://prefect-server.example.com/api"
```

If you do not set `ui.api_url`, then `api.url` will be used as a fallback.

## CSRF protection settings

If using a self-hosted Prefect server, you can configure CSRF protection settings.

* [`server.api.csrf_protection_enabled`](/v3/develop/settings-ref/#csrf-protection-enabled): activates CSRF protection on the server, requiring valid CSRF tokens for applicable requests. Recommended for production to prevent CSRF attacks. Defaults to `False`.
* [`server.api.csrf_token_expiration`](/v3/develop/settings-ref/#csrf-token-expiration): sets the expiration duration for server-issued CSRF tokens, influencing how often tokens need to be refreshed. The default is 1 hour.
* [`client.csrf_support_enabled`](/v3/develop/settings-ref/#csrf-support-enabled): enables or disables CSRF token handling in the Prefect client. When enabled, the client manages CSRF tokens for state-changing API requests. Defaults to `True`.

By default, clients expect that CSRF protection is enabled on the server. If you are running a server without CSRF protection, you can disable CSRF support in the client.

## CORS settings

If using a self-hosted Prefect server, you can configure CORS settings to control which origins are allowed to make cross-origin requests to your server.

* [`server.api.cors_allowed_origins`](/v3/develop/settings-ref/#cors-allowed-origins): a list of origins that are allowed to make cross-origin requests.
* [`server.api.cors_allowed_methods`](/v3/develop/settings-ref/#cors-allowed-methods): a list of HTTP methods that are allowed to be used during cross-origin requests.
* [`server.api.cors_allowed_headers`](/v3/develop/settings-ref/#cors-allowed-headers): a list of headers that are allowed to be used during cross-origin requests.
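Assuming the standard mapping from setting paths to `PREFECT_`-prefixed environment variable names (an assumption to verify against the settings reference), enabling server-side CSRF protection and restricting CORS origins might look like this, with illustrative values:

```bash theme={null}
# Enable CSRF protection on the server (recommended for production)
prefect config set PREFECT_SERVER_API_CSRF_PROTECTION_ENABLED=true

# Restrict cross-origin requests to your UI's origin (illustrative value)
prefect config set PREFECT_SERVER_API_CORS_ALLOWED_ORIGINS='https://prefect.example.com'
```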
## Custom client headers

The [`client.custom_headers`](/v3/develop/settings-ref/#custom-headers) setting allows you to configure custom HTTP headers that are included with every API request. This is particularly useful for authentication with proxies, CDNs, or security services that protect your Prefect server.

```bash Environment variable theme={null}
export PREFECT_CLIENT_CUSTOM_HEADERS='{
  "Proxy-Authorization": "Bearer your-proxy-token",
  "X-Corporate-ID": "your-corp-identifier"
}'
```

```bash CLI theme={null}
prefect config set PREFECT_CLIENT_CUSTOM_HEADERS='{"Proxy-Authorization": "Bearer your-proxy-token", "X-Corporate-ID": "your-corp-ID"}'
```

```toml prefect.toml theme={null}
[client]
custom_headers = '''{
  "Proxy-Authorization": "Bearer your-proxy-token",
  "X-Corporate-ID": "your-corp-identifier"
}'''
```

Certain headers are protected and cannot be overridden for security reasons:

* **`User-Agent`**: managed by Prefect to identify client version and capabilities
* **`Prefect-Csrf-Token`**: used for CSRF protection when enabled
* **`Prefect-Csrf-Client`**: used for CSRF client identification

If you attempt to override these protected headers, Prefect will log a warning and ignore the custom value to maintain security.

**Store credentials securely**

When using custom headers for authentication, ensure that sensitive values like API keys and tokens are stored securely using environment variables, secrets management systems, or encrypted configuration files. Avoid hardcoding credentials in your source code.

# How to scale self-hosted Prefect

Source: https://docs.prefect.io/v3/advanced/self-hosted

Learn how to run multiple Prefect server instances for high availability and load distribution.

Running multiple Prefect server instances enables high availability and distributes load across your infrastructure. This guide covers configuration and deployment patterns for scaling self-hosted Prefect.
## Requirements

Multi-server deployments require:

* PostgreSQL database version 14.9 or higher (SQLite does not support multi-server synchronization)
* Redis for event messaging
* Load balancer for API traffic distribution

## Architecture

A scaled Prefect deployment typically includes:

* **Multiple API server instances** - Handle UI and API requests
* **Background services** - Run the scheduler, automation triggers, and other loop services
* **[PostgreSQL](https://www.postgresql.org/) database** - Stores all persistent data and synchronizes state across servers
* **[Redis](https://redis.io/)** - Distributes events between services
* **Load balancer** - Routes traffic to healthy API instances (e.g. [NGINX](https://www.f5.com/go/product/welcome-to-nginx) or [Traefik](https://doc.traefik.io/traefik/))

```mermaid theme={null}
%%{ init: { 'theme': 'neutral', 'flowchart': { 'curve' : 'linear', 'rankSpacing': 120, 'nodeSpacing': 80 } } }%%
flowchart TB
    %% Style definitions
    classDef userClass fill:#ede7f6db,stroke:#4527a0db,stroke-width:2px
    classDef lbClass fill:#e3f2fddb,stroke:#1565c0db,stroke-width:2px
    classDef apiClass fill:#1860f2db,stroke:#1860f2db,stroke-width:2px
    classDef bgClass fill:#7c3aeddb,stroke:#7c3aeddb,stroke-width:2px
    classDef dataClass fill:#16a34adb,stroke:#16a34adb,stroke-width:2px
    classDef workerClass fill:#f59e0bdb,stroke:#f59e0bdb,stroke-width:2px

    %% Nodes
    subgraph clients[Client Side]
        direction TB
        Users[Users / UI / API Clients]:::userClass
        Workers[Workers poll any available API server<br/>Process / K8s / Docker / Serverless]:::workerClass
    end

    LB[Load Balancer<br/>NGINX / HAProxy / ALB<br/>Port 4200]:::lbClass

    subgraph servers[Prefect Server Components]
        direction TB
        subgraph api[API Servers - Horizontal Scaling]
            direction LR
            API1[API Server 1<br/>--no-services]:::apiClass
            API2[API Server 2<br/>--no-services]:::apiClass
            API3[API Server N...<br/>--no-services]:::apiClass
        end
        BG[Background Services<br/>prefect server services start<br/><br/>• Event Processing<br/>• Automation Triggers<br/>• Schedule Management]:::bgClass
    end

    subgraph data[Data Layer]
        direction LR
        PG[(PostgreSQL<br/>• Flow/Task State<br/>• Configuration<br/>• History)]:::dataClass
        Redis[(Redis<br/>• Events<br/>• Automations<br/>• Real-time Updates)]:::dataClass
    end

    %% Connections
    Users --> |HTTPS| LB
    LB --> |Round Robin| api
    api --> |Read/Write| PG
    api --> |Publish| Redis
    BG --> |Read/Write| PG
    BG --> |Subscribe| Redis
    Workers -.-> |Poll Work| api
```

## Configuration

### Database setup

Configure PostgreSQL as your database backend:

```bash theme={null}
export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://user:password@host:5432/prefect"
```

PostgreSQL version 14.9 or higher is required for multi-server deployments. SQLite does not support the features needed for state synchronization across multiple servers.

### AWS RDS IAM Authentication

To use AWS IAM authentication for your PostgreSQL database (experimental):

1. **Install the AWS integration**:

```bash theme={null}
pip install prefect-aws
```

2. **Create an IAM policy** with `rds-db:connect` permission and attach it to your IAM user/role.

3. **Enable experimental plugins and IAM authentication**:

```bash theme={null}
export PREFECT_EXPERIMENTS_PLUGINS_ENABLED=true
export PREFECT_INTEGRATIONS_AWS_RDS_IAM_ENABLED=true
# Optional:
export PREFECT_INTEGRATIONS_AWS_RDS_IAM_REGION_NAME=us-east-1
```

4.
**Configure your connection URL**:

```bash theme={null}
export PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://iam_user@host:5432/prefect"
```

### Redis setup

Configure Redis as your server's message broker, cache, and lease storage:

```bash theme={null}
export PREFECT_MESSAGING_BROKER="prefect_redis.messaging"
export PREFECT_MESSAGING_CACHE="prefect_redis.messaging"
export PREFECT_SERVER_EVENTS_CAUSAL_ORDERING="prefect_redis.ordering"
export PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE="prefect_redis.lease_storage"
export PREFECT_REDIS_MESSAGING_HOST="redis-host"
export PREFECT_REDIS_MESSAGING_PORT="6379"
export PREFECT_REDIS_MESSAGING_DB="0"
```

If your Redis instance requires authentication, you may configure a username and password:

```bash theme={null}
export PREFECT_REDIS_MESSAGING_USERNAME="marvin"
export PREFECT_REDIS_MESSAGING_PASSWORD="dontpanic!"
```

For Redis instances that require an encrypted connection, you can enable SSL/TLS:

```bash theme={null}
export PREFECT_REDIS_MESSAGING_SSL="true"
```

Alternatively, configure the Redis connection with a single URL instead of individual fields. When `PREFECT_REDIS_MESSAGING_URL` is set, it takes precedence and the individual host, port, db, username, password, and SSL fields are ignored:

```bash theme={null}
export PREFECT_REDIS_MESSAGING_URL="redis://username:password@redis-host:6379/0"
```

Use `rediss://` for TLS connections:

```bash theme={null}
export PREFECT_REDIS_MESSAGING_URL="rediss://redis-host:6379/0"
```

#### Docket URL for background services

Prefect uses [Docket](https://github.com/chrisguidry/docket) to coordinate background services like the scheduler, late run detection, and automation triggers. By default, Docket uses in-memory storage (`memory://`), which only works for single-server deployments.
For high-availability deployments, configure Docket to use Redis:

```bash theme={null}
export PREFECT_SERVER_DOCKET_URL="redis://redis-host:6379/0"
```

If your Redis instance requires authentication:

```bash theme={null}
export PREFECT_SERVER_DOCKET_URL="redis://username:password@redis-host:6379/0"
```

For Redis instances that require SSL/TLS:

```bash theme={null}
export PREFECT_SERVER_DOCKET_URL="rediss://redis-host:6379/0"
```

The Docket URL can use the same Redis instance as the messaging configuration above, but you may use a different database number (e.g., `/1` instead of `/0`) to keep the data separate.

### Service separation

For optimal performance, run API servers and background services separately:

**API servers** (multiple instances):

```bash theme={null}
prefect server start --host 0.0.0.0 --port 4200 --no-services
```

**Background services**:

```bash theme={null}
prefect server services start
```

For high-volume deployments, consider reducing the event retention period from the default 7 days to prevent rapid database growth. See [database maintenance](/v3/advanced/database-maintenance#configure-event-retention) for configuration details.
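To try the split topology on a single machine before deploying, you can start one background-services process and several API-only processes; the extra port numbers below are illustrative, and in production each role would run on its own host or container behind the load balancer:

```bash theme={null}
# One background-services process (scheduler, triggers, and other loops)
prefect server services start &

# Several API-only processes for the load balancer to target
prefect server start --host 0.0.0.0 --port 4200 --no-services &
prefect server start --host 0.0.0.0 --port 4201 --no-services &
```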
### Database migrations

Disable automatic migrations in multi-server deployments:

```bash theme={null}
export PREFECT_API_DATABASE_MIGRATE_ON_START="false"
```

Run migrations separately before deployment:

```bash theme={null}
prefect server database upgrade -y
```

### Load balancer configuration

Configure health checks for your load balancer:

* **Health endpoint**: `/api/health`
* **Expected response**: HTTP 200 with JSON `{"status": "healthy"}`
* **Check interval**: 5-10 seconds

Example NGINX configuration:

```nginx theme={null}
upstream prefect_api {
    least_conn;
    server prefect-api-1:4200 max_fails=3 fail_timeout=30s;
    server prefect-api-2:4200 max_fails=3 fail_timeout=30s;
    server prefect-api-3:4200 max_fails=3 fail_timeout=30s;
}

server {
    listen 4200;

    location /api/health {
        proxy_pass http://prefect_api;
        proxy_connect_timeout 1s;
        proxy_read_timeout 1s;
    }

    location / {
        proxy_pass http://prefect_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

### Reverse proxy configuration

When hosting Prefect behind a reverse proxy, ensure proper header forwarding:

```nginx theme={null}
server {
    listen 80;
    server_name prefect.example.com;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name prefect.example.com;

    ssl_certificate /path/to/ssl/certificate.pem;
    ssl_certificate_key /path/to/ssl/certificate_key.pem;

    location /api {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Authentication headers
        proxy_set_header Authorization $http_authorization;
        proxy_pass_header Authorization;

        proxy_pass http://prefect_api;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://prefect_api;
    }
}
```

#### UI proxy settings

When self-hosting the UI behind a proxy:

* `PREFECT_UI_API_URL`: connection URL from UI to API
* `PREFECT_UI_SERVE_BASE`: base URL path to serve the UI
* `PREFECT_UI_URL`: URL for clients to access the UI

#### SSL certificates

For self-signed certificates:

1. Add the certificate to your system bundle and set:

```bash theme={null}
export SSL_CERT_FILE=/path/to/certificate.pem
```

2. Or disable verification (testing only):

```bash theme={null}
export PREFECT_API_TLS_INSECURE_SKIP_VERIFY=True
```

#### Environment proxy settings

Prefect respects standard proxy environment variables:

```bash theme={null}
export HTTPS_PROXY=http://proxy.example.com:8080
export HTTP_PROXY=http://proxy.example.com:8080
export NO_PROXY=localhost,127.0.0.1,.internal
```

## Deployment examples

### Docker Compose

```yaml theme={null}
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: prefect
      POSTGRES_PASSWORD: prefect
      POSTGRES_DB: prefect
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready -h localhost -U $$POSTGRES_USER
      interval: 2s
      timeout: 5s
      retries: 15

  redis:
    image: redis:7

  migrate:
    image: prefecthq/prefect:3-latest
    depends_on:
      postgres:
        condition: service_healthy
    command: prefect server database upgrade -y
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect

  prefect-api:
    image: prefecthq/prefect:3-latest
    depends_on:
      migrate:
        condition: service_completed_successfully
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    deploy:
      replicas: 3
    command: prefect server start --host 0.0.0.0 --no-services
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
      PREFECT_API_DATABASE_MIGRATE_ON_START: "false"
      PREFECT_MESSAGING_BROKER: prefect_redis.messaging
      PREFECT_MESSAGING_CACHE: prefect_redis.messaging
      PREFECT_SERVER_EVENTS_CAUSAL_ORDERING: prefect_redis.ordering
      PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE: prefect_redis.lease_storage
      PREFECT_REDIS_MESSAGING_HOST: redis
      PREFECT_REDIS_MESSAGING_PORT: "6379"
      PREFECT_SERVER_DOCKET_URL: redis://redis:6379/1
    ports:
      - "4200-4202:4200" # Maps to different ports for each replica

  prefect-background:
    image: prefecthq/prefect:3-latest
    depends_on:
      migrate:
        condition: service_completed_successfully
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    command: prefect server services start
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://prefect:prefect@postgres:5432/prefect
      PREFECT_API_DATABASE_MIGRATE_ON_START: "false"
      PREFECT_MESSAGING_BROKER: prefect_redis.messaging
      PREFECT_MESSAGING_CACHE: prefect_redis.messaging
      PREFECT_SERVER_EVENTS_CAUSAL_ORDERING: prefect_redis.ordering
      PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE: prefect_redis.lease_storage
      PREFECT_REDIS_MESSAGING_HOST: redis
      PREFECT_REDIS_MESSAGING_PORT: "6379"
      PREFECT_SERVER_DOCKET_URL: redis://redis:6379/1

volumes:
  postgres_data:
```

Deploying self-hosted Prefect some other way? Consider [opening a PR](/contribute/docs-contribute) to add your deployment pattern to this guide.

## Operations

### Migration considerations

#### Handling large databases

When running migrations on large database instances (especially where tables like `events`, `flow_runs`, or `task_runs` can reach millions of rows), the default database timeout of 10 seconds may not be sufficient for creating indexes.
If you encounter a `TimeoutError` during migrations, increase the database timeout:

```bash theme={null}
# Set timeout to 10 minutes (adjust based on your database size)
export PREFECT_API_DATABASE_TIMEOUT=600

# Then run the migration
prefect server database upgrade -y
```

For Docker deployments:

```bash theme={null}
docker run -e PREFECT_API_DATABASE_TIMEOUT=600 prefecthq/prefect:latest prefect server database upgrade -y
```

Index creation time scales with table size. A database with millions of events may require 30+ minutes for some migrations. If a migration fails due to timeout, you may need to manually clean up any partially created indexes before retrying.

#### Recovering from failed migrations

If a migration times out while creating indexes, you may need to manually complete it. For example, if migration `7a73514ca2d6` fails:

1. First, check which indexes were partially created:

```sql theme={null}
SELECT indexname FROM pg_indexes
WHERE tablename = 'events' AND indexname LIKE 'ix_events%';
```

2. Manually create the missing indexes using `CONCURRENTLY` to avoid blocking:

```sql theme={null}
-- Drop any partial indexes from the failed migration
DROP INDEX IF EXISTS ix_events__event_related_occurred;
DROP INDEX IF EXISTS ix_events__related_resource_ids;

-- Create the new indexes
CREATE INDEX CONCURRENTLY ix_events__related_gin ON events USING gin(related);
CREATE INDEX CONCURRENTLY ix_events__event_occurred ON events (event, occurred);
CREATE INDEX CONCURRENTLY ix_events__related_resource_ids_gin ON events USING gin(related_resource_ids);
```

3. Mark the migration as complete:

```sql theme={null}
UPDATE alembic_version SET version_num = '7a73514ca2d6';
```

Only use manual recovery if increasing the timeout and retrying the migration doesn't work. Always verify the correct migration version and index definitions from the migration files.
### Monitoring Monitor your multi-server deployment: * **Database connections**: Watch for connection pool exhaustion * **Redis memory**: Ensure adequate memory for message queues * **API response times**: Track latency across different endpoints * **Background service lag**: Monitor time between event creation and processing ### Best practices 1. **Start with 2-3 API instances** and scale based on load 2. **Use connection pooling** to manage database connections efficiently 3. **Monitor extensively** before scaling further (e.g. [Prometheus](https://prometheus.io/) + [Grafana](https://grafana.com/) or [Logfire](https://logfire.pydantic.dev/docs/why/)) 4. **Test failover scenarios** regularly ## Further reading * [Database maintenance](/v3/advanced/database-maintenance) - Monitor table sizes, configure event retention, and manage data growth * [Server concepts](/v3/concepts/server) * Deploy [Helm charts](/v3/advanced/server-helm) for Kubernetes # How to self-host the Prefect Server with Helm Source: https://docs.prefect.io/v3/advanced/server-helm Self-host your own Prefect server and connect a Prefect worker to it with Helm. You can use Helm to manage a [self-hosted Prefect server](https://github.com/PrefectHQ/prefect-helm/tree/main/charts/prefect-server) and a [worker](https://github.com/PrefectHQ/prefect-helm/tree/main/charts/prefect-worker). ## Prerequisites * A Kubernetes cluster * Install the [Helm CLI](https://helm.sh/docs/intro/install/) ## Deploy a server with Helm Configuring ingress or publicly exposing Prefect from the cluster is business dependent and not covered in this tutorial. For details on Ingress configuration, consult the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/). 
### Add the Prefect Helm repository: ```bash theme={null} helm repo add prefect https://prefecthq.github.io/prefect-helm helm repo update ``` ### Create a namespace Create a new namespace for this tutorial (all commands will use this namespace): ```bash theme={null} kubectl create namespace prefect kubectl config set-context --current --namespace=prefect ``` ### Deploy the server For a simple deployment using only the default values defined in the chart: ```bash theme={null} helm install prefect-server prefect/prefect-server --namespace prefect ``` For a customized deployment, first create a `server-values.yaml` file for the server (see [values.yaml template](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-server/values.yaml)): ```yaml theme={null} server: basicAuth: enabled: true existingSecret: server-auth-secret ``` #### Create a secret for the API basic authentication username and password: ```bash theme={null} kubectl create secret generic server-auth-secret \ --namespace prefect --from-literal auth-string='admin:password123' ``` #### Install the server: ```bash theme={null} helm install prefect-server prefect/prefect-server \ --namespace prefect \ -f server-values.yaml ``` Expected output: ``` NAME: prefect-server LAST DEPLOYED: Tue Mar 4 09:08:07 2025 NAMESPACE: prefect STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: Run the following command to port-forward the UI to your localhost: $ kubectl --namespace prefect port-forward svc/prefect-server 4200:4200 Visit http://localhost:4200 to use Prefect! ``` ### Access the Prefect UI: ```bash theme={null} kubectl --namespace prefect port-forward svc/prefect-server 4200:4200 ``` Open `localhost:4200` in your browser. If using basic authentication, sign in with `admin:password123`. 
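Before deploying a worker, you can confirm the port-forwarded API is reachable from your machine. A minimal sketch using only the Python standard library (the `/api/health` endpoint path reflects the default server configuration above; adjust the URL if your setup differs):

```python theme={null}
from urllib.error import URLError
from urllib.request import urlopen


def server_is_healthy(api_url: str, timeout: float = 5.0) -> bool:
    """Return True if the Prefect server's health endpoint responds with 200."""
    try:
        with urlopen(f"{api_url}/health", timeout=timeout) as response:
            return response.status == 200
    except (URLError, OSError, ValueError):
        return False


# After port-forwarding: server_is_healthy("http://localhost:4200/api")
```

Note that with basic authentication enabled, other API endpoints may also require the credentials configured earlier.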
## Deploy a worker with Helm To connect a worker to your self-hosted Prefect server in the same cluster: Create a `worker-values.yaml` file for the worker (see [values.yaml template](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/values.yaml)): ```yaml theme={null} worker: apiConfig: selfHostedServer config: workPool: kube-test selfHostedServerApiConfig: apiUrl: http://prefect-server.prefect.svc.cluster.local:4200/api ``` #### Install the worker: ```bash theme={null} helm install prefect-worker prefect/prefect-worker \ --namespace prefect \ -f worker-values.yaml ``` If the server has basic authentication enabled, instead create a `worker-values.yaml` file that includes the auth configuration (see [values.yaml template](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/values.yaml)): ```yaml theme={null} worker: apiConfig: selfHostedServer config: workPool: kube-test selfHostedServerApiConfig: apiUrl: http://prefect-server.prefect.svc.cluster.local:4200/api basicAuth: enabled: true existingSecret: worker-auth-secret ``` #### Create a secret for the API basic authentication username and password: ```bash theme={null} kubectl create secret generic worker-auth-secret \ --namespace prefect --from-literal auth-string='admin:password123' ``` #### Install the worker: ```bash theme={null} helm install prefect-worker prefect/prefect-worker \ --namespace prefect \ -f worker-values.yaml ``` Expected output: ``` Release "prefect-worker" has been installed. Happy Helming!
NAME: prefect-worker LAST DEPLOYED: Tue Mar 4 11:26:21 2025 NAMESPACE: prefect STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: ``` ## Cleanup To uninstall the self-hosted Prefect server and Prefect worker: ```bash theme={null} helm uninstall prefect-worker helm uninstall prefect-server ``` ## Troubleshooting If you see this error: ``` Error from server (BadRequest): container "prefect-server" in pod "prefect-server-7c87b7f7cf-sgqj2" is waiting to start: CreateContainerConfigError ``` Run `kubectl events` and confirm that the `authString` is correct. If you see this error: ``` prefect.exceptions.PrefectHTTPStatusError: Client error '401 Unauthorized' for url 'http://prefect-server.prefect.svc.cluster.local:4200/api/work_pools/kube-test' Response: {'exception_message': 'Unauthorized'} For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401 An exception occurred. ``` Ensure `basicAuth` is configured in the `worker-values.yaml` file. If you see this error: ``` File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 113, in connect_tcp with map_exceptions(exc_map): File "/usr/local/lib/python3.11/contextlib.py", line 158, in __exit__ self.gen.throw(typ, value, traceback) File "/usr/local/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions raise to_exc(exc) from exc httpcore.ConnectError: [Errno -2] Name or service not known ``` Ensure the `PREFECT_API_URL` environment variable is properly templated by running the following command: ```bash theme={null} helm template prefect-worker prefect/prefect-worker -f worker-values.yaml ``` The URL format should look like the following: ``` http://prefect-server.prefect.svc.cluster.local:4200/api ``` If the worker is not in the same cluster and namespace, the precise format will vary. 
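The in-cluster URL follows the Kubernetes service DNS convention `<service>.<namespace>.svc.cluster.local`. A small sketch for constructing it, using the service and namespace names from this tutorial:

```python theme={null}
def prefect_api_url(
    service: str = "prefect-server",
    namespace: str = "prefect",
    port: int = 4200,
) -> str:
    """Build the in-cluster Prefect API URL from Kubernetes service DNS parts."""
    return f"http://{service}.{namespace}.svc.cluster.local:{port}/api"


# prefect_api_url() -> "http://prefect-server.prefect.svc.cluster.local:4200/api"
```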
For additional troubleshooting and configuration, review the [Prefect Worker Helm Chart](https://github.com/PrefectHQ/prefect-helm/tree/main/charts/prefect-worker). # How to submit flows directly to dynamic infrastructure Source: https://docs.prefect.io/v3/advanced/submit-flows-directly-to-dynamic-infrastructure Submit flows directly to different infrastructure types without a deployment **Beta Feature** This feature is currently in beta. While we encourage you to try it out and provide feedback, please be aware that the API may change in future releases, potentially including breaking changes. Prefect allows you to submit workflows directly to different infrastructure types without requiring a deployment. This enables you to dynamically choose where your workflows run based on their requirements, such as: * Training machine learning models that require GPUs * Processing large datasets that need significant memory * Running lightweight tasks that can use minimal resources ## Benefits Submitting workflows directly to dynamic infrastructure provides several advantages: * **Dynamic resource allocation**: Choose infrastructure based on workflow requirements at runtime * **Cost efficiency**: Use expensive infrastructure only when needed * **Consistency**: Ensure workflows always run on the appropriate infrastructure type * **Simplified workflow management**: No need to create and maintain deployments for different infrastructure types ## Supported infrastructure Direct submission of workflows is currently supported for the following infrastructures: | Infrastructure | Required Package | Decorator | | ------------------------- | -------------------- | --------------------------- | | Docker | `prefect-docker` | `@docker` | | Kubernetes | `prefect-kubernetes` | `@kubernetes` | | AWS ECS | `prefect-aws` | `@ecs` | | Google Cloud Run | `prefect-gcp` | `@cloud_run` | | Google Vertex AI | `prefect-gcp` | `@vertex_ai` | | Azure Container Instances | `prefect-azure` | 
`@azure_container_instance` | Each package can be installed using pip, for example: ```bash theme={null} pip install prefect-docker ``` ## Prerequisites Before submitting workflows to specific infrastructure, you need: 1. A work pool for each infrastructure type you want to use 2. Object storage to associate with your work pool(s) ## Setting up work pools and storage ### Creating a work pool Create work pools for each infrastructure type using the Prefect CLI: ```bash theme={null} prefect work-pool create NAME --type WORK_POOL_TYPE ``` For detailed information on creating and configuring work pools, refer to the [work pools documentation](/v3/deploy/infrastructure-concepts/work-pools). ### Configuring work pool storage To enable Prefect to run workflows in remote infrastructure, work pools need an associated storage location to store serialized versions of submitted workflows and results from workflow runs. Configure storage for your work pools using one of the supported storage types: ```bash S3 theme={null} prefect work-pool storage configure s3 WORK_POOL_NAME \ --bucket BUCKET_NAME \ --aws-credentials-block-name BLOCK_NAME ``` ```bash Google Cloud Storage theme={null} prefect work-pool storage configure gcs WORK_POOL_NAME \ --bucket BUCKET_NAME \ --gcp-credentials-block-name BLOCK_NAME ``` ```bash Azure Blob Storage theme={null} prefect work-pool storage configure azure-blob-storage WORK_POOL_NAME \ --container CONTAINER_NAME \ --azure-blob-storage-credentials-block-name BLOCK_NAME ``` To allow Prefect to upload and download serialized workflows, you can [create a block](/v3/develop/blocks) containing credentials with permission to access your configured storage location. If a credentials block is not provided, Prefect will use the default credentials (for example, a local profile or an IAM role) as determined by the corresponding cloud provider. 
You can inspect your storage configuration using: ```bash theme={null} prefect work-pool storage inspect WORK_POOL_NAME ``` **Local storage for `@docker`** When using the `@docker` decorator with a local Docker engine, you can use volume mounts to share data between your Docker container and host machine. Here's an example: ```python theme={null} from prefect import flow from prefect.filesystems import LocalFileSystem from prefect_docker.experimental import docker result_storage = LocalFileSystem(basepath="/tmp/results") result_storage.save("result-storage", overwrite=True) @docker( work_pool="above-ground", volumes=["/tmp/results:/tmp/results"], ) @flow(result_storage=result_storage) def run_in_docker(name: str): return f"Hello, {name}!" print(run_in_docker("world")) # prints "Hello, world!" ``` To use local storage, ensure that: 1. The volume mount path is identical on both the host and container side 2. The `LocalFileSystem` block's `basepath` matches the path specified in the volume mount ## Running infrastructure-bound flows An infrastructure-bound flow supports three execution modes: direct calling, `.submit()`, and `.submit_to_work_pool()`. Each mode targets a different use case depending on whether you need blocking or non-blocking execution and whether the submitting machine has direct access to the target infrastructure. | Method | Blocking | Requires local infrastructure access | Requires a running worker | | ------------------------ | -------- | ------------------------------------ | ------------------------- | | Direct call | Yes | Yes | No | | `.submit()` | No | Yes | No | | `.submit_to_work_pool()` | No | No | Yes | ### Direct call (blocking) Calling an infrastructure-bound flow directly submits it to remote infrastructure and blocks until the run completes. Prefect spins up a temporary local worker to create the infrastructure and monitor the run.
```python theme={null} from prefect import flow from prefect_kubernetes.experimental.decorators import kubernetes @kubernetes(work_pool="olympic") @flow def my_remote_flow(name: str): print(f"Hello {name}!") @flow def my_flow(): # Blocks until my_remote_flow completes on Kubernetes my_remote_flow("Marvin") my_flow() ``` When you run this code on your machine, `my_flow` executes locally, while `my_remote_flow` is submitted to run in a Kubernetes job. The call blocks until the Kubernetes job finishes. ### Non-blocking submission with `.submit()` Use `.submit()` when you want to submit a flow to remote infrastructure without blocking the caller. Like a direct call, `.submit()` spins up a temporary local worker to create the infrastructure, but it returns a `PrefectFlowRunFuture` immediately so you can continue running other work. Use `.submit()` when: * The submitting machine has access to the target infrastructure (for example, it can connect to the Kubernetes cluster or has permissions to create an ECS task) * You want to run multiple infrastructure-bound flows concurrently * You don't have a worker already running for the work pool ```python theme={null} from prefect import flow from prefect_kubernetes.experimental.decorators import kubernetes @kubernetes(work_pool="olympic") @flow def train_model(dataset: str): print(f"Training on {dataset}") return {"accuracy": 0.95} @flow def orchestrator(): # Submit two training jobs without waiting future_a = train_model.submit(dataset="dataset-a") future_b = train_model.submit(dataset="dataset-b") # Retrieve results when needed result_a = future_a.result() result_b = future_b.result() print(f"Results: {result_a}, {result_b}") orchestrator() ``` In this example, both training jobs are submitted to Kubernetes concurrently. The `orchestrator` flow continues executing and only blocks when it calls `.result()` on each future. 
### Submitting to a work pool with `.submit_to_work_pool()` Use `.submit_to_work_pool()` when you want to submit a flow to remote infrastructure but the submitting machine does not have direct access to create that infrastructure. Instead of spinning up a local worker, this method creates a flow run and places it in the work pool for an already-running worker to pick up. Use `.submit_to_work_pool()` when: * The submitting machine cannot connect to the target infrastructure (for example, it cannot reach the Kubernetes cluster or lacks permissions to create an ECS task) * You already have a worker running that polls the target work pool * You want to separate the submission environment from the execution environment ```python theme={null} from prefect import flow from prefect_aws.experimental import ecs @ecs(work_pool="my-ecs-pool") @flow def process_data(source: str): print(f"Processing {source}") return {"rows": 1000} # Submit to the work pool for an existing worker to execute future = process_data.submit_to_work_pool(source="s3://my-bucket/data.csv") # Retrieve the result once the worker completes the run result = future.result() print(result) ``` Before calling `.submit_to_work_pool()`, start a worker that polls the target work pool: ```bash theme={null} prefect worker start --pool my-ecs-pool ``` ### Working with `PrefectFlowRunFuture` Both `.submit()` and `.submit_to_work_pool()` return a `PrefectFlowRunFuture`. Use this object to check the status of the flow run, wait for it to finish, or retrieve the result. ```python theme={null} future = my_flow.submit(name="Marvin") # Check the current state without blocking print(future.state) # Block until the run completes future.wait() # Retrieve the result (blocks if the run is still in progress) result = future.result() ``` **Parameters must be serializable** Parameters passed to infrastructure-bound flows are serialized with `cloudpickle` to allow them to be transported to the destination infrastructure. 
Most Python objects can be serialized with `cloudpickle`, but objects like database connections cannot be serialized. For parameters that cannot be serialized, create the object inside your infrastructure-bound workflow. ## Customizing infrastructure configuration You can override the default configuration by providing additional kwargs to the infrastructure decorator: ```python theme={null} from prefect import flow from prefect_kubernetes.experimental.decorators import kubernetes @kubernetes( work_pool="my-kubernetes-pool", namespace="custom-namespace" ) @flow def custom_namespace_flow(): pass ``` Any kwargs passed to the infrastructure decorator will override the corresponding default value in the [base job template](/v3/how-to-guides/deployment_infra/manage-work-pools#base-job-template) for the specified work pool. ## Including files in the bundle When a flow runs on remote infrastructure, your code is serialized and sent to the execution environment. However, non-Python files such as configuration files, data files, or model artifacts are not included by default. Use the `include_files` parameter on any infrastructure decorator to bundle additional files alongside your flow. ```python theme={null} from prefect import flow from prefect_docker.experimental import docker @docker( work_pool="my-pool", include_files=["config.yaml", "data/"] ) @flow def my_flow(): import yaml with open("config.yaml") as f: config = yaml.safe_load(f) print(config) ``` The `include_files` parameter accepts a list of relative paths and glob patterns. Paths are resolved relative to the directory containing the flow file. 
### Supported patterns | Pattern | Description | | ----------------- | ---------------------------------------------------- | | `"config.yaml"` | A single file | | `"data/"` | All files in a directory (recursive) | | `"*.yaml"` | Glob pattern matching files in the flow directory | | `"data/**/*.csv"` | Recursive glob pattern | | `"!*.test.py"` | Negation pattern to exclude previously matched files | Patterns are processed in order. Negation patterns (prefixed with `!`) remove files already matched by earlier patterns: ```python theme={null} from prefect import flow from prefect_kubernetes.experimental import kubernetes @kubernetes( work_pool="my-pool", include_files=["*.json", "!fixtures/*.json"] ) @flow def process_json(): ... ``` This example includes all JSON files except those in the `fixtures/` directory. ### Filtering with `.prefectignore` If a `.prefectignore` file exists in the flow file's directory or at the project root (detected via `pyproject.toml`), its patterns are applied to filter out matching files. The `.prefectignore` file uses gitignore-style syntax: ```text theme={null} # .prefectignore *.log tmp/ __pycache__/ ``` Files that match a `.prefectignore` pattern are excluded from the bundle even if they match an `include_files` pattern. ### Default exclusions Certain common directories and file types are always excluded from directory and glob collection, even without a `.prefectignore` file: * `__pycache__/`, `*.pyc`, `*.pyo` * `.git/`, `.hg/`, `.svn/` * `node_modules/`, `.venv/`, `venv/` * `.idea/`, `.vscode/` * `.DS_Store`, `Thumbs.db` Hidden files and directories (names starting with `.`) are also excluded when collecting directories. Files matching sensitive patterns such as `.env*`, `*.pem`, `*.key`, or `credentials.*` are bundled without special treatment. Add sensitive files to `.prefectignore` to prevent accidental inclusion. ## Configuring bundle launchers By default, Prefect uses `uv run` to execute bundle upload and execution commands. 
If your execution environment already has the required dependencies installed—for example, a custom Docker image, a Poetry-managed environment, or a system-level Python interpreter—you can override the default launcher to skip the `uv run` wrapper entirely. You can configure launchers at two levels: * **Per-flow**: using the `launcher` parameter on an infrastructure decorator * **Per-work-pool**: using CLI flags during storage configuration ### Per-flow launcher override Pass the `launcher` parameter to any infrastructure decorator to override the command prefix used for both bundle upload and execution: ```python theme={null} from prefect import flow from prefect_kubernetes.experimental.decorators import kubernetes @kubernetes( work_pool="my-kubernetes-pool", launcher=["python"], ) @flow def my_flow(): print("Running with system Python!") ``` The `launcher` parameter accepts: * A `list[str]` that applies to both upload and execution (for example, `["python"]` or `["poetry", "run", "python"]`) * A `dict` with `"upload"` and/or `"execution"` keys when you need different launchers for each phase: ```python theme={null} from prefect import flow from prefect_docker.experimental import docker @docker( work_pool="my-docker-pool", launcher={ "upload": ["python"], "execution": ["poetry", "run", "python"], }, ) @flow def my_flow(): print("Different launchers for upload and execution!") ``` When a launcher override is provided, Prefect skips the default `uv run` behavior, including automatic dependency installation via `--with` flags. Ensure your execution environment has all required dependencies pre-installed. 
You can also override the launcher with `.with_options()`: ```python theme={null} custom_flow = my_flow.with_options(launcher=["python3.12"]) ``` ### Per-work-pool launcher via CLI Configure a launcher for all flows that use a work pool by passing launcher flags during storage configuration: ```bash theme={null} prefect work-pool storage configure s3 my-pool \ --bucket my-bucket \ --aws-credentials-block-name my-creds \ --launcher python ``` This sets `python` as the launcher for both upload and execution steps on the work pool. To pass additional arguments to the launcher executable, use `--launcher-arg` (repeatable): ```bash theme={null} prefect work-pool storage configure s3 my-pool \ --bucket my-bucket \ --aws-credentials-block-name my-creds \ --launcher python \ --launcher-arg -X \ --launcher-arg utf8 ``` ### Separate upload and execution launchers via CLI To configure different launchers for the upload and execution phases, use the `--upload-launcher` and `--execution-launcher` flags: ```bash theme={null} prefect work-pool storage configure s3 my-pool \ --bucket my-bucket \ --aws-credentials-block-name my-creds \ --upload-launcher python \ --execution-launcher poetry \ --execution-launcher-arg run \ --execution-launcher-arg python ``` You can also combine `--launcher` with phase-specific overrides. The shared launcher serves as the base, and phase-specific flags replace the executable for that phase: ```bash theme={null} prefect work-pool storage configure gcs my-pool \ --bucket my-bucket \ --gcp-credentials-block-name my-creds \ --launcher python \ --execution-launcher poetry \ --execution-launcher-arg run \ --execution-launcher-arg python ``` In this example, upload uses `python` while execution uses `poetry run python`. 
### Inspecting launcher configuration After configuring a launcher, verify the storage settings with: ```bash theme={null} prefect work-pool storage inspect my-pool ``` Use `--output json` for machine-readable output that includes the launcher configuration for both upload and execution steps: ```bash theme={null} prefect work-pool storage inspect my-pool --output json ``` When a per-flow launcher override is provided, it takes precedence over the work pool's configured launcher for that flow submission. ## Further reading * [Work pools](/v3/concepts/work-pools) concept page * [Manage work pools](/v3/how-to-guides/deployment_infra/manage-work-pools) # How to write transactional workflows Source: https://docs.prefect.io/v3/advanced/transactions Prefect supports *transactional semantics* in your workflows that allow you to rollback on task failure and configure groups of tasks that run as an atomic unit. A *transaction* in Prefect corresponds to a job that needs to be done. A transaction runs at most one time, and produces a result record upon completion at a unique address specified by a dynamically computed cache key. These records can be shared across tasks and flows. Under the hood, every Prefect task run is governed by a transaction. In the default mode of task execution, all you need to understand about transactions are [the policies determining the task's cache key computation](/v3/concepts/caching). **Transactions and states** Transactions and states are similar but different in important ways. Transactions determine whether a task should or should not execute, whereas states enable visibility into code execution status. ## Write your first transaction Tasks can be grouped into a common transaction using the `transaction` context manager: ```python theme={null} import os from time import sleep from prefect import task, flow from prefect.transactions import transaction @task def write_file(contents: str): "Writes to a file." 
with open("side-effect.txt", "w") as f: f.write(contents) @write_file.on_rollback def del_file(transaction): "Deletes file." os.unlink("side-effect.txt") @task def quality_test(): "Checks contents of file." with open("side-effect.txt", "r") as f: data = f.readlines() if len(data) < 2: raise ValueError("Not enough data!") @flow def pipeline(contents: str): with transaction(): write_file(contents) sleep(2) # sleeping to give you a chance to see the file quality_test() if __name__ == "__main__": pipeline(contents="hello world") ``` If you run this flow, the call to `pipeline(contents="hello world")` will fail. Importantly, after the flow has exited, there is no `"side-effect.txt"` file in your working directory. This is because the `write_file` task's `on_rollback` hook was executed due to the transaction failing. **`on_rollback` hooks are different from `on_failure` hooks** Note that the `on_rollback` hook is executed when the `quality_test` task fails, not the `write_file` task it is associated with, which succeeded. This is because rollbacks occur whenever the transaction a task is participating in fails, even if that failure is outside the task's local scope. This behavior makes transactions a valuable pattern for managing pipeline failure. ## Transaction lifecycle Every transaction goes through at most four lifecycle stages: * **BEGIN**: in this phase, the transaction's key is computed and looked up. If a record already exists at the key location the transaction considers itself committed. * **STAGE**: in this phase, the transaction stages a piece of data to be committed to its result location. Whether this data is committed or rolled back depends on the commit mode of the transaction. * **ROLLBACK**: if the transaction encounters *any* error **after** staging, it rolls itself back and does not proceed to commit anything. * **COMMIT**: in this final phase, the transaction writes its record to its configured location. At this point the transaction is complete.
It is important to note that rollbacks only occur *after* the transaction has been staged. Revisiting our example from above, there are actually *three* transactions at play: * the larger transaction that begins when `with transaction()` is executed; this transaction remains active throughout the duration of the subtransactions within it. * the transaction associated with the `write_file` task. Upon completion of the `write_file` task, this transaction is now **STAGED**. * the transaction associated with the `quality_test` task. This transaction fails before it can be staged, causing a rollback in its parent transaction which then rolls back any staged subtransactions. In particular, the staged `write_file`'s transaction is rolled back. **Tasks also have `on_commit` lifecycle hooks** In addition to the `on_rollback` hook, a task can also register `on_commit` hooks that execute whenever its transaction is committed. A task run persists its result only at transaction commit time, which could be significantly after the task's completion time if it is within a long running transaction. The signature for an `on_commit` hook is the same as that of an `on_rollback` hook: ```python theme={null} @write_file.on_commit def confirmation(transaction): print("committing a record now using the task's cache key!") ``` ## Idempotency You can ensure sections of code are functionally idempotent by wrapping them in a transaction. By specifying a `key` for your transaction, you can ensure that your code is executed only once. 
For example, here's a flow that downloads some data from an API and writes it to a file: ```python theme={null} from prefect import task, flow from prefect.transactions import transaction @task def download_data(): """Imagine this downloads some data from an API""" return "some data" @task def write_data(data: str): """This writes the data to a file""" with open("data.txt", "w") as f: f.write(data) @flow(log_prints=True) def pipeline(): with transaction(key="download-and-write-data") as txn: if txn.is_committed(): print("Data file has already been written. Exiting early.") return data = download_data() write_data(data) if __name__ == "__main__": pipeline() ``` If you run this flow, it will write data to a file the first time, but it will exit early on subsequent runs because the transaction has already been committed. Giving the transaction a `key` will cause the transaction to write a record on commit signifying that the transaction has completed. The call to `txn.is_committed()` will return `True` only if the persisted record exists. ### Handling race conditions Persisting transaction records works well to ensure sequential executions are idempotent, but what about when multiple transactions with the same key run at the same time? By default, transactions have an isolation level of `READ_COMMITTED` which means that they can see any previously committed records, but they are not prevented from overwriting a record that was created by another transaction between the time they started and the time they committed. To see this behavior in action, run the following script: ```python theme={null} import threading import uuid from prefect import flow, task from prefect.transactions import transaction @task def download_data(): return f"{threading.current_thread().name} is the winner!" @task def write_file(contents: str): "Writes to a file."
with open("race-condition.txt", "w") as f: f.write(contents) @flow def pipeline(transaction_key: str): with transaction(key=transaction_key) as txn: if txn.is_committed(): print("Data file has already been written. Exiting early.") return data = download_data() write_file(data) if __name__ == "__main__": # Run the pipeline twice to see the race condition transaction_key = f"race-condition-{uuid.uuid4()}" thread_1 = threading.Thread(target=pipeline, name="Thread 1", args=(transaction_key,)) thread_2 = threading.Thread(target=pipeline, name="Thread 2", args=(transaction_key,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` If you run this script, you will see that sometimes "Thread 1 is the winner!" is written to the file and sometimes "Thread 2 is the winner!" is written **even though the transactions have the same key**. You can ensure subsequent runs don't exit early by changing the `key` argument between runs. To prevent race conditions, you can set the `isolation_level` of a transaction to `SERIALIZABLE`. This will cause each transaction to take a lock on the provided key. This will prevent other transactions from starting until the first transaction has completed. Here's an updated example that uses `SERIALIZABLE` isolation: ```python theme={null} import threading import uuid from prefect import flow, task from prefect.locking.filesystem import FileSystemLockManager from prefect.results import ResultStore from prefect.settings import PREFECT_HOME from prefect.transactions import IsolationLevel, transaction @task def download_data(): return f"{threading.current_thread().name} is the winner!" @task def write_file(contents: str): "Writes to a file." 
with open("race-condition.txt", "w") as f: f.write(contents) @flow def pipeline(transaction_key: str): with transaction( key=transaction_key, isolation_level=IsolationLevel.SERIALIZABLE, store=ResultStore( lock_manager=FileSystemLockManager( lock_files_directory=PREFECT_HOME.value() / "locks" ) ), ) as txn: if txn.is_committed(): print("Data file has already been written. Exiting early.") return data = download_data() write_file(data) if __name__ == "__main__": transaction_key = f"race-condition-{uuid.uuid4()}" thread_1 = threading.Thread(target=pipeline, name="Thread 1", args=(transaction_key,)) thread_2 = threading.Thread(target=pipeline, name="Thread 2", args=(transaction_key,)) thread_1.start() thread_2.start() thread_1.join() thread_2.join() ``` To use a transaction with the `SERIALIZABLE` isolation level, you must also provide a result store configured with a `lock_manager` to the `transaction` context manager. The lock manager is responsible for acquiring and releasing locks on the transaction key. In the example above, we use a `FileSystemLockManager`, which manages locks as files on the current instance's filesystem. Prefect offers several lock managers for different concurrency use cases: | Lock Manager | Storage | Supports | Module/Package | | ----------------------- | -------------- | ------------------------------------------- | ---------------------------- | | `MemoryLockManager` | In-memory | Single-process workflows using threads | `prefect.locking.memory` | | `FileSystemLockManager` | Filesystem | Multi-process workflows on a single machine | `prefect.locking.filesystem` | | `RedisLockManager` | Redis database | Distributed workflows | `prefect-redis` | ## Access data within transactions Key-value pairs can be set within a transaction and accessed elsewhere within the transaction, including within the `on_rollback` hook.
The code below shows how to set a key-value pair within a transaction and access it within the `on_rollback` hook: ```python theme={null} import os from time import sleep from prefect import task, flow from prefect.transactions import transaction @task def write_file(filename: str, contents: str): "Writes to a file." with open(filename, "w") as f: f.write(contents) @write_file.on_rollback def del_file(txn): "Deletes file." os.unlink(txn.get("filename")) @task def quality_test(filename): "Checks contents of file." with open(filename, "r") as f: data = f.readlines() if len(data) < 2: raise ValueError("Not enough data!") @flow def pipeline(filename: str, contents: str): with transaction() as txn: txn.set("filename", filename) write_file(filename, contents) sleep(2) # sleeping to give you a chance to see the file quality_test(filename) if __name__ == "__main__": pipeline( filename="side-effect.txt", contents="hello world", ) ``` The value of `filename` set on the transaction is accessible within the `on_rollback` hook, which receives the transaction object as its argument. Call `txn.get("key")` to read a value; elsewhere within the transaction, use `get_transaction()` to access the transaction object. # How to emit and use custom events Source: https://docs.prefect.io/v3/advanced/use-custom-event-grammar Learn how to define specific trigger conditions based on custom event grammar. ## Motivating custom events Imagine you are running an e-commerce platform and you want to trigger a deployment when a customer completes an order. There might be a number of events that occur during an order on your platform, for example: * `order.created` * `order.item.added` * `order.payment-method.confirmed` * `order.shipping-method.added` * `order.complete` **Event grammar** The above choices of event names are arbitrary. With Prefect events, you're free to select any event grammar that best represents your use case.
In this case, we want to trigger a deployment when a user completes an order, so our trigger should: * `expect` an `order.complete` event * `after` an `order.created` event * evaluate these conditions `for_each` user id Finally, it should pass the `user_id` as a parameter to the deployment. ### Define the trigger Here's how this looks in code: ```python post_order_deployment.py theme={null} from prefect import flow from prefect.events.schemas.deployment_triggers import DeploymentEventTrigger order_complete = DeploymentEventTrigger( expect={"order.complete"}, after={"order.created"}, for_each={"prefect.resource.id"}, parameters={"user_id": "{{ event.resource.id }}"}, ) @flow(log_prints=True) def post_order_complete(user_id: str): print(f"User {user_id} has completed an order -- doing stuff now") if __name__ == "__main__": post_order_complete.serve(triggers=[order_complete]) ``` **Specify multiple events or resources** The `expect` and `after` fields accept a `set` of event names, so you can specify multiple events for each condition. Similarly, the `for_each` field accepts a `set` of resource ids. ### Simulate events To simulate users causing order status events, run the following in a Python shell or script: ```python simulate_events.py theme={null} import time from prefect.events import emit_event user_id_1, user_id_2 = "123", "456" for event_name, user_id in [ ("order.created", user_id_1), ("order.created", user_id_2), # other user ("order.complete", user_id_1), ]: event = emit_event( event=event_name, resource={"prefect.resource.id": user_id}, ) time.sleep(1) print(f"{user_id} emitted {event_name}") ``` In the above example: * `user_id_1` creates and then completes an order, triggering a run of our deployment. * `user_id_2` creates an order, but no completed event is emitted so no deployment is triggered. 
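To make the trigger grammar concrete, here is a minimal, library-free sketch of the `expect`/`after`/`for_each` semantics used above: for each resource id, the trigger fires only when an `order.complete` event arrives after an `order.created` event was seen for that same id. The class and method names are illustrative only, not Prefect's trigger implementation.

```python
class OrderCompleteTrigger:
    """Sketch of an expect/after trigger evaluated per resource id."""

    def __init__(self, expect: str, after: str):
        self.expect = expect
        self.after = after
        self._seen_after: set[str] = set()  # resource ids that emitted `after`

    def evaluate(self, event_name: str, resource_id: str) -> bool:
        """Return True when the trigger should fire for this event."""
        if event_name == self.after:
            self._seen_after.add(resource_id)
            return False
        return event_name == self.expect and resource_id in self._seen_after


trigger = OrderCompleteTrigger(expect="order.complete", after="order.created")
assert trigger.evaluate("order.created", "123") is False
assert trigger.evaluate("order.created", "456") is False
assert trigger.evaluate("order.complete", "123") is True   # user 123 completed an order
assert trigger.evaluate("order.complete", "789") is False  # no matching order.created
```

This mirrors the simulation above: `user_id_1` ("123") fires the trigger, while an id with no prior `order.created` does not.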
# How to configure worker healthchecks Source: https://docs.prefect.io/v3/advanced/worker-healthchecks Learn how to monitor worker health and automatically restart workers when they become unresponsive. Worker healthchecks provide a way to monitor whether your Prefect workers are functioning properly and polling for work as expected. This is particularly useful in production environments where you need to ensure workers are available to execute scheduled flow runs. ## Overview Worker healthchecks work by: 1. **Tracking polling activity**: Workers record when they last polled for flow runs from their work pool 2. **Exposing a health endpoint**: When enabled, workers start an HTTP server that provides health status 3. **Detecting unresponsive workers**: The health endpoint returns an error status if the worker hasn't polled recently This allows external monitoring systems, container orchestrators, or process managers to detect and restart unhealthy workers automatically. ## Enabling Healthchecks Start a worker with healthchecks enabled using the `--with-healthcheck` flag: ```bash theme={null} prefect worker start --pool "my-pool" --with-healthcheck ``` This starts both the worker and a lightweight HTTP health server that exposes a `/health` endpoint. 
When enabled, the worker exposes an HTTP endpoint at: ``` http://localhost:8080/health ``` For GET requests the endpoint returns: * **200 OK** with `{"message": "OK"}` when the worker is healthy * **503 Service Unavailable** with `{"message": "Worker may be unresponsive at this time"}` when the worker is unhealthy ### Configuring the Health Server You can customize the health server's host and port using environment variables: ```bash theme={null} export PREFECT_WORKER_WEBSERVER_HOST=0.0.0.0 export PREFECT_WORKER_WEBSERVER_PORT=9090 prefect worker start --pool "my-pool" --with-healthcheck ``` ## Health Detection Logic A worker is considered unhealthy if it hasn't polled for flow runs within a specific timeframe defined by its configured polling interval. The health check algorithm works as follows: If a worker hasn't made a successful poll within the time window of `PREFECT_WORKER_QUERY_SECONDS * 30` seconds, it is considered unhealthy and its health endpoint will return 503 (Service Unavailable). With default settings, a worker is unhealthy if it hasn't polled in 450 seconds (7.5 minutes). This generous threshold accounts for temporary network issues, API unavailability, or brief worker pauses without triggering false alarms. ## Production Deployment Patterns ### Docker with Health Checks Use Docker's built-in health check functionality by including these lines in your Dockerfile: ```dockerfile theme={null} # Health check configuration HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \ CMD python -c "import urllib.request as u; u.urlopen('http://localhost:8080/health', timeout=1)" # Start worker with healthcheck CMD ["prefect", "worker", "start", "--pool", "my-pool", "--with-healthcheck"] ``` For more information see [Docker's reference guide](https://docs.docker.com/reference/dockerfile/#healthcheck). 
### Kubernetes with Liveness Probes Configure Kubernetes to automatically restart unhealthy worker pods by including this configuration in your worker deployment: ```yaml theme={null} livenessProbe: httpGet: path: /health port: 8080 initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 10 failureThreshold: 3 ``` This is enabled by default when using [Prefect's Helm Chart](/v3/advanced/server-helm). ### Docker Compose with Health Checks Use Docker Compose's built-in health check functionality by including these lines in your Docker Compose file: ```yaml theme={null} version: '3.8' services: prefect-worker: image: my-prefect-worker:latest command: ["prefect", "worker", "start", "--pool", "my-pool", "--with-healthcheck"] healthcheck: test: ["CMD", "python", "-c", "import urllib.request as u; u.urlopen('http://localhost:8080/health', timeout=1)"] interval: 30s timeout: 10s retries: 3 start_period: 60s restart: unless-stopped ``` For more information see [Docker Compose's reference guide](https://docs.docker.com/reference/compose-file/services/#healthcheck).
## Troubleshooting ### Health endpoint not accessible * Verify the worker was started with `--with-healthcheck` * Check that the configured host/port is accessible (default: `http://localhost:8080/health`) * Ensure no firewall rules are blocking the health port * Port 8080 may conflict with other services; change it with `PREFECT_WORKER_WEBSERVER_PORT` * Verify configuration: `prefect config view --show-defaults | grep WORKER` ### Worker appears healthy but not picking up flows * Health checks only verify polling activity, not successful flow execution * Check work pool and work queue configuration: ensure worker is polling the correct pool/queue * Verify deployment configuration matches worker capabilities * Review flow run states - flows may be stuck in PENDING due to concurrency limits * Enable debug logging: set `PREFECT_LOGGING_LEVEL=DEBUG` on the worker to see detailed polling activity * Increase polling frequency temporarily: `PREFECT_WORKER_QUERY_SECONDS=5` ### False positive health failures * Increase `PREFECT_WORKER_QUERY_SECONDS` if your API has high latency * Check for network connectivity issues between worker and Prefect API * Review worker logs for authentication or authorization errors (API key issues) * Verify `PREFECT_API_URL` is correctly configured and accessible * Check for temporary API outages or [rate limiting](/v3/concepts/rate-limits) ## Related Configuration Relevant settings for worker health and polling behavior: * `PREFECT_WORKER_HEARTBEAT_SECONDS`: How often workers send heartbeats to the API (default: 30) * `PREFECT_WORKER_QUERY_SECONDS`: How often workers poll for new flow runs (default: 15) * `PREFECT_WORKER_PREFETCH_SECONDS`: How far in advance to submit flow runs (default: 10) * `PREFECT_WORKER_WEBSERVER_HOST`: Health server host (default: 127.0.0.1) * `PREFECT_WORKER_WEBSERVER_PORT`: Health server port (default: 8080) ## Further Reading For more information on worker configuration, see the [Workers concept
guide](/v3/concepts/workers/). # Source: https://docs.prefect.io/v3/api-ref/cli/artifact # `prefect artifact` ```command theme={null} prefect artifact [OPTIONS] COMMAND [ARGS]... ``` Inspect and delete artifacts. ## `prefect artifact ls` ```command theme={null} prefect artifact ls [OPTIONS] ``` List artifacts. The maximum number of artifacts to return. Whether or not to only return the latest version of each artifact. Specify an output format. Currently supports: json ## `prefect artifact inspect` ```command theme={null} prefect artifact inspect [OPTIONS] KEY ``` View details about an artifact. \[required] The maximum number of artifacts to return. Specify an output format. Currently supports: json **Example:** `$ prefect artifact inspect "my-artifact"` ```json theme={null} [ { 'id': 'ba1d67be-0bd7-452e-8110-247fe5e6d8cc', 'created': '2023-03-21T21:40:09.895910+00:00', 'updated': '2023-03-21T21:40:09.895910+00:00', 'key': 'my-artifact', 'type': 'markdown', 'description': None, 'data': 'my markdown', 'metadata_': None, 'flow_run_id': '8dc54b6f-6e24-4586-a05c-e98c6490cb98', 'task_run_id': None }, { 'id': '57f235b5-2576-45a5-bd93-c829c2900966', 'created': '2023-03-27T23:16:15.536434+00:00', 'updated': '2023-03-27T23:16:15.536434+00:00', 'key': 'my-artifact', 'type': 'markdown', 'description': 'my-artifact-description', 'data': 'my markdown', 'metadata_': None, 'flow_run_id': 'ffa91051-f249-48c1-ae0f-4754fcb7eb29', 'task_run_id': None } ] ``` ## `prefect artifact delete` ```command theme={null} prefect artifact delete [OPTIONS] [KEY] ``` Delete an artifact. The key of the artifact to delete. The ID of the artifact to delete. **Example:** `$ prefect artifact delete "my-artifact"` # Source: https://docs.prefect.io/v3/api-ref/cli/automation # `prefect automation` ```command theme={null} prefect automation [OPTIONS] COMMAND [ARGS]... ``` Manage automations. ## `prefect automation ls` ```command theme={null} prefect automation ls [OPTIONS] ``` List all automations. 
## `prefect automation inspect` ```command theme={null} prefect automation inspect [OPTIONS] [NAME] ``` Inspect an automation. Arguments: name: the name of the automation to inspect id: the id of the automation to inspect yaml: output as YAML json: output as JSON Examples: `$ prefect automation inspect "my-automation"` `$ prefect automation inspect --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` `$ prefect automation inspect "my-automation" --yaml` `$ prefect automation inspect "my-automation" --output json` `$ prefect automation inspect "my-automation" --output yaml` An automation's name An automation's id Output as YAML Output as JSON Specify an output format. Currently supports: json, yaml ## `prefect automation resume` ```command theme={null} prefect automation resume [OPTIONS] [NAME] ``` Resume an automation. Arguments: name: the name of the automation to resume id: the id of the automation to resume Examples: `$ prefect automation resume "my-automation"` `$ prefect automation resume --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation enable` ```command theme={null} prefect automation enable [OPTIONS] [NAME] ``` Enable an automation. Arguments: name: the name of the automation to enable id: the id of the automation to enable Examples: `$ prefect automation enable "my-automation"` `$ prefect automation enable --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation pause` ```command theme={null} prefect automation pause [OPTIONS] [NAME] ``` Pause an automation. Arguments: name: the name of the automation to pause id: the id of the automation to pause Examples: `$ prefect automation pause "my-automation"` `$ prefect automation pause --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation disable` ```command theme={null} prefect automation disable [OPTIONS] [NAME] ``` Disable an automation.
Arguments: name: the name of the automation to disable id: the id of the automation to disable Examples: `$ prefect automation disable "my-automation"` `$ prefect automation disable --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` An automation's name An automation's id ## `prefect automation delete` ```command theme={null} prefect automation delete [OPTIONS] [NAME] ``` Delete an automation. An automation's name An automation's id Delete all automations **Example:** `$ prefect automation delete "my-automation"` `$ prefect automation delete --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"` `$ prefect automation delete --all` ## `prefect automation create` ```command theme={null} prefect automation create [OPTIONS] ``` Create one or more automations from a file or JSON string. Path to YAML or JSON file containing automation(s) JSON string containing automation(s) **Example:** `$ prefect automation create --from-file automation.yaml` `$ prefect automation create -f automation.json` `$ prefect automation create --from-json '{"name": "my-automation", "trigger": {...}, "actions": [...]}'` `$ prefect automation create -j '[{"name": "auto1", ...}, {"name": "auto2", ...}]'` ## `prefect automation update` ```command theme={null} prefect automation update [OPTIONS] ``` Update an existing automation from a file or JSON string. The ID of the automation to update Path to YAML or JSON file containing the updated automation JSON string containing the updated automation **Example:** `$ prefect automation update --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" --from-file automation.yaml` `$ prefect automation update --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" -f automation.json` `$ prefect automation update --id "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" --from-json '{"name": "updated-automation", "trigger": {...}, "actions": [...]}'` # Source: https://docs.prefect.io/v3/api-ref/cli/block # `prefect block` ```command theme={null} prefect block [OPTIONS] COMMAND [ARGS]... ``` Manage blocks.
## `prefect block register` ```command theme={null} prefect block register [OPTIONS] ``` Register block types within a module or file. This makes the blocks available for configuration via the UI. If a block type has already been registered, its registration will be updated to match the block's current definition. Examples: Register block types in a Python module: \$ prefect block register -m prefect\_aws.credentials Register block types in a .py file: \$ prefect block register -f my\_blocks.py Python module containing block types to be registered Path to .py file containing block types to be registered ## `prefect block ls` ```command theme={null} prefect block ls [OPTIONS] ``` View all configured blocks. Specify an output format. Currently supports: json ## `prefect block delete` ```command theme={null} prefect block delete [OPTIONS] [SLUG] ``` Delete a configured block. A block slug. Formatted as '\<BLOCK_TYPE_SLUG>/\<BLOCK_NAME>' A block id. ## `prefect block create` ```command theme={null} prefect block create [OPTIONS] BLOCK_TYPE_SLUG ``` Generate a link to the Prefect UI to create a block. A block type slug. View available types with: prefect block type ls \[required] ## `prefect block inspect` ```command theme={null} prefect block inspect [OPTIONS] [SLUG] ``` Displays details about a configured block. A Block slug: \<BLOCK_TYPE_SLUG>/\<BLOCK_NAME> A Block id to search for if no slug is given ## `prefect block types` ```command theme={null} prefect block types [OPTIONS] COMMAND [ARGS]... ``` Inspect and delete block types. ### `prefect block types ls` ```command theme={null} prefect block types ls [OPTIONS] ``` List all block types. Specify an output format. Currently supports: json ### `prefect block types inspect` ```command theme={null} prefect block types inspect [OPTIONS] SLUG ``` Display details about a block type. A block type slug \[required] ### `prefect block types delete` ```command theme={null} prefect block types delete [OPTIONS] SLUG ``` Delete an unprotected Block Type.
A Block type slug \[required] ## `prefect block type` ```command theme={null} prefect block type [OPTIONS] COMMAND [ARGS]... ``` Inspect and delete block types. ### `prefect block type ls` ```command theme={null} prefect block type ls [OPTIONS] ``` List all block types. Specify an output format. Currently supports: json ### `prefect block type inspect` ```command theme={null} prefect block type inspect [OPTIONS] SLUG ``` Display details about a block type. A block type slug \[required] ### `prefect block type delete` ```command theme={null} prefect block type delete [OPTIONS] SLUG ``` Delete an unprotected Block Type. A Block type slug \[required] # Source: https://docs.prefect.io/v3/api-ref/cli/concurrency-limit # `prefect concurrency-limit` ```command theme={null} prefect concurrency-limit [OPTIONS] COMMAND [ARGS]... ``` Manage task-level concurrency limits. ## `prefect concurrency-limit create` ```command theme={null} prefect concurrency-limit create [OPTIONS] TAG CONCURRENCY_LIMIT ``` Create a concurrency limit against a tag. This limit controls how many task runs with that tag may simultaneously be in a Running state. \[required] \[required] ## `prefect concurrency-limit inspect` ```command theme={null} prefect concurrency-limit inspect [OPTIONS] TAG ``` View details about a concurrency limit. `active_slots` shows a list of TaskRun IDs which are currently using a concurrency slot. \[required] Specify an output format. Currently supports: json ## `prefect concurrency-limit ls` ```command theme={null} prefect concurrency-limit ls [OPTIONS] ``` View all concurrency limits. ## `prefect concurrency-limit reset` ```command theme={null} prefect concurrency-limit reset [OPTIONS] TAG ``` Resets the concurrency limit slots set on the specified tag. \[required] ## `prefect concurrency-limit delete` ```command theme={null} prefect concurrency-limit delete [OPTIONS] TAG ``` Delete the concurrency limit set on the specified tag. 
\[required] # Source: https://docs.prefect.io/v3/api-ref/cli/config # `prefect config` ```command theme={null} prefect config [OPTIONS] COMMAND [ARGS]... ``` View and set Prefect profiles. ## `prefect config set` ```command theme={null} prefect config set [OPTIONS] SETTINGS... ``` Change the value for a setting by setting the value in the current profile. \[required] ## `prefect config validate` ```command theme={null} prefect config validate [OPTIONS] ``` Read and validate the current profile. Deprecated settings will be automatically converted to new names unless both are set. ## `prefect config unset` ```command theme={null} prefect config unset [OPTIONS] SETTING_NAMES... ``` Restore the default value for a setting. Removes the setting from the current profile. \[required] ## `prefect config view` ```command theme={null} prefect config view [OPTIONS] ``` Display the current settings. Toggle display of default settings. \--show-defaults displays all settings, even if they are not changed from the default values. \--hide-defaults displays only settings that are changed from default values. Toggle display of the source of a value for a setting. The value for a setting can come from the current profile, environment variables, or the defaults. Toggle display of secrets setting values. # Source: https://docs.prefect.io/v3/api-ref/cli/dashboard # `prefect dashboard` ```command theme={null} prefect dashboard [OPTIONS] COMMAND [ARGS]... ``` Commands for interacting with the Prefect UI. ## `prefect dashboard open` ```command theme={null} prefect dashboard open [OPTIONS] ``` Open the Prefect UI in the browser. # Source: https://docs.prefect.io/v3/api-ref/cli/deployment # `prefect deployment` ```command theme={null} prefect deployment [OPTIONS] COMMAND [ARGS]... ``` Manage deployments. ## `prefect deployment inspect` ```command theme={null} prefect deployment inspect [OPTIONS] NAME ``` View details about a deployment. \[required] Specify an output format. 
Currently supports: json **Example:** `$ prefect deployment inspect "hello-world/my-deployment"` ```python theme={null} { 'id': '610df9c3-0fb4-4856-b330-67f588d20201', 'created': '2022-08-01T18:36:25.192102+00:00', 'updated': '2022-08-01T18:36:25.188166+00:00', 'name': 'my-deployment', 'description': None, 'flow_id': 'b57b0aa2-ef3a-479e-be49-381fb0483b4e', 'schedules': None, 'parameters': {'name': 'Marvin'}, 'tags': ['test'], 'parameter_openapi_schema': { 'title': 'Parameters', 'type': 'object', 'properties': { 'name': { 'title': 'name', 'type': 'string' } }, 'required': ['name'] }, 'storage_document_id': '63ef008f-1e5d-4e07-a0d4-4535731adb32', 'infrastructure_document_id': '6702c598-7094-42c8-9785-338d2ec3a028', 'infrastructure': { 'type': 'process', 'env': {}, 'labels': {}, 'name': None, 'command': ['python', '-m', 'prefect.engine'], 'stream_output': True } } ``` ## `prefect deployment ls` ```command theme={null} prefect deployment ls [OPTIONS] ``` View all deployments or deployments for specific flows. Specify an output format. Currently supports: json ## `prefect deployment run` ```command theme={null} prefect deployment run [OPTIONS] [NAME] ``` Create a flow run for the given flow and deployment. The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified. The flow run will not execute until a worker starts. To watch the flow run until it reaches a terminal state, use the `--watch` flag. A deployed flow's name: \<FLOW_NAME>/\<DEPLOYMENT_NAME> A deployment id to search for if no name is given A key, value pair (key=value) specifying a flow run job variable. The value will be interpreted as JSON. May be passed multiple times to specify multiple job variable values. A key, value pair (key=value) specifying a flow parameter. The value will be interpreted as JSON. May be passed multiple times to specify multiple parameter values. A mapping of parameters to values. To use stdin, pass '-'.
Any parameters passed with `--param` will take precedence over these values. A human-readable string specifying a time interval to wait before starting the flow run. E.g. 'in 5 minutes', 'in 1 hour', 'in 2 days'. A human-readable string specifying a time to start the flow run. E.g. 'at 5:30pm', 'at 2022-08-01 17:30', 'at 2022-08-01 17:30:00'. Tag(s) to be applied to flow run. Whether to poll the flow run until a terminal state is reached. How often to poll the flow run for state changes (in seconds). Timeout for --watch. Custom name to give the flow run. ## `prefect deployment delete` ```command theme={null} prefect deployment delete [OPTIONS] [NAME] ``` Delete a deployment. A deployed flow's name: \<FLOW_NAME>/\<DEPLOYMENT_NAME> A deployment id to search for if no name is given Delete all deployments **Example:** ```bash theme={null} $ prefect deployment delete test_flow/test_deployment $ prefect deployment delete --id dfd3e220-a130-4149-9af6-8d487e02fea6 ``` ## `prefect deployment schedule` ```command theme={null} prefect deployment schedule [OPTIONS] COMMAND [ARGS]... ``` Manage deployment schedules. ### `prefect deployment schedule create` ```command theme={null} prefect deployment schedule create [OPTIONS] NAME ``` Create a schedule for a given deployment. \[required] An interval to schedule on, specified in seconds The anchor date for an interval schedule Deployment schedule rrule string Deployment schedule cron string Control how croniter handles `day` and `day_of_week` entries Deployment schedule timezone string e.g. 'America/New\_York' Whether the schedule is active. Defaults to True. Replace the deployment's current schedule(s) with this new schedule. Accept the confirmation prompt without prompting ### `prefect deployment schedule delete` ```command theme={null} prefect deployment schedule delete [OPTIONS] DEPLOYMENT_NAME SCHEDULE_ID ``` Delete a deployment schedule.
\[required] \[required] Accept the confirmation prompt without prompting ### `prefect deployment schedule pause` ```command theme={null} prefect deployment schedule pause [OPTIONS] [DEPLOYMENT_NAME] [SCHEDULE_ID] ``` Pause deployment schedules. Pause all deployment schedules **Example:** Pause a specific schedule: \$ prefect deployment schedule pause my-flow/my-deployment abc123-... Pause all schedules: \$ prefect deployment schedule pause --all ### `prefect deployment schedule resume` ```command theme={null} prefect deployment schedule resume [OPTIONS] [DEPLOYMENT_NAME] [SCHEDULE_ID] ``` Resume deployment schedules. Resume all deployment schedules **Example:** Resume a specific schedule: \$ prefect deployment schedule resume my-flow/my-deployment abc123-... Resume all schedules: \$ prefect deployment schedule resume --all ### `prefect deployment schedule ls` ```command theme={null} prefect deployment schedule ls [OPTIONS] DEPLOYMENT_NAME ``` View all schedules for a deployment. \[required] Specify an output format. Currently supports: json ### `prefect deployment schedule clear` ```command theme={null} prefect deployment schedule clear [OPTIONS] DEPLOYMENT_NAME ``` Clear all schedules for a deployment. \[required] Accept the confirmation prompt without prompting # Source: https://docs.prefect.io/v3/api-ref/cli/dev # `prefect dev` ```command theme={null} prefect dev [OPTIONS] COMMAND [ARGS]... ``` Internal Prefect development. Note that many of these commands require extra dependencies (such as npm and MkDocs) to function properly. ## `prefect dev build-docs` ```command theme={null} prefect dev build-docs [OPTIONS] ``` Builds REST API reference documentation for static display. ## `prefect dev build-ui` ```command theme={null} prefect dev build-ui [OPTIONS] ``` Installs dependencies and builds UI locally. Requires npm. ## `prefect dev ui` ```command theme={null} prefect dev ui [OPTIONS] ``` Starts a hot-reloading development UI. 
## `prefect dev api` ```command theme={null} prefect dev api [OPTIONS] ``` Starts a hot-reloading development API. ## `prefect dev start` ```command theme={null} prefect dev start [OPTIONS] ``` Starts a hot-reloading development server with API, UI, and agent processes. Each service has an individual command if you wish to start them separately. Each service can be excluded here as well. ## `prefect dev build-image` ```command theme={null} prefect dev build-image [OPTIONS] ``` Build a docker image for development. The architecture to build the container for. Defaults to the architecture of the host Python. \[default: x86\_64] The Python version to build the container for. Defaults to the version of the host Python. \[default: 3.12] An alternative flavor to build, for example 'conda'. Defaults to the standard Python base image This will directly pass a --build-arg into the docker build process. Can be added to the command line multiple times. ## `prefect dev container` ```command theme={null} prefect dev container [OPTIONS] ``` Run a docker container with local code mounted and installed. # Source: https://docs.prefect.io/v3/api-ref/cli/event # `prefect event` ```command theme={null} prefect event [OPTIONS] COMMAND [ARGS]... ``` Stream events. ## `prefect event stream` ```command theme={null} prefect event stream [OPTIONS] ``` Subscribes to the event stream of a workspace, printing each event as it is received. By default, events are printed as JSON, but can be printed as text by passing `--format text`. Output format (json or text) File to write events to Stream events for entire account, including audit logs Stream only one event ## `prefect event emit` ```command theme={null} prefect event emit [OPTIONS] EVENT ``` Emit a single event to Prefect. The name of the event \[required] Resource specification as 'key=value' or JSON. Can be used multiple times. 
The resource ID (shorthand for --resource prefect.resource.id=\<id>) Related resources as JSON string Event payload as JSON string **Example:** ```bash theme={null} # Simple event with resource ID prefect event emit user.logged_in --resource-id user-123 # Event with payload prefect event emit order.shipped --resource-id order-456 --payload '{"tracking": "ABC123"}' # Event with full resource specification prefect event emit customer.subscribed --resource '{"prefect.resource.id": "customer-789", "prefect.resource.name": "ACME Corp"}' ``` # Source: https://docs.prefect.io/v3/api-ref/cli/flow # `prefect flow` ```command theme={null} prefect flow [OPTIONS] COMMAND [ARGS]... ``` View and serve flows. ## `prefect flow ls` ```command theme={null} prefect flow ls [OPTIONS] ``` View flows. ## `prefect flow serve` ```command theme={null} prefect flow serve [OPTIONS] ENTRYPOINT ``` Serve a flow via an entrypoint. The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py:flow_func_name`. \[required] The name to give the deployment created for the flow. The description to give the created deployment. If not provided, the description will be populated from the flow's description. A version to give the created deployment. One or more optional tags to apply to the created deployment. A cron string that will be used to set a schedule for the created deployment. An integer specifying an interval (in seconds) between scheduled runs of the flow. The start date for an interval schedule. An RRule that will be used to set a schedule for the created deployment. Timezone to use for scheduling flow runs, e.g. 'America/New\_York' If set, the provided schedules will be paused when the serve command is stopped. If not set, the schedules will continue running. The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use global\_limit.
The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment. # Source: https://docs.prefect.io/v3/api-ref/cli/flow-run # `prefect flow-run` ```command theme={null} prefect flow-run [OPTIONS] COMMAND [ARGS]... ``` Interact with flow runs. ## `prefect flow-run inspect` ```command theme={null} prefect flow-run inspect [OPTIONS] ID ``` View details about a flow run. \[required] Open the flow run in a web browser. Specify an output format. Currently supports: json ## `prefect flow-run ls` ```command theme={null} prefect flow-run ls [OPTIONS] ``` View recent flow runs or flow runs for specific flows. Arguments: flow\_name: Name of the flow limit: Maximum number of flow runs to list. Defaults to 15. state: Name of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', 'PAUSED', 'SUSPENDED', 'AWAITINGRETRY', 'RETRYING', and 'LATE'. state\_type: Type of the flow run's state. Can be provided multiple times. Options are 'SCHEDULED', 'PENDING', 'RUNNING', 'COMPLETED', 'FAILED', 'CRASHED', 'CANCELLING', 'CANCELLED', and 'PAUSED'. Examples: \$ prefect flow-run ls --state Running \$ prefect flow-run ls --state Running --state late \$ prefect flow-run ls --state-type RUNNING \$ prefect flow-run ls --state-type RUNNING --state-type FAILED Name of the flow Maximum number of flow runs to list Name of the flow run's state Type of the flow run's state Specify an output format. Currently supports: json ## `prefect flow-run delete` ```command theme={null} prefect flow-run delete [OPTIONS] ID ``` Delete a flow run by ID. \[required] ## `prefect flow-run cancel` ```command theme={null} prefect flow-run cancel [OPTIONS] ID ``` Cancel a flow run by ID. \[required] ## `prefect flow-run retry` ```command theme={null} prefect flow-run retry [OPTIONS] ID_OR_NAME ``` Retry a failed or completed flow run. 
The flow run can be specified by either its UUID or its name. If multiple flow runs have the same name, you must use the UUID to disambiguate. If the flow run has an associated deployment, it will be scheduled for retry and a worker will pick it up. If there is no deployment, you must provide an --entrypoint to the flow code, and the flow will execute locally.  Examples: \$ prefect flow-run retry abc123-def456-7890-... \$ prefect flow-run retry my-flow-run-name \$ prefect flow-run retry abc123 --entrypoint ./flows/my\_flow\.py:my\_flow The flow run ID (UUID) or name to retry. \[required] The path to a file containing the flow to run, and the name of the flow function, in the format `path/to/file.py:flow_function_name`. Required if the flow run does not have an associated deployment. ## `prefect flow-run logs` ```command theme={null} prefect flow-run logs [OPTIONS] ID ``` View logs for a flow run. \[required] Show the first 20 logs instead of all logs. Number of logs to show when using the --head or --tail flag. If None, defaults to 20. Reverse the logs order to print the most recent logs first Show the last 20 logs instead of all logs. ## `prefect flow-run execute` ```command theme={null} prefect flow-run execute [OPTIONS] [ID] ``` ID of the flow run to execute # Source: https://docs.prefect.io/v3/api-ref/cli/global-concurrency-limit # `prefect global-concurrency-limit` ```command theme={null} prefect global-concurrency-limit [OPTIONS] COMMAND [ARGS]... ``` Manage global concurrency limits. ## `prefect global-concurrency-limit ls` ```command theme={null} prefect global-concurrency-limit ls [OPTIONS] ``` List all global concurrency limits. Specify an output format. Currently supports: json ## `prefect global-concurrency-limit inspect` ```command theme={null} prefect global-concurrency-limit inspect [OPTIONS] NAME ``` Inspect a global concurrency limit. The name of the global concurrency limit to inspect. \[required] Specify an output format. 
Currently supports: json Path to .json file to write the global concurrency limit output to. ## `prefect global-concurrency-limit delete` ```command theme={null} prefect global-concurrency-limit delete [OPTIONS] NAME ``` Delete a global concurrency limit. The name of the global concurrency limit to delete. \[required] ## `prefect global-concurrency-limit enable` ```command theme={null} prefect global-concurrency-limit enable [OPTIONS] NAME ``` Enable a global concurrency limit. The name of the global concurrency limit to enable. \[required] ## `prefect global-concurrency-limit disable` ```command theme={null} prefect global-concurrency-limit disable [OPTIONS] NAME ``` Disable a global concurrency limit. The name of the global concurrency limit to disable. \[required] ## `prefect global-concurrency-limit update` ```command theme={null} prefect global-concurrency-limit update [OPTIONS] NAME ``` Update a global concurrency limit. The name of the global concurrency limit to update. \[required] Enable the global concurrency limit. Disable the global concurrency limit. The limit of the global concurrency limit. The number of active slots. The slot decay per second. **Example:** \$ prefect global-concurrency-limit update my-gcl --limit 10 \$ prefect gcl update my-gcl --active-slots 5 \$ prefect gcl update my-gcl --slot-decay-per-second 0.5 \$ prefect gcl update my-gcl --enable \$ prefect gcl update my-gcl --disable --limit 5 ## `prefect global-concurrency-limit create` ```command theme={null} prefect global-concurrency-limit create [OPTIONS] NAME ``` Create a global concurrency limit. Arguments: name (str): The name of the global concurrency limit to create. limit (int): The limit of the global concurrency limit. disable (Optional\[bool]): Create an inactive global concurrency limit. active\_slots (Optional\[int]): The number of active slots. slot\_decay\_per\_second (Optional\[float]): The slot decay per second. 
Examples: \$ prefect global-concurrency-limit create my-gcl --limit 10 \$ prefect gcl create my-gcl --limit 5 --active-slots 3 \$ prefect gcl create my-gcl --limit 5 --active-slots 3 --slot-decay-per-second 0.5 \$ prefect gcl create my-gcl --limit 5 --inactive The name of the global concurrency limit to create. \[required] The limit of the global concurrency limit. Create an inactive global concurrency limit. The number of active slots. The slot decay per second. # Source: https://docs.prefect.io/v3/api-ref/cli/init # `prefect init` ```command theme={null} prefect init [OPTIONS] ``` One or more fields to pass to the recipe (e.g., image\_name) in the format of key=value. # Source: https://docs.prefect.io/v3/api-ref/cli/profile # `prefect profile` ```command theme={null} prefect profile [OPTIONS] COMMAND [ARGS]... ``` Select and manage Prefect profiles. ## `prefect profile ls` ```command theme={null} prefect profile ls [OPTIONS] ``` List profile names. ## `prefect profile create` ```command theme={null} prefect profile create [OPTIONS] NAME ``` Create a new profile. \[required] Copy an existing profile. ## `prefect profile use` ```command theme={null} prefect profile use [OPTIONS] NAME ``` Set the given profile to active. \[required] ## `prefect profile delete` ```command theme={null} prefect profile delete [OPTIONS] NAME ``` Delete the given profile. \[required] ## `prefect profile rename` ```command theme={null} prefect profile rename [OPTIONS] NAME NEW_NAME ``` Change the name of a profile. \[required] \[required] ## `prefect profile inspect` ```command theme={null} prefect profile inspect [OPTIONS] [NAME] ``` Display settings from a given profile; defaults to active. Name of profile to inspect; defaults to active profile. Specify an output format. 
Currently supports: json ## `prefect profile populate-defaults` ```command theme={null} prefect profile populate-defaults [OPTIONS] ``` Populate the profiles configuration with default base profiles, preserving existing user profiles. # Source: https://docs.prefect.io/v3/api-ref/cli/profiles # `prefect profiles` ```command theme={null} prefect profiles [OPTIONS] COMMAND [ARGS]... ``` Select and manage Prefect profiles. ## `prefect profiles ls` ```command theme={null} prefect profiles ls [OPTIONS] ``` List profile names. ## `prefect profiles create` ```command theme={null} prefect profiles create [OPTIONS] NAME ``` Create a new profile. \[required] Copy an existing profile. ## `prefect profiles use` ```command theme={null} prefect profiles use [OPTIONS] NAME ``` Set the given profile to active. \[required] ## `prefect profiles delete` ```command theme={null} prefect profiles delete [OPTIONS] NAME ``` Delete the given profile. \[required] ## `prefect profiles rename` ```command theme={null} prefect profiles rename [OPTIONS] NAME NEW_NAME ``` Change the name of a profile. \[required] \[required] ## `prefect profiles inspect` ```command theme={null} prefect profiles inspect [OPTIONS] [NAME] ``` Display settings from a given profile; defaults to active. Name of profile to inspect; defaults to active profile. Specify an output format. Currently supports: json ## `prefect profiles populate-defaults` ```command theme={null} prefect profiles populate-defaults [OPTIONS] ``` Populate the profiles configuration with default base profiles, preserving existing user profiles. # Source: https://docs.prefect.io/v3/api-ref/cli/sdk # `prefect sdk` ```command theme={null} prefect sdk [OPTIONS] COMMAND [ARGS]... ``` Manage Prefect SDKs. (beta) ## `prefect sdk generate` ```command theme={null} prefect sdk generate [OPTIONS] ``` (beta) Generate a typed Python SDK from workspace deployments. The generated SDK provides IDE autocomplete and type checking for your deployments. 
Requires an active Prefect API connection (use `prefect cloud login` or configure PREFECT\_API\_URL).  Examples: Generate SDK for all deployments: \$ prefect sdk generate --output ./my\_sdk.py Generate SDK for specific flows: \$ prefect sdk generate --output ./my\_sdk.py --flow my-etl-flow Generate SDK for specific deployments: \$ prefect sdk generate --output ./my\_sdk.py --deployment my-flow/production Output file path for the generated SDK. Filter to specific flow(s). Can be specified multiple times. Filter to specific deployment(s). Can be specified multiple times. Use 'flow-name/deployment-name' format for exact matching. # Source: https://docs.prefect.io/v3/api-ref/cli/server # `prefect server` ```command theme={null} prefect server [OPTIONS] COMMAND [ARGS]... ``` Start a Prefect server instance and interact with the database ## `prefect server start` ```command theme={null} prefect server start [OPTIONS] ``` Start a Prefect server instance Only run the webserver API and UI Run the server in the background Number of worker processes to run. Only runs the webserver API and UI ## `prefect server status` ```command theme={null} prefect server status [OPTIONS] ``` Check the status of the Prefect server. Wait for the server to become available before returning. Maximum number of seconds to wait when using --wait. A value of 0 means wait indefinitely. Specify an output format. Currently supports: json ## `prefect server stop` ```command theme={null} prefect server stop [OPTIONS] ``` Stop a Prefect server instance running in the background ## `prefect server database` ```command theme={null} prefect server database [OPTIONS] COMMAND [ARGS]... ``` Interact with the database. 
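Putting the server commands above together, a typical local workflow might look like the following sketch (assumes a default local profile; `database upgrade` may prompt for confirmation):

```bash theme={null}
# Start a local Prefect server in the background
prefect server start --background

# Block until the API is reachable, then print status as JSON
prefect server status --wait --output json

# Apply any pending database migrations
prefect server database upgrade

# Stop the background server when finished
prefect server stop
```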
### `prefect server database reset` ```command theme={null} prefect server database reset [OPTIONS] ``` Drop and recreate all Prefect database tables ### `prefect server database upgrade` ```command theme={null} prefect server database upgrade [OPTIONS] ``` Upgrade the Prefect database The revision to pass to `alembic upgrade`. If not provided, runs all migrations. Flag to show what migrations would be made without applying them. Will emit SQL statements to stdout. ### `prefect server database downgrade` ```command theme={null} prefect server database downgrade [OPTIONS] ``` Downgrade the Prefect database The revision to pass to `alembic downgrade`. If not provided, downgrades to the most recent revision. Use 'base' to run all migrations. Flag to show what migrations would be made without applying them. Will emit SQL statements to stdout. ### `prefect server database revision` ```command theme={null} prefect server database revision [OPTIONS] ``` Create a new migration for the Prefect database A message to describe the migration. ### `prefect server database stamp` ```command theme={null} prefect server database stamp [OPTIONS] REVISION ``` Stamp the revision table with the given revision; don't run any migrations \[required] ## `prefect server services` ```command theme={null} prefect server services [OPTIONS] COMMAND [ARGS]... ``` Interact with server loop services. ### `prefect server services manager` ```command theme={null} prefect server services manager [OPTIONS] ``` This is an internal entrypoint used by `prefect server services start --background`. Users do not call this directly. We do everything in sync so that the child won't exit until the user kills it. ### `prefect server services list-services` ```command theme={null} prefect server services list-services [OPTIONS] ``` List all available services and their status. 
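As an illustrative sequence, the service subcommands can be combined to run and inspect background services (assumes a configured Prefect API):

```bash theme={null}
# Start all enabled services in the background
prefect server services start --background

# List available services and whether they are running
prefect server services ls

# Stop any background services that were started
prefect server services stop
```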
### `prefect server services ls` ```command theme={null} prefect server services ls [OPTIONS] ``` List all available services and their status. ### `prefect server services start-services` ```command theme={null} prefect server services start-services [OPTIONS] ``` Start all enabled Prefect services in one process. Run the services in the background ### `prefect server services start` ```command theme={null} prefect server services start [OPTIONS] ``` Start all enabled Prefect services in one process. Run the services in the background ### `prefect server services stop-services` ```command theme={null} prefect server services stop-services [OPTIONS] ``` Stop any background Prefect services that were started. ### `prefect server services stop` ```command theme={null} prefect server services stop [OPTIONS] ``` Stop any background Prefect services that were started. # Source: https://docs.prefect.io/v3/api-ref/cli/shell # `prefect shell` ```command theme={null} prefect shell [OPTIONS] COMMAND [ARGS]... ``` Serve and watch shell commands as Prefect flows. ## `prefect shell watch` ```command theme={null} prefect shell watch [OPTIONS] COMMAND ``` Executes a shell command and observes it as a Prefect flow. \[required] Log the output of the command to Prefect logs. Name of the flow run. Name of the flow. Stream the output of the command. Optional tags for the flow run. ## `prefect shell serve` ```command theme={null} prefect shell serve [OPTIONS] COMMAND ``` Creates and serves a Prefect deployment that runs a specified shell command according to a cron schedule or ad hoc. This function allows users to integrate shell command execution into Prefect workflows seamlessly. It provides options for scheduled execution via cron expressions, flow and deployment naming for better management, and the application of tags for easier categorization and filtering within the Prefect UI. 
Additionally, it supports streaming command output to Prefect logs, setting concurrency limits to control flow execution, and optionally running the deployment once for ad-hoc tasks. \[required] Name of the flow Name of the deployment Tag for the deployment (can be provided multiple times) Stream the output of the command Cron schedule for the flow Timezone for the schedule The maximum number of flow runs that can execute at the same time Run the agent loop once, instead of forever. # Source: https://docs.prefect.io/v3/api-ref/cli/task # `prefect task` ```command theme={null} prefect task [OPTIONS] COMMAND [ARGS]... ``` Work with task scheduling. ## `prefect task serve` ```command theme={null} prefect task serve [OPTIONS] [ENTRYPOINTS]... ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. The paths to one or more tasks, in the form of `./path/to/file.py:task_func_name`. The module(s) to import the tasks from. The maximum number of tasks that can be run concurrently. Defaults to 10. # Source: https://docs.prefect.io/v3/api-ref/cli/task-run # `prefect task-run` ```command theme={null} prefect task-run [OPTIONS] COMMAND [ARGS]... ``` View and inspect task runs. ## `prefect task-run inspect` ```command theme={null} prefect task-run inspect [OPTIONS] ID ``` View details about a task run. \[required] Open the task run in a web browser. Specify an output format. Currently supports: json ## `prefect task-run ls` ```command theme={null} prefect task-run ls [OPTIONS] ``` View recent task runs Name of the task Maximum number of task runs to list Name of the task run's state Type of the task run's state ## `prefect task-run logs` ```command theme={null} prefect task-run logs [OPTIONS] ID ``` View logs for a task run. \[required] Show the first 20 logs instead of all logs. Number of logs to show when using the --head or --tail flag. If None, defaults to 20. 
Reverse the logs order to print the most recent logs first Show the last 20 logs instead of all logs. # Source: https://docs.prefect.io/v3/api-ref/cli/variable # `prefect variable` ```command theme={null} prefect variable [OPTIONS] COMMAND [ARGS]... ``` Manage variables. ## `prefect variable ls` ```command theme={null} prefect variable ls [OPTIONS] ``` List variables. The maximum number of variables to return. Specify an output format. Currently supports: json ## `prefect variable inspect` ```command theme={null} prefect variable inspect [OPTIONS] NAME ``` View details about a variable. \[required] Specify an output format. Currently supports: json ## `prefect variable get` ```command theme={null} prefect variable get [OPTIONS] NAME ``` Get a variable's value. \[required] ## `prefect variable set` ```command theme={null} prefect variable set [OPTIONS] NAME VALUE ``` Set a variable. If the variable already exists, use `--overwrite` to update it. \[required] \[required] Overwrite the variable if it already exists. Tag to associate with the variable. ## `prefect variable unset` ```command theme={null} prefect variable unset [OPTIONS] NAME ``` Unset a variable. \[required] ## `prefect variable delete` ```command theme={null} prefect variable delete [OPTIONS] NAME ``` Delete a variable. \[required] # Source: https://docs.prefect.io/v3/api-ref/cli/version # `prefect version` ```command theme={null} prefect version [OPTIONS] ``` Get the current Prefect version and integration information. Omit integration information # Source: https://docs.prefect.io/v3/api-ref/cli/work-pool # `prefect work-pool` ```command theme={null} prefect work-pool [OPTIONS] COMMAND [ARGS]... ``` Manage work pools. ## `prefect work-pool create` ```command theme={null} prefect work-pool create [OPTIONS] NAME ``` Create a new work pool or update an existing one.  
Examples:  Create a Kubernetes work pool in a paused state:  \$ prefect work-pool create "my-pool" --type kubernetes --paused  Create a Docker work pool with a custom base job template:  \$ prefect work-pool create "my-pool" --type docker --base-job-template ./base-job-template.json  Update an existing work pool:  \$ prefect work-pool create "existing-pool" --base-job-template ./base-job-template.json --overwrite The name of the work pool. \[required] The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. Whether or not to create the work pool in a paused state. The type of work pool to create. Whether or not to use the created work pool as the local default for deployment. Whether or not to provision infrastructure for the work pool if supported for the given work pool type. Whether or not to overwrite an existing work pool with the same name. ## `prefect work-pool ls` ```command theme={null} prefect work-pool ls [OPTIONS] ``` List work pools.  Examples: \$ prefect work-pool ls Show additional information about work pools. ## `prefect work-pool inspect` ```command theme={null} prefect work-pool inspect [OPTIONS] NAME ``` Inspect a work pool.  Examples: \$ prefect work-pool inspect "my-pool" \$ prefect work-pool inspect "my-pool" --output json The name of the work pool to inspect. \[required] Specify an output format. Currently supports: json ## `prefect work-pool pause` ```command theme={null} prefect work-pool pause [OPTIONS] NAME ``` Pause a work pool.  Examples: \$ prefect work-pool pause "my-pool" The name of the work pool to pause. \[required] ## `prefect work-pool resume` ```command theme={null} prefect work-pool resume [OPTIONS] NAME ``` Resume a work pool.  Examples: \$ prefect work-pool resume "my-pool" The name of the work pool to resume. 
\[required] ## `prefect work-pool update` ```command theme={null} prefect work-pool update [OPTIONS] NAME ``` Update a work pool.  Examples: \$ prefect work-pool update "my-pool" The name of the work pool to update. \[required] The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. If None, the base job template will not be modified. The concurrency limit for the work pool. If None, the concurrency limit will not be modified. The description for the work pool. If None, the description will not be modified. ## `prefect work-pool provision-infrastructure` ```command theme={null} prefect work-pool provision-infrastructure [OPTIONS] NAME ``` Provision infrastructure for a work pool.  Examples: \$ prefect work-pool provision-infrastructure "my-pool" \$ prefect work-pool provision-infra "my-pool" The name of the work pool to provision infrastructure for. \[required] ## `prefect work-pool provision-infra` ```command theme={null} prefect work-pool provision-infra [OPTIONS] NAME ``` Provision infrastructure for a work pool.  Examples: \$ prefect work-pool provision-infrastructure "my-pool" \$ prefect work-pool provision-infra "my-pool" The name of the work pool to provision infrastructure for. \[required] ## `prefect work-pool delete` ```command theme={null} prefect work-pool delete [OPTIONS] NAME ``` Delete a work pool.  Examples: \$ prefect work-pool delete "my-pool" The name of the work pool to delete. \[required] ## `prefect work-pool set-concurrency-limit` ```command theme={null} prefect work-pool set-concurrency-limit [OPTIONS] NAME CONCURRENCY_LIMIT ``` Set the concurrency limit for a work pool.  Examples: \$ prefect work-pool set-concurrency-limit "my-pool" 10 The name of the work pool to update. \[required] The new concurrency limit for the work pool. 
\[required] ## `prefect work-pool clear-concurrency-limit` ```command theme={null} prefect work-pool clear-concurrency-limit [OPTIONS] NAME ``` Clear the concurrency limit for a work pool.  Examples: \$ prefect work-pool clear-concurrency-limit "my-pool" The name of the work pool to update. \[required] ## `prefect work-pool get-default-base-job-template` ```command theme={null} prefect work-pool get-default-base-job-template [OPTIONS] ``` Get the default base job template for a given work pool type.  Examples: \$ prefect work-pool get-default-base-job-template --type kubernetes The type of work pool for which to get the default base job template. If set, write the output to a file. ## `prefect work-pool preview` ```command theme={null} prefect work-pool preview [OPTIONS] [NAME] ``` Preview the work pool's scheduled work for all queues.  Examples: \$ prefect work-pool preview "my-pool" --hours 24 The name or ID of the work pool to preview The number of hours to look ahead; defaults to 1 hour ## `prefect work-pool storage` ```command theme={null} prefect work-pool storage [OPTIONS] COMMAND [ARGS]... ``` EXPERIMENTAL: Manage work pool storage. ### `prefect work-pool storage inspect` ```command theme={null} prefect work-pool storage inspect [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Inspect the storage configuration for a work pool. The name of the work pool to display storage configuration for. \[required] Specify an output format. Currently supports: json **Example:** \$ prefect work-pool storage inspect "my-pool" \$ prefect work-pool storage inspect "my-pool" --output json ### `prefect work-pool storage configure` ```command theme={null} prefect work-pool storage configure [OPTIONS] COMMAND [ARGS]... ``` EXPERIMENTAL: Configure work pool storage. #### `prefect work-pool storage configure s3` ```command theme={null} prefect work-pool storage configure s3 [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Configure AWS S3 storage for a work pool.  
Examples: \$ prefect work-pool storage configure s3 "my-pool" --bucket my-bucket --aws-credentials-block-name my-credentials The name of the work pool to configure storage for. \[required] The name of the S3 bucket to use. The name of the AWS credentials block to use. #### `prefect work-pool storage configure gcs` ```command theme={null} prefect work-pool storage configure gcs [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Configure Google Cloud storage for a work pool.  Examples: \$ prefect work-pool storage configure gcs "my-pool" --bucket my-bucket --gcp-credentials-block-name my-credentials The name of the work pool to configure storage for. \[required] The name of the Google Cloud Storage bucket to use. The name of the Google Cloud credentials block to use. #### `prefect work-pool storage configure azure-blob-storage` ```command theme={null} prefect work-pool storage configure azure-blob-storage [OPTIONS] WORK_POOL_NAME ``` EXPERIMENTAL: Configure Azure Blob Storage for a work pool.  Examples: \$ prefect work-pool storage configure azure-blob-storage "my-pool" --container my-container --azure-blob-storage-credentials-block-name my-credentials The name of the work pool to configure storage for. \[required] The name of the Azure Blob Storage container to use. The name of the Azure Blob Storage credentials block to use. # Source: https://docs.prefect.io/v3/api-ref/cli/work-queue # `prefect work-queue` ```command theme={null} prefect work-queue [OPTIONS] COMMAND [ARGS]... ``` Manage work queues. ## `prefect work-queue create` ```command theme={null} prefect work-queue create [OPTIONS] NAME ``` Create a work queue. The unique name to assign this work queue \[required] The concurrency limit to set on the queue. The name of the work pool to create the work queue in. 
The associated priority for the created work queue ## `prefect work-queue set-concurrency-limit` ```command theme={null} prefect work-queue set-concurrency-limit [OPTIONS] NAME LIMIT ``` Set a concurrency limit on a work queue. The name or ID of the work queue \[required] The concurrency limit to set on the queue. \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue clear-concurrency-limit` ```command theme={null} prefect work-queue clear-concurrency-limit [OPTIONS] NAME ``` Clear any concurrency limits from a work queue. The name or ID of the work queue to clear \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue pause` ```command theme={null} prefect work-queue pause [OPTIONS] NAME ``` Pause a work queue. The name or ID of the work queue to pause \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue resume` ```command theme={null} prefect work-queue resume [OPTIONS] NAME ``` Resume a paused work queue. The name or ID of the work queue to resume \[required] The name of the work pool that the work queue belongs to. ## `prefect work-queue inspect` ```command theme={null} prefect work-queue inspect [OPTIONS] [NAME] ``` Inspect a work queue by ID. The name or ID of the work queue to inspect The name of the work pool that the work queue belongs to. Specify an output format. Currently supports: json ## `prefect work-queue ls` ```command theme={null} prefect work-queue ls [OPTIONS] ``` View all work queues. Display more information. Will match work queues with names that start with the specified prefix string The name of the work pool containing the work queues to list. ## `prefect work-queue preview` ```command theme={null} prefect work-queue preview [OPTIONS] [NAME] ``` Preview a work queue. The name or ID of the work queue to preview The number of hours to look ahead; defaults to 1 hour The name of the work pool that the work queue belongs to. 
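A sketch of a typical queue lifecycle, assuming an existing work pool; `my-pool` and `my-queue` are placeholder names, and `--pool` is assumed to be the spelling of the work pool option described above:

```bash theme={null}
# Create a queue inside an existing work pool
prefect work-queue create my-queue --pool my-pool

# Cap concurrent runs from the queue at 5
prefect work-queue set-concurrency-limit my-queue 5 --pool my-pool

# Pause and later resume the queue
prefect work-queue pause my-queue --pool my-pool
prefect work-queue resume my-queue --pool my-pool

# Inspect the queue as JSON
prefect work-queue inspect my-queue --pool my-pool --output json
```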
## `prefect work-queue delete` ```command theme={null} prefect work-queue delete [OPTIONS] NAME ``` Delete a work queue by ID. The name or ID of the work queue to delete \[required] The name of the work pool containing the work queue to delete. ## `prefect work-queue read-runs` ```command theme={null} prefect work-queue read-runs [OPTIONS] NAME ``` Get runs in a work queue. Note that this will trigger an artificial poll of the work queue. The name or ID of the work queue to poll \[required] The name of the work pool containing the work queue to poll. # Source: https://docs.prefect.io/v3/api-ref/cli/worker # `prefect worker` ```command theme={null} prefect worker [OPTIONS] COMMAND [ARGS]... ``` Start and interact with workers. ## `prefect worker start` ```command theme={null} prefect worker start [OPTIONS] ``` Start a worker process to poll a work pool for flow runs. The name to give to the started worker. If not provided, a unique name will be generated. The work pool the started worker should poll. One or more work queue names for the worker to pull from. If not provided, the worker will pull from all work queues in the work pool. The type of worker to start. If not provided, the worker type will be inferred from the work pool. Number of seconds to look into the future for scheduled flow runs. Only run worker polling once. By default, the worker runs forever. Maximum number of flow runs to execute concurrently. Start a healthcheck server for the worker. Install policy to use workers from Prefect integration packages. The path to a JSON file containing the base job template to use. If unspecified, Prefect will use the default base job template for the given worker type. If the work pool already exists, this will be ignored. # Artifact and asset events Source: https://docs.prefect.io/v3/api-ref/events/artifact-asset-events Events emitted when artifacts are created or updated and when assets are referenced or materialized. 
Artifact events track artifact creation and updates. Asset events track data lineage through upstream references and downstream materializations. ## Artifact events ### `prefect.artifact.created` Emitted when a new artifact is created. #### Resource | Label | Description | | ----------------------- | ------------------------- | | `prefect.resource.id` | `prefect.artifact.{uuid}` | | `prefect.resource.name` | Artifact key (when set) | #### Related resources This event has no related resources. #### Payload | Field | Type | Description | | ------------- | -------------- | --------------------------------------------------------------------- | | `key` | string or null | Artifact key | | `type` | string or null | Artifact type (for example, `markdown`, `table`, `progress`, `image`) | | `description` | string or null | Artifact description | ### `prefect.artifact.updated` Emitted when an existing artifact is updated. #### Resource | Label | Description | | --------------------- | ------------------------- | | `prefect.resource.id` | `prefect.artifact.{uuid}` | #### Related resources This event has no related resources. #### Payload The payload contains the fields that were updated, serialized from the update model. Common fields include `data`, `description`, and `type`. ## Asset events For more on assets and data lineage, see [Assets](/v3/concepts/assets). ### `prefect.asset.referenced` Emitted for each upstream asset referenced during a task execution. One event is emitted per upstream asset. 
#### Resource | Label | Description | | --------------------------- | ------------------------------------------------------------------------------- | | `prefect.resource.id` | Asset key (the unique identifier for the asset, for example `s3://bucket/path`) | | `prefect.resource.name` | Asset name (when set via asset properties) | | `prefect.asset.description` | Asset description (when set) | | `prefect.asset.url` | Asset URL (when set) | | `prefect.asset.owners` | JSON-encoded list of asset owners (when set) | #### Related resources This event has no related resources. #### Payload This event has no payload. ### `prefect.asset.materialization.succeeded` Emitted when a downstream asset is successfully materialized (the task that produces it completes). #### Resource Same labels as [`prefect.asset.referenced`](#prefectassetreferenced). #### Related resources | Resource ID pattern | Role | When present | | -------------------------- | ----------------------- | -------------------------------------------------------- | | Asset key | `asset` | One entry per upstream asset (both direct and inherited) | | Materialized-by identifier | `asset-materialized-by` | When a `materialized_by` identifier was specified | #### Payload User-provided materialization metadata, if any was supplied via the asset's `materialization_metadata` parameter. This is an open-ended dictionary. ### `prefect.asset.materialization.failed` Emitted when a downstream asset fails to materialize (the task that produces it fails). Same structure as [`prefect.asset.materialization.succeeded`](#prefectassetmaterializationsucceeded). #### Resource Same labels as [`prefect.asset.referenced`](#prefectassetreferenced). #### Related resources Same as [`prefect.asset.materialization.succeeded`](#prefectassetmaterializationsucceeded). #### Payload Same as [`prefect.asset.materialization.succeeded`](#prefectassetmaterializationsucceeded). 
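Putting the tables above together, a materialization event might look roughly like this sketch (the asset key, names, related entries, and metadata values are illustrative):

```json theme={null}
{
  "event": "prefect.asset.materialization.succeeded",
  "resource": {
    "prefect.resource.id": "s3://bucket/daily-orders",
    "prefect.resource.name": "daily-orders",
    "prefect.asset.description": "Daily orders table",
    "prefect.asset.url": "https://example.com/assets/daily-orders",
    "prefect.asset.owners": "[\"data-team\"]"
  },
  "related": [
    {
      "prefect.resource.id": "s3://bucket/raw-orders",
      "prefect.resource.role": "asset"
    }
  ],
  "payload": {
    "row_count": 1024
  }
}
```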
# Automation events Source: https://docs.prefect.io/v3/api-ref/events/automation-events Events emitted when automations trigger, resolve, and execute actions. Automation events track the lifecycle of automation triggers and the actions they execute. For more on automations, see [Automations](/v3/concepts/automations). ## Trigger state events ### `prefect.automation.triggered` Emitted when an automation's trigger condition is met and the automation enters the triggered state. #### Resource | Label | Description | | ----------------------- | ------------------------------------------------------------------------------- | | `prefect.resource.id` | `prefect.automation.{uuid}` | | `prefect.resource.name` | Automation name | | `prefect.posture` | Trigger posture: `Reactive`, `Proactive`, or `Metric` (for event triggers only) | #### Related resources | Resource ID pattern | Role | When present | | ---------------------- | ------------------ | ---------------------------------------------- | | `prefect.event.{uuid}` | `triggering-event` | When the trigger was fired by a specific event | #### Payload | Field | Type | Description | | ------------------- | -------------- | ------------------------------------------------------------------------------- | | `triggering_labels` | object | Labels from the triggering event that matched the trigger's `for_each` criteria | | `triggering_event` | object or null | Full serialized event that caused the trigger to fire | ### `prefect.automation.resolved` Emitted when an automation's trigger condition is no longer met and the automation returns to the resolved state (for example, a proactive trigger that previously fired because events stopped now sees events resume). #### Resource Same as [`prefect.automation.triggered`](#prefectautomationtriggered). #### Related resources Same as [`prefect.automation.triggered`](#prefectautomationtriggered). #### Payload Same as [`prefect.automation.triggered`](#prefectautomationtriggered). 
## Action lifecycle events ### `prefect.automation.action.triggered` Emitted when an automation action begins execution. #### Resource | Label | Description | | ----------------------- | --------------------------------------------------------------------- | | `prefect.resource.id` | `prefect.automation.{uuid}` | | `prefect.resource.name` | Automation name | | `prefect.trigger-type` | Trigger type (for example, `event`, `compound`, `sequence`, `metric`) | | `prefect.posture` | Trigger posture (for event triggers only) | #### Related resources | Resource ID pattern | Role | When present | | ---------------------- | ---------------------------- | -------------------------------------------------------------------------------------------- | | `prefect.event.{uuid}` | `automation-triggered-event` | Links to the `automation.triggered` or `automation.resolved` event that prompted this action | | `prefect.event.{uuid}` | `triggering-event` | The original event that caused the automation to fire | #### Payload | Field | Type | Description | | -------------- | ------- | -------------------------------------------------------------------------------- | | `action_index` | integer | Index of the action within the automation's action list | | `action_type` | string | Action type (for example, `run-deployment`, `send-notification`, `call-webhook`) | | `invocation` | string | Unique ID for this action invocation | ### `prefect.automation.action.executed` Emitted when an automation action completes successfully. Uses the `follows` field to link back to the corresponding `action.triggered` event. #### Resource Same as [`prefect.automation.action.triggered`](#prefectautomationactiontriggered). #### Related resources Same as [`prefect.automation.action.triggered`](#prefectautomationactiontriggered). 
#### Payload | Field | Type | Description | | -------------- | ------- | ------------------- | | `action_index` | integer | Index of the action | | `action_type` | string | Action type | | `invocation` | string | Invocation ID | Additional fields vary by action type and include details about the action's result (for example, the flow run ID created by a `run-deployment` action). ### `prefect.automation.action.failed` Emitted when an automation action fails. Uses the `follows` field to link back to the corresponding `action.triggered` event. #### Resource Same as [`prefect.automation.action.triggered`](#prefectautomationactiontriggered). #### Related resources Same as [`prefect.automation.action.triggered`](#prefectautomationactiontriggered). #### Payload | Field | Type | Description | | -------------- | ------- | ------------------------------------ | | `action_index` | integer | Index of the action | | `action_type` | string | Action type | | `invocation` | string | Invocation ID | | `reason` | string | Description of why the action failed | ## Prefect Cloud automation events The following events are only available in Prefect Cloud. Prefect Cloud emits its own automation events using the `prefect-cloud` namespace. These are structurally identical to the OSS events above but use different event names. ### `prefect-cloud.automation.triggered` Cloud equivalent of `prefect.automation.triggered`. Emitted when an automation enters the triggered state. ### `prefect-cloud.automation.resolved` Cloud equivalent of `prefect.automation.resolved`. Emitted when an automation returns to the resolved state. ### `prefect-cloud.automation.action.triggered` Cloud equivalent of `prefect.automation.action.triggered`. ### `prefect-cloud.automation.action.executed` Cloud equivalent of `prefect.automation.action.executed`. ### `prefect-cloud.automation.action.failed` Cloud equivalent of `prefect.automation.action.failed`. 
### `prefect-cloud.automation.action.disabled` Emitted when an automation action is disabled (for example, after repeated failures). This event is specific to Prefect Cloud. # Prefect Cloud events Source: https://docs.prefect.io/v3/api-ref/events/cloud-events Events emitted exclusively by Prefect Cloud for audit logging, security, billing, webhooks, and incidents. All events on this page are only available in Prefect Cloud. These events use the `prefect-cloud` namespace and cover audit logging, security, managed execution, billing, webhooks, and incident management. For Cloud automation events, see [Automation events](/v3/api-ref/events/automation-events#prefect-cloud-automation-events). ## Audit log events Prefect Cloud emits audit events for all CRUD operations on account and workspace resources. These events follow the pattern `prefect-cloud.{resource-type}.{action}` and power the [audit log](/v3/how-to-guides/cloud/manage-users/audit-logs). ### Resource types | Resource type | Description | | ---------------------- | --------------------------------------- | | `account` | Account settings and configuration | | `account-membership` | User membership in an account | | `account-role` | Custom account-level roles | | `account-invitation` | Invitations to join an account | | `account-domain-names` | Account domain name configuration | | `account-settings` | Account-level settings | | `user` | User accounts | | `service-account` | Service accounts (bots) | | `api-key` | API keys for users and service accounts | | `team` | Teams within an account | | `team-membership` | User membership in a team | | `workspace` | Workspaces | | `workspace-role` | Custom workspace-level roles | | `workspace-access` | Workspace access grants | | `workspace-invitation` | Invitations to join a workspace | | `workspace-settings` | Workspace-level settings | | `automation` | Automation rules | | `webhook` | Webhook endpoints | | `incident` | Incidents | | `support-access` | Support account 
access grants | ### Actions | Action | Description | Example event | | ----------------- | --------------------------------------- | --------------------------------------------- | | `created` | Resource was created | `prefect-cloud.workspace.created` | | `updated` | Resource was modified | `prefect-cloud.automation.updated` | | `deleted` | Resource was removed | `prefect-cloud.api-key.deleted` | | `rotated` | Credential was rotated (API keys) | `prefect-cloud.api-key.rotated` | | `logged-in` | User logged in | `prefect-cloud.user.logged-in` | | `logged-out` | User logged out | `prefect-cloud.user.logged-out` | | `accepted` | Invitation was accepted | `prefect-cloud.account-invitation.accepted` | | `rejected` | Invitation was rejected | `prefect-cloud.workspace-invitation.rejected` | | `transferred-in` | Workspace transferred into an account | `prefect-cloud.workspace.transferred-in` | | `transferred-out` | Workspace transferred out of an account | `prefect-cloud.workspace.transferred-out` | ### Common resource and related resource structure All audit events use: | Label | Description | | ----------------------- | -------------------------------------- | | `prefect.resource.id` | `prefect-cloud.{resource-type}.{uuid}` | | `prefect.resource.name` | Resource name (when applicable) | Common related resources include the actor (user or service account that performed the action), the account, and the workspace (for workspace-scoped resources). ## Security events ### `prefect-cloud.request.access-denied.ip-allowlist` Emitted when an API request is denied because the source IP address is not in the account's IP allowlist. ## Managed execution events ### `prefect-cloud.managed-execution.used` Emitted when managed execution compute is consumed against the account's quota. ### `prefect-cloud.managed-execution.exceeded` Emitted when managed execution usage exceeds the account's quota. 
## Billing and subscription events ### `prefect-cloud.subscription.updated` Emitted when an account's subscription is updated. #### Resource | Label | Description | | --------------------- | ------------------------------ | | `prefect.resource.id` | `prefect-cloud.account.{uuid}` | #### Payload | Field | Type | Description | | ---------------------- | ------- | --------------------------------------------------------------------- | | `cancel_at_period_end` | boolean | Whether the subscription will cancel at the end of the billing period | ## Webhook events ### `prefect-cloud.webhook.received` Emitted when a webhook endpoint receives an HTTP request. #### Resource | Label | Description | | ----------------------- | ------------------------------ | | `prefect.resource.id` | `prefect-cloud.webhook.{uuid}` | | `prefect.resource.name` | Webhook name | ### `prefect-cloud.webhook.failed` Emitted when webhook processing fails (for example, the template could not be rendered or the resulting event was invalid). #### Resource Same as [`prefect-cloud.webhook.received`](#prefect-cloudwebhookreceived). ## Incident events ### `prefect-cloud.incident.declared` Emitted when an incident is declared. #### Resource | Label | Description | | ----------------------- | ------------------------------- | | `prefect.resource.id` | `prefect-cloud.incident.{uuid}` | | `prefect.resource.name` | Incident name | #### Payload | Field | Type | Description | | ---------- | ------ | --------------------------------------- | | `id` | string | Incident ID | | `name` | string | Incident name | | `status` | string | Incident status (for example, `active`) | | `severity` | string | Incident severity level | | `tags` | array | Tags associated with the incident | ### `prefect-cloud.incident.resolved` Emitted when an incident is resolved. #### Resource Same as [`prefect-cloud.incident.declared`](#prefect-cloudincidentdeclared). 
#### Payload Same as [`prefect-cloud.incident.declared`](#prefect-cloudincidentdeclared), with status reflecting the resolved state and `end_time` included. ### `prefect-cloud.incident.reopened` Emitted when a previously resolved incident is reopened. #### Resource Same as [`prefect-cloud.incident.declared`](#prefect-cloudincidentdeclared). ### `prefect-cloud.incident.comment.added` Emitted when a comment is added to an incident. #### Resource Same as [`prefect-cloud.incident.declared`](#prefect-cloudincidentdeclared). ### `prefect-cloud.incident.updated.{field}` Emitted when a specific incident field is updated. The `{field}` suffix identifies which field changed. #### Field variants | Event name | Description | | -------------------------------------------------- | ---------------------------------- | | `prefect-cloud.incident.updated.name` | Incident name changed | | `prefect-cloud.incident.updated.summary` | Incident summary changed | | `prefect-cloud.incident.updated.severity` | Incident severity changed | | `prefect-cloud.incident.updated.start_time` | Incident start time changed | | `prefect-cloud.incident.updated.related_resources` | Incident related resources changed | | `prefect-cloud.incident.updated.tags` | Incident tags changed | #### Resource Same as [`prefect-cloud.incident.declared`](#prefect-cloudincidentdeclared). # Concurrency events Source: https://docs.prefect.io/v3/api-ref/events/concurrency-events Events emitted when concurrency limit slots are acquired and released. Concurrency events track when slots are acquired and released for named concurrency limits. For more on concurrency limits, see [Global concurrency limits](/v3/concepts/global-concurrency-limits). ## `prefect.concurrency-limit.acquired` Emitted when concurrency slots are acquired for a named limit. 
### Resource | Label | Description | | ----------------------- | ---------------------------------------- | | `prefect.resource.id` | `prefect.concurrency-limit.{uuid}` | | `prefect.resource.name` | Concurrency limit name | | `slots-acquired` | Number of slots acquired in this request | | `limit` | Maximum number of slots for this limit | ### Related resources | Resource ID pattern | Role | When present | | ---------------------------------- | ------------------- | ------------------------------------------------------------------------------------------------ | | `prefect.concurrency-limit.{uuid}` | `concurrency-limit` | One entry per other limit in the same acquisition batch (when acquiring multiple limits at once) | ### Payload This event has no payload. ## `prefect.concurrency-limit.released` Emitted when concurrency slots are released for a named limit. Uses the `follows` field to link back to the corresponding `acquired` event. ### Resource Same as [`prefect.concurrency-limit.acquired`](#prefectconcurrency-limitacquired). ### Related resources Same as [`prefect.concurrency-limit.acquired`](#prefectconcurrency-limitacquired). ### Payload This event has no payload. ## Legacy v1 concurrency events These events are emitted by the legacy tag-based concurrency limit system. For new concurrency limits, use global concurrency limits which emit the events documented above. ### `prefect.concurrency-limit.v1.acquired` Emitted when a legacy tag-based concurrency slot is acquired. 
#### Resource | Label | Description | | ----------------------- | ------------------------------------- | | `prefect.resource.id` | `prefect.concurrency-limit.v1.{uuid}` | | `prefect.resource.name` | Tag name | | `limit` | Maximum number of slots | | `task_run_id` | ID of the task run acquiring the slot | #### Related resources | Resource ID pattern | Role | When present | | ------------------------------------- | ------------------- | ------------------------------------------------------- | | `prefect.concurrency-limit.v1.{uuid}` | `concurrency-limit` | One entry per other limit in the same acquisition batch | #### Payload This event has no payload. ### `prefect.concurrency-limit.v1.released` Emitted when a legacy tag-based concurrency slot is released. #### Resource Same as [`prefect.concurrency-limit.v1.acquired`](#prefectconcurrency-limitv1acquired). #### Related resources Same as [`prefect.concurrency-limit.v1.acquired`](#prefectconcurrency-limitv1acquired). #### Payload This event has no payload. # Deployment events Source: https://docs.prefect.io/v3/api-ref/events/deployment-events Events emitted when deployments are created, updated, deleted, or change status. Deployment events track the lifecycle of deployments, including creation, updates, deletion, and status transitions. These events are emitted server-side. For more on deployments, see [Deployments](/v3/concepts/deployments). ## `prefect.deployment.created` Emitted when a new deployment is created. 
### Resource | Label | Description | | ----------------------- | --------------------------- | | `prefect.resource.id` | `prefect.deployment.{uuid}` | | `prefect.resource.name` | Deployment name | ### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ---------------------------------------------------------------------- | | `prefect.flow.{uuid}` | `flow` | Always | | `prefect.work-queue.{uuid}` | `work-queue` | When a work queue is assigned | | `prefect.work-pool.{uuid}` | `work-pool` | When a work pool is assigned (includes `prefect.work-pool.type` label) | ### Payload This event has no payload. ## `prefect.deployment.updated` Emitted when one or more deployment fields are changed. ### Resource | Label | Description | | ----------------------- | --------------------------- | | `prefect.resource.id` | `prefect.deployment.{uuid}` | | `prefect.resource.name` | Deployment name | ### Related resources Same as [`prefect.deployment.created`](#prefectdeploymentcreated). ### Payload | Field | Type | Description | | ---------------- | ---------------- | --------------------------------------------------- | | `updated_fields` | array of strings | Names of fields that changed | | `updates` | object | Map of field name to `{"from": <old value>, "to": <new value>}` | ```json theme={null} { "occurred": "2026-03-31T18:30:00.000000Z", "event": "prefect.deployment.updated", "resource": { "prefect.resource.id": "prefect.deployment.b2c3d4e5-f6a7-8901-bcde-f12345678901", "prefect.resource.name": "my-etl-flow/production" }, "related": [ { "prefect.resource.id": "prefect.flow.a1b2c3d4-e5f6-7890-abcd-ef1234567890", "prefect.resource.name": "my-etl-flow", "prefect.resource.role": "flow" } ], "payload": { "updated_fields": ["is_schedule_active"], "updates": { "is_schedule_active": { "from": true, "to": false } } }, "id": "c3d4e5f6-a7b8-9012-cdef-123456789012" } ``` ## `prefect.deployment.deleted` Emitted when a deployment is deleted. 
### Resource | Label | Description | | ----------------------- | --------------------------- | | `prefect.resource.id` | `prefect.deployment.{uuid}` | | `prefect.resource.name` | Deployment name | ### Related resources Same as [`prefect.deployment.created`](#prefectdeploymentcreated). ### Payload This event has no payload. ## `prefect.deployment.{status}` Emitted when a deployment's readiness status changes. The `{status}` suffix is the kebab-case status value. ### Status variants | Event name | Description | | ------------------------------ | ----------------------------------------------------------------------------------- | | `prefect.deployment.ready` | Deployment is ready to create runs | | `prefect.deployment.not-ready` | Deployment is not ready (for example, no active schedule or no available work pool) | ### Resource | Label | Description | | ----------------------- | --------------------------- | | `prefect.resource.id` | `prefect.deployment.{uuid}` | | `prefect.resource.name` | Deployment name | ### Related resources Same as [`prefect.deployment.created`](#prefectdeploymentcreated). ### Payload This event has no payload. # Flow run events Source: https://docs.prefect.io/v3/api-ref/events/flow-run-events Events emitted during flow run state transitions and heartbeats. Flow run events track the lifecycle of every flow run, from scheduling through completion. They are emitted on every state transition and, optionally, as periodic heartbeats during execution. ## `prefect.flow-run.{state}` Emitted each time a flow run transitions to a new state. The `{state}` suffix is the state name (for example, `prefect.flow-run.Running` or `prefect.flow-run.Completed`). 
### State variants | State name | State type | Description | | ------------------------- | ---------- | ----------------------------------------------------- | | `Scheduled` | SCHEDULED | Run is scheduled for future execution | | `Late` | SCHEDULED | Scheduled run was not started on time | | `AwaitingConcurrencySlot` | SCHEDULED | Waiting for a concurrency slot | | `AwaitingRetry` | SCHEDULED | Waiting before a retry attempt | | `Pending` | PENDING | Run is ready to execute | | `Running` | RUNNING | Run is actively executing | | `Retrying` | RUNNING | Run is retrying after a failure | | `Completed` | COMPLETED | Run finished successfully | | `Failed` | FAILED | Run finished with an error | | `Crashed` | CRASHED | Run terminated unexpectedly (infrastructure failure) | | `Cancelled` | CANCELLED | Run was cancelled | | `Cancelling` | CANCELLING | Cancellation was requested but not yet confirmed | | `Paused` | PAUSED | Run is paused, waiting for input or manual resumption | | `Suspended` | PAUSED | Run is suspended and its infrastructure may be freed | ### Resource | Label | Description | | ------------------------- | ------------------------------------------------------------------------------ | | `prefect.resource.id` | `prefect.flow-run.{uuid}` | | `prefect.resource.name` | Flow run name (for example, `crimson-fox`) | | `prefect.run-count` | Number of times this run has been attempted | | `prefect.state-message` | Message associated with the state transition (truncated to 100,000 characters) | | `prefect.state-name` | State name (for example, `Running`) | | `prefect.state-timestamp` | ISO 8601 timestamp of the state transition | | `prefect.state-type` | State type enum value (for example, `RUNNING`, `COMPLETED`) | ### Related resources | Resource ID pattern | Role | When present | | ---------------------------------------------------------- | ------------ | ---------------------------------------------------------------------- | | `prefect.flow.{uuid}` | `flow` | 
Always | | `prefect.deployment.{uuid}` | `deployment` | When triggered by a deployment | | `prefect.work-queue.{uuid}` | `work-queue` | When a work queue is assigned | | `prefect.work-pool.{uuid}` | `work-pool` | When a work pool is assigned (includes `prefect.work-pool.type` label) | | `prefect.task-run.{uuid}` | `task-run` | When the flow run is a subflow called from a task | | `prefect.tag.{tag}` | `tag` | One entry per tag on the flow run or its flow/deployment | | `prefect.deployment.{uuid}` or `prefect.automation.{uuid}` | `creator` | Provenance: which deployment or automation created this run | ### Payload | Field | Type | Description | | ----------------- | -------------- | --------------------------------------------------------------------------- | | `intended.from` | string or null | State type of the initial state (null for the first transition) | | `intended.to` | string | State type of the validated (new) state | | `initial_state` | object or null | Previous state: `type`, `name`, `message`, and `pause_reschedule` if paused | | `validated_state` | object | New state: `type`, `name`, `message`, and `pause_reschedule` if paused | ```json theme={null} { "occurred": "2026-03-31T18:30:00.000000Z", "event": "prefect.flow-run.Running", "resource": { "prefect.resource.id": "prefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6", "prefect.resource.name": "crimson-fox", "prefect.run-count": "1", "prefect.state-message": "", "prefect.state-name": "Running", "prefect.state-timestamp": "2026-03-31T18:30:00.000000Z", "prefect.state-type": "RUNNING" }, "related": [ { "prefect.resource.id": "prefect.flow.a1b2c3d4-e5f6-7890-abcd-ef1234567890", "prefect.resource.name": "my-etl-flow", "prefect.resource.role": "flow" }, { "prefect.resource.id": "prefect.deployment.b2c3d4e5-f6a7-8901-bcde-f12345678901", "prefect.resource.name": "my-etl-flow/production", "prefect.resource.role": "deployment" }, { "prefect.resource.id": 
"prefect.work-pool.c3d4e5f6-a7b8-9012-cdef-123456789012", "prefect.resource.name": "my-k8s-pool", "prefect.work-pool.type": "kubernetes", "prefect.resource.role": "work-pool" }, { "prefect.resource.id": "prefect.tag.production", "prefect.resource.role": "tag" } ], "payload": { "intended": { "from": "PENDING", "to": "RUNNING" }, "initial_state": { "type": "PENDING", "name": "Pending" }, "validated_state": { "type": "RUNNING", "name": "Running" } }, "id": "f6a7b890-1234-5678-9012-abcdef345678", "follows": "d4e5f6a7-b890-1234-5678-9012abcdef34" } ``` ## `prefect.flow-run.heartbeat` Emitted periodically during flow run execution when heartbeat frequency is configured. Used for liveness detection, particularly with [zombie flow detection](/v3/advanced/detect-zombie-flows). ### Resource | Label | Description | | ----------------------- | ------------------------- | | `prefect.resource.id` | `prefect.flow-run.{uuid}` | | `prefect.resource.name` | Flow run name | | `prefect.version` | Prefect SDK version | ### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ------------------------------ | | `prefect.flow.{uuid}` | `flow` | Always | | `prefect.deployment.{uuid}` | `deployment` | When triggered by a deployment | | `prefect.tag.{tag}` | `tag` | One entry per tag | ### Payload This event has no payload. # Events reference Source: https://docs.prefect.io/v3/api-ref/events/index Reference for all events emitted by Prefect, Prefect Cloud, and integrations. This section catalogs every event that Prefect emits, organized by the resource type they concern. For a conceptual overview of events, resources, and related resources, see [Events](/v3/concepts/events). 
## Event model Every event follows this schema: | Field | Type | Required | Description | | ---------- | -------- | -------- | ------------------------------------------------------------------------- | | `occurred` | datetime | yes | When the event happened | | `event` | string | yes | Name of the event (for example, `prefect.flow-run.Running`) | | `resource` | object | yes | Primary [resource](#resources) this event concerns | | `related` | array | no | Additional [related resources](#related-resources) involved in this event | | `payload` | object | no | Open-ended data describing what happened | | `id` | UUID | yes | Client-provided identifier for this event | | `follows` | UUID | no | ID of a preceding event, used to establish ordering | ## Resources Every event has a primary resource represented as a set of string key-value labels. Every resource must include `prefect.resource.id`, a dot-delimited quasi-stable identifier like `prefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6`. Resources may also carry `prefect.resource.name` and any number of additional labels. ## Related resources Events may include related resources that provide context about other objects involved. Each related resource carries all the same labels as a resource, plus a required `prefect.resource.role` label describing its relationship to the event (for example, `flow`, `deployment`, `work-pool`, or `tag`). ## Namespace conventions Prefect uses dot-delimited event names with reserved prefixes for system-emitted events: | Prefix | Origin | | ---------------------- | -------------------------------------------- | | `prefect.*` | Prefect open source (OSS) and Prefect server | | `prefect-cloud.*` | Prefect Cloud only | | `prefect.docker.*` | prefect-docker integration | | `prefect.kubernetes.*` | prefect-kubernetes integration | | `prefect.ecs.*` | prefect-aws integration (ECS observer) | The `prefect.*` and `prefect-cloud.*` namespaces are reserved for events emitted by Prefect itself. 
Your own events can use any other namespace you like. For example, `acme.data-pipeline.completed` or `myteam.model.trained` are perfectly valid event names. See [custom event grammar](/v3/advanced/use-custom-event-grammar) for more on emitting your own events. ## Quick reference ### Orchestration events | Event | Description | Page | | ----------------------------- | --------------------------------- | ----------------------------------------------------------------------- | | `prefect.flow-run.{state}` | Flow run state transitions | [Flow run events](/v3/api-ref/events/flow-run-events) | | `prefect.flow-run.heartbeat` | Periodic flow run liveness signal | [Flow run events](/v3/api-ref/events/flow-run-events) | | `prefect.task-run.{state}` | Task run state transitions | [Task run events](/v3/api-ref/events/task-run-events) | | `prefect.deployment.created` | Deployment created | [Deployment events](/v3/api-ref/events/deployment-events) | | `prefect.deployment.updated` | Deployment fields changed | [Deployment events](/v3/api-ref/events/deployment-events) | | `prefect.deployment.deleted` | Deployment deleted | [Deployment events](/v3/api-ref/events/deployment-events) | | `prefect.deployment.{status}` | Deployment status transitions | [Deployment events](/v3/api-ref/events/deployment-events) | | `prefect.work-pool.{status}` | Work pool status transitions | [Work pool and queue events](/v3/api-ref/events/work-pool-queue-events) | | `prefect.work-pool.updated` | Work pool fields changed | [Work pool and queue events](/v3/api-ref/events/work-pool-queue-events) | | `prefect.work-queue.{status}` | Work queue status transitions | [Work pool and queue events](/v3/api-ref/events/work-pool-queue-events) | | `prefect.work-queue.updated` | Work queue fields changed | [Work pool and queue events](/v3/api-ref/events/work-pool-queue-events) | ### Execution events | Event | Description | Page | | ----------------------------------- | ------------------------------------ | 
------------------------------------------------------------------- |
| `prefect.worker.started` | Worker process started | [Worker and runner events](/v3/api-ref/events/worker-runner-events) |
| `prefect.worker.stopped` | Worker process stopped | [Worker and runner events](/v3/api-ref/events/worker-runner-events) |
| `prefect.worker.submitted-flow-run` | Worker submitted a flow run | [Worker and runner events](/v3/api-ref/events/worker-runner-events) |
| `prefect.worker.executed-flow-run` | Worker finished executing a flow run | [Worker and runner events](/v3/api-ref/events/worker-runner-events) |
| `prefect.runner.cancelled-flow-run` | Runner cancelled a flow run | [Worker and runner events](/v3/api-ref/events/worker-runner-events) |

### Data events

| Event | Description | Page |
| ----- | ----------- | ---- |
| `prefect.artifact.created` | Artifact created | [Artifact and asset events](/v3/api-ref/events/artifact-asset-events) |
| `prefect.artifact.updated` | Artifact updated | [Artifact and asset events](/v3/api-ref/events/artifact-asset-events) |
| `prefect.asset.referenced` | Upstream asset referenced | [Artifact and asset events](/v3/api-ref/events/artifact-asset-events) |
| `prefect.asset.materialization.succeeded` | Asset materialization succeeded | [Artifact and asset events](/v3/api-ref/events/artifact-asset-events) |
| `prefect.asset.materialization.failed` | Asset materialization failed | [Artifact and asset events](/v3/api-ref/events/artifact-asset-events) |

### Concurrency events

| Event | Description | Page |
| ----- | ----------- | ---- |
| `prefect.concurrency-limit.acquired` | Concurrency slots acquired | [Concurrency events](/v3/api-ref/events/concurrency-events) |
| `prefect.concurrency-limit.released` | Concurrency slots released | [Concurrency events](/v3/api-ref/events/concurrency-events) |

### Automation events

| Event | Description | Page |
| ----- | ----------- | ---- |
| `prefect.automation.triggered` | Automation trigger fired | [Automation events](/v3/api-ref/events/automation-events) |
| `prefect.automation.resolved` | Automation trigger resolved | [Automation events](/v3/api-ref/events/automation-events) |
| `prefect.automation.action.triggered` | Automation action started | [Automation events](/v3/api-ref/events/automation-events) |
| `prefect.automation.action.executed` | Automation action completed | [Automation events](/v3/api-ref/events/automation-events) |
| `prefect.automation.action.failed` | Automation action failed | [Automation events](/v3/api-ref/events/automation-events) |

### Infrastructure events

| Event | Description | Page |
| ----- | ----------- | ---- |
| `prefect.block.{type}.loaded` | Block loaded from server | [Infrastructure events](/v3/api-ref/events/infrastructure-events) |
| `prefect.flow-run.pull-step.executed` | Deployment pull step succeeded | [Infrastructure events](/v3/api-ref/events/infrastructure-events) |
| `prefect.flow-run.pull-step.failed` | Deployment pull step failed | [Infrastructure events](/v3/api-ref/events/infrastructure-events) |
| `prefect.workspace.transfer.started` | Workspace transfer started | [Infrastructure events](/v3/api-ref/events/infrastructure-events) |
| `prefect.workspace.transfer.completed` | Workspace transfer completed | [Infrastructure events](/v3/api-ref/events/infrastructure-events) |

### Prefect Cloud events

| Event | Description | Page |
| ----- | ----------- | ---- |
| `prefect-cloud.automation.*` | Cloud automation lifecycle | [Automation events](/v3/api-ref/events/automation-events) |
| `prefect-cloud.incident.*` | Incident management | [Cloud events](/v3/api-ref/events/cloud-events) |
| `prefect-cloud.webhook.*` | Webhook reception | [Cloud events](/v3/api-ref/events/cloud-events) |
| `prefect-cloud.{type}.{action}` | Audit log (CRUD operations) | [Cloud events](/v3/api-ref/events/cloud-events) |

### Integration events

| Event | Description | Page |
| ----- | ----------- | ---- |
| `prefect.docker.container.{status}` | Docker container status changes | [Infrastructure events](/v3/api-ref/events/infrastructure-events#integration-infrastructure-events) |
| `prefect.kubernetes.pod.{phase}` | Kubernetes pod phase changes | [Infrastructure events](/v3/api-ref/events/infrastructure-events#integration-infrastructure-events) |
| `prefect.ecs.task.{status}` | ECS task status changes | [Infrastructure events](/v3/api-ref/events/infrastructure-events#integration-infrastructure-events) |

## Using events with automations

Events are the foundation of [automations](/v3/concepts/automations). You can configure [event triggers](/v3/concepts/event-triggers) to match any event in this reference by its name, resource labels, or related resources. See [how to create automations](/v3/how-to-guides/automations/creating-automations) to get started.

# Infrastructure events

Source: https://docs.prefect.io/v3/api-ref/events/infrastructure-events

Events emitted for blocks, deployment pull steps, workspace transfers, and integration infrastructure (Docker, Kubernetes, ECS).

Infrastructure events cover block loading, deployment pull step execution, workspace resource transfers, and infrastructure status tracking from integration packages. For more on blocks, see [Blocks](/v3/concepts/blocks).
## Block events ### `prefect.block.{type}.loaded` Emitted when a block is loaded from the server. The `{type}` portion is the block type slug (for example, `prefect.block.slack-webhook.loaded` or `prefect.block.s3-bucket.loaded`). #### Resource | Label | Description | | ----------------------- | ------------------------------- | | `prefect.resource.id` | `prefect.block-document.{uuid}` | | `prefect.resource.name` | Block document name | #### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ------------ | | `prefect.block-type.{slug}` | `block-type` | Always | #### Payload This event has no payload. Block subclasses can override the `_event_method_called_resources()` method to customize the resource and related resources for their events. ## Deployment pull step events ### `prefect.flow-run.pull-step.executed` Emitted when a deployment pull step completes successfully during flow run infrastructure setup. #### Resource | Label | Description | | --------------------- | ------------------------- | | `prefect.resource.id` | `prefect.flow-run.{uuid}` | #### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ---------------------------- | | `prefect.deployment.{uuid}` | `deployment` | When the deployment is known | #### Payload The payload contains the serialized step definition (the step's fully qualified name and its input parameters as defined in `prefect.yaml`). ### `prefect.flow-run.pull-step.failed` Emitted when a deployment pull step fails during flow run infrastructure setup. #### Resource Same as [`prefect.flow-run.pull-step.executed`](#prefectflow-runpull-stepexecuted). #### Related resources Same as [`prefect.flow-run.pull-step.executed`](#prefectflow-runpull-stepexecuted). #### Payload Same as [`prefect.flow-run.pull-step.executed`](#prefectflow-runpull-stepexecuted). 
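For illustration, a pull-step failure event might look like the following. All identifiers are fabricated, and the payload assumes a hypothetical deployment whose `prefect.yaml` uses the `git_clone` pull step:

```json theme={null}
{
  "occurred": "2026-03-31T18:29:00.000000Z",
  "event": "prefect.flow-run.pull-step.failed",
  "resource": {
    "prefect.resource.id": "prefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6"
  },
  "related": [
    {
      "prefect.resource.id": "prefect.deployment.b2c3d4e5-f6a7-8901-bcde-f12345678901",
      "prefect.resource.role": "deployment"
    }
  ],
  "payload": {
    "prefect.deployments.steps.git_clone": {
      "repository": "https://github.com/example/flows.git",
      "branch": "main"
    }
  },
  "id": "f1a2b3c4-d5e6-7890-abcd-ef1234567890"
}
```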
## Workspace transfer events ### `prefect.workspace.transfer.started` Emitted when a workspace-to-workspace resource transfer begins (via `prefect transfer`). #### Resource | Label | Description | | ----------------------- | --------------------------------------------------------------------------------- | | `prefect.resource.id` | `prefect.workspace.transfer.{uuid}` | | `prefect.resource.name` | Description of the transfer direction (for example, `local → cloud/my-workspace`) | #### Related resources This event has no related resources. #### Payload | Field | Type | Description | | ----------------- | ------- | ------------------------------------- | | `source_profile` | string | Name of the source Prefect profile | | `target_profile` | string | Name of the target Prefect profile | | `source_url` | string | API URL of the source | | `target_url` | string | API URL of the target | | `total_resources` | integer | Total number of resources to transfer | ### `prefect.workspace.transfer.completed` Emitted when a workspace transfer completes. #### Resource Same as [`prefect.workspace.transfer.started`](#prefectworkspacetransferstarted). #### Related resources This event has no related resources. 
#### Payload | Field | Type | Description | | ----------------- | ------- | -------------------------------------------- | | `source_profile` | string | Name of the source Prefect profile | | `target_profile` | string | Name of the target Prefect profile | | `source_url` | string | API URL of the source | | `target_url` | string | API URL of the target | | `total_resources` | integer | Total number of resources transferred | | `succeeded` | integer | Number of resources transferred successfully | | `failed` | integer | Number of resources that failed to transfer | | `skipped` | integer | Number of resources skipped | ## Integration infrastructure events The following events are emitted by Prefect integration packages to track infrastructure status changes during flow run execution. Each requires the respective integration package to be installed. ### Docker events (prefect-docker) The Docker worker emits events when container status changes during flow run execution. #### `prefect.docker.container.{status}` Emitted when a Docker container's status changes. Events are chained using the `follows` field to track status progression. 
##### Status variants | Event name | Description | | ------------------------------------- | ---------------------------- | | `prefect.docker.container.created` | Container has been created | | `prefect.docker.container.running` | Container is running | | `prefect.docker.container.paused` | Container is paused | | `prefect.docker.container.restarting` | Container is restarting | | `prefect.docker.container.removing` | Container is being removed | | `prefect.docker.container.exited` | Container has exited | | `prefect.docker.container.dead` | Container is in a dead state | ##### Resource | Label | Description | | ----------------------- | ----------------------------------------- | | `prefect.resource.id` | `prefect.docker.container.{container-id}` | | `prefect.resource.name` | Container name | ##### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ---------------------------------------------- | | `prefect.flow-run.{uuid}` | `flow-run` | When available from worker configuration | | `prefect.flow.{uuid}` | `flow` | When available | | `prefect.deployment.{uuid}` | `deployment` | When available | | `prefect.work-pool.{uuid}` | `work-pool` | When the worker is attached to a work pool | | Worker resource | `worker` | Always (the worker that manages the container) | ##### Payload This event has no payload. #### `prefect.docker.container.creation-failed` Emitted when a Docker container fails to be created. Same resource and related resources as `prefect.docker.container.{status}`. ### Kubernetes events (prefect-kubernetes) The Kubernetes observer emits events when pod phase changes are detected for pods labeled with `prefect.io/flow-run-id`. #### `prefect.kubernetes.pod.{phase}` Emitted when a Kubernetes pod transitions to a new phase. Events use deterministic IDs based on the pod UID, phase, and restart count for deduplication. Events within 5 minutes of each other are chained using the `follows` field. 
##### Phase variants | Event name | Description | | ---------------------------------- | ----------------------------- | | `prefect.kubernetes.pod.pending` | Pod is in the Pending phase | | `prefect.kubernetes.pod.running` | Pod is in the Running phase | | `prefect.kubernetes.pod.succeeded` | Pod completed successfully | | `prefect.kubernetes.pod.failed` | Pod terminated with an error | | `prefect.kubernetes.pod.unknown` | Pod is in an unknown state | | `prefect.kubernetes.pod.evicted` | Pod was evicted from the node | ##### Resource | Label | Description | | ----------------------- | --------------------------------------- | | `prefect.resource.id` | `prefect.kubernetes.pod.{uid}` | | `prefect.resource.name` | Pod name | | `kubernetes.namespace` | Kubernetes namespace | | `kubernetes.reason` | Eviction reason (only for evicted pods) | ##### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ------------------------------------------------------------------------------------------- | | `prefect.flow-run.{uuid}` | `flow-run` | Always (from `prefect.io/flow-run-id` pod label) | | `prefect.deployment.{uuid}` | `deployment` | When `prefect.io/deployment-id` pod label is set | | `prefect.flow.{uuid}` | `flow` | When `prefect.io/flow-id` pod label is set | | `prefect.work-pool.{uuid}` | `work-pool` | When `prefect.io/work-pool-id` pod label is set | | Worker name | `worker` | When `prefect.io/worker-name` pod label is set (includes `prefect.worker-type: kubernetes`) | ##### Payload This event has no payload. 
```json theme={null} { "occurred": "2026-03-31T18:30:00.000000Z", "event": "prefect.kubernetes.pod.running", "resource": { "prefect.resource.id": "prefect.kubernetes.pod.a1b2c3d4-e5f6-7890-abcd-ef1234567890", "prefect.resource.name": "prefect-flow-run-abc123", "kubernetes.namespace": "prefect" }, "related": [ { "prefect.resource.id": "prefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6", "prefect.resource.role": "flow-run" }, { "prefect.resource.id": "prefect.work-pool.c3d4e5f6-a7b8-9012-cdef-123456789012", "prefect.resource.role": "work-pool" }, { "prefect.resource.id": "my-k8s-worker", "prefect.resource.role": "worker", "prefect.worker-type": "kubernetes" } ], "id": "b890c123-4567-89ab-cdef-0123456789ab", "follows": "a7890123-bcde-f456-7890-123456789abc" } ``` ### AWS ECS events (prefect-aws) The ECS observer emits events when ECS task status changes are detected via SQS for tasks tagged with `prefect.io/flow-run-id`. #### `prefect.ecs.task.{status}` Emitted when an ECS task transitions to a new status. Events use the AWS EventBridge event UUID for deterministic tracking. Events within 5 minutes of each other are chained using the `follows` field. 
##### Status variants | Event name | Description | | --------------------------------- | ---------------------------------------------------- | | `prefect.ecs.task.provisioning` | Task resources are being provisioned | | `prefect.ecs.task.pending` | Task is waiting to be placed on a container instance | | `prefect.ecs.task.activating` | Task is being activated | | `prefect.ecs.task.running` | Task is running | | `prefect.ecs.task.deactivating` | Task is being deactivated | | `prefect.ecs.task.stopping` | Task is being stopped | | `prefect.ecs.task.deprovisioning` | Task resources are being deprovisioned | | `prefect.ecs.task.stopped` | Task has stopped | | `prefect.ecs.task.deleted` | Task has been deleted | ##### Resource | Label | Description | | ----------------------- | ---------------------------------------- | | `prefect.resource.id` | `prefect.ecs.task.{task-id}` | | `ecs.taskArn` | Full ECS task ARN | | `ecs.clusterArn` | ECS cluster ARN (when available) | | `ecs.taskDefinitionArn` | ECS task definition ARN (when available) | ##### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ----------------------------------------------------------------------------------- | | `prefect.flow-run.{uuid}` | `flow-run` | Always (from `prefect.io/flow-run-id` task tag) | | `prefect.deployment.{uuid}` | `deployment` | When `prefect.io/deployment-id` task tag is set | | `prefect.flow.{uuid}` | `flow` | When `prefect.io/flow-id` task tag is set | | `prefect.work-pool.{uuid}` | `work-pool` | When `prefect.io/work-pool-id` task tag is set | | Worker name | `worker` | When `prefect.io/worker-name` task tag is set (includes `prefect.worker-type: ecs`) | ##### Payload This event has no payload. # Task run events Source: https://docs.prefect.io/v3/api-ref/events/task-run-events Events emitted during task run state transitions. Task run events track the lifecycle of every task run within a flow. 
They are emitted on every state transition, mirroring the structure of [flow run state events](/v3/api-ref/events/flow-run-events).

## `prefect.task-run.{state}`

Emitted each time a task run transitions to a new state. The `{state}` suffix is the state name (for example, `prefect.task-run.Running` or `prefect.task-run.Completed`).

Task state transitions are managed locally by the task engine, not proposed to the server like flow run states. The task engine emits these events and delivers them via the events system.

### State variants

| State name | State type | Description |
| ---------- | ---------- | ----------- |
| `Scheduled` | SCHEDULED | Task is scheduled for execution |
| `Late` | SCHEDULED | Scheduled task was not started on time |
| `AwaitingConcurrencySlot` | SCHEDULED | Waiting for a concurrency slot |
| `AwaitingRetry` | SCHEDULED | Waiting before a retry attempt |
| `Pending` | PENDING | Task is ready to execute |
| `Running` | RUNNING | Task is actively executing |
| `Retrying` | RUNNING | Task is retrying after a failure |
| `Completed` | COMPLETED | Task finished successfully |
| `Failed` | FAILED | Task finished with an error |
| `Crashed` | CRASHED | Task terminated unexpectedly |
| `Cancelled` | CANCELLED | Task was cancelled |
| `Cancelling` | CANCELLING | Cancellation was requested but not yet confirmed |
| `Paused` | PAUSED | Task is paused |
| `Suspended` | PAUSED | Task is suspended |

### Resource

| Label | Description |
| ----- | ----------- |
| `prefect.resource.id` | `prefect.task-run.{uuid}` |
| `prefect.resource.name` | Task run name |
| `prefect.run-count` | Number of times this task run has been attempted |
| `prefect.state-message` | Message associated with the state transition (truncated to 100,000 characters) |
| `prefect.state-name` | State name (for example, `Running`) |
| `prefect.state-timestamp` | ISO 8601 timestamp of the state transition |
| `prefect.state-type` | State type enum value (for example, `RUNNING`, `COMPLETED`) |
| `prefect.orchestration` | Always `client` (task state transitions are managed locally) |

### Related resources

| Resource ID pattern | Role | When present |
| ------------------- | ---- | ------------ |
| `prefect.tag.{tag}` | `tag` | One entry per tag on the task run |

Unlike flow run events, task run state events do not include the parent flow run, flow, or deployment as related resources. Task run context (including the parent flow run ID) is available in the event payload under `task_run`.

### Payload

| Field | Type | Description |
| ----- | ---- | ----------- |
| `intended.from` | string or null | State type of the initial state (null for the first transition) |
| `intended.to` | string | State type of the validated (new) state |
| `initial_state` | object or null | Previous state: `type`, `name`, `message`, `state_details` |
| `validated_state` | object | New state: `type`, `name`, `message`, `state_details`, and `data` (result metadata when result persistence is enabled) |
| `task_run` | object | Task run metadata including `name`, `task_key`, `dynamic_key`, `flow_run_id`, `tags`, and `task_inputs` |

```json theme={null}
{ "occurred": "2026-03-31T18:31:05.000000Z", "event": "prefect.task-run.Completed", "resource": { "prefect.resource.id": "prefect.task-run.a1b2c3d4-e5f6-7890-abcd-ef1234567890", "prefect.resource.name": "extract-data-0", "prefect.run-count": "1", "prefect.state-message": "", "prefect.state-name": "Completed", "prefect.state-timestamp": "2026-03-31T18:31:05.000000Z", "prefect.state-type": "COMPLETED", "prefect.orchestration": "client" }, "related": [ { "prefect.resource.id": "prefect.tag.etl", "prefect.resource.role": "tag" } ],
"payload": { "intended": { "from": "RUNNING", "to": "COMPLETED" }, "initial_state": { "type": "RUNNING", "name": "Running" }, "validated_state": { "type": "COMPLETED", "name": "Completed" }, "task_run": { "name": "extract-data-0", "task_key": "extract_data", "dynamic_key": "0", "flow_run_id": "e3755d32-cec5-42ca-9bcd-af236e308ba6", "tags": ["etl"] } }, "id": "b890c123-4567-89ab-cdef-0123456789ab" } ``` # Work pool and work queue events Source: https://docs.prefect.io/v3/api-ref/events/work-pool-queue-events Events emitted when work pools and work queues change status or are updated. Work pool and work queue events track readiness status transitions and field updates. These events are emitted server-side. For more on work pools, see [Work pools](/v3/concepts/work-pools). ## Work pool events ### `prefect.work-pool.{status}` Emitted when a work pool's status transitions. #### Status variants | Event name | Description | | ----------------------------- | ------------------------------------------------------- | | `prefect.work-pool.ready` | Work pool is ready to accept work | | `prefect.work-pool.not-ready` | Work pool is not ready (for example, no online workers) | | `prefect.work-pool.paused` | Work pool has been paused | #### Resource | Label | Description | | ------------------------ | --------------------------------------------------------------- | | `prefect.resource.id` | `prefect.work-pool.{uuid}` | | `prefect.resource.name` | Work pool name | | `prefect.work-pool.type` | Work pool type (for example, `kubernetes`, `process`, `docker`) | #### Related resources This event has no related resources. #### Payload This event has no payload. ### `prefect.work-pool.updated` Emitted when one or more work pool fields are changed (excluding status transitions, which emit `prefect.work-pool.{status}` instead). 
#### Resource

| Label | Description |
| ----- | ----------- |
| `prefect.resource.id` | `prefect.work-pool.{uuid}` |
| `prefect.resource.name` | Work pool name |
| `prefect.work-pool.type` | Work pool type |

#### Related resources

This event has no related resources.

#### Payload

| Field | Type | Description |
| ----- | ---- | ----------- |
| `updated_fields` | array of strings | Names of fields that changed |
| `updates` | object | Map of field name to `{"from": <old value>, "to": <new value>}` |

## Work queue events

### `prefect.work-queue.{status}`

Emitted when a work queue's status transitions.

#### Status variants

| Event name | Description |
| ---------- | ----------- |
| `prefect.work-queue.ready` | Work queue is ready to dispatch work |
| `prefect.work-queue.not-ready` | Work queue is not ready |
| `prefect.work-queue.paused` | Work queue has been paused |

#### Resource

| Label | Description |
| ----- | ----------- |
| `prefect.resource.id` | `prefect.work-queue.{uuid}` |
| `prefect.resource.name` | Work queue name |

#### Related resources

| Resource ID pattern | Role | When present |
| ------------------- | ---- | ------------ |
| `prefect.work-pool.{uuid}` | `work-pool` | When the work queue belongs to a work pool (includes `prefect.work-pool.type` label) |

#### Payload

This event has no payload.

### `prefect.work-queue.updated`

Emitted when one or more work queue fields are changed (excluding status transitions).
#### Resource

| Label | Description |
| ----- | ----------- |
| `prefect.resource.id` | `prefect.work-queue.{uuid}` |
| `prefect.resource.name` | Work queue name |

#### Related resources

| Resource ID pattern | Role | When present |
| ------------------- | ---- | ------------ |
| `prefect.work-pool.{uuid}` | `work-pool` | When the work queue belongs to a work pool (includes `prefect.work-pool.type` label) |

#### Payload

| Field | Type | Description |
| ----- | ---- | ----------- |
| `updated_fields` | array of strings | Names of fields that changed |
| `updates` | object | Map of field name to `{"from": <old value>, "to": <new value>}` |

# Worker and runner events

Source: https://docs.prefect.io/v3/api-ref/events/worker-runner-events

Events emitted by workers and runners during flow run execution.

Worker events track the lifecycle of worker processes and the flow runs they execute. Runner events cover flow run cancellation by locally-served deployments. For more on workers, see [Workers](/v3/concepts/workers).

## Worker events

### `prefect.worker.started`

Emitted when a worker process starts polling for work.

#### Resource

| Label | Description |
| ----- | ----------- |
| `prefect.resource.id` | `prefect.worker.{type}.{name-slug}` |
| `prefect.resource.name` | Worker name |
| `prefect.version` | Prefect SDK version |
| `prefect.worker-type` | Worker type (for example, `kubernetes`, `process`, `docker`) |

#### Related resources

| Resource ID pattern | Role | When present |
| ------------------- | ---- | ------------ |
| `prefect.work-pool.{uuid}` | `work-pool` | When the worker is attached to a work pool |

#### Payload

This event has no payload.
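For illustration, a `prefect.worker.started` event might look like the following (the worker name and work pool ID reuse the fabricated values from the `executed-flow-run` example later on this page):

```json theme={null}
{
  "occurred": "2026-03-31T18:25:00.000000Z",
  "event": "prefect.worker.started",
  "resource": {
    "prefect.resource.id": "prefect.worker.kubernetes.my-k8s-worker",
    "prefect.resource.name": "my-k8s-worker",
    "prefect.version": "3.6.0",
    "prefect.worker-type": "kubernetes"
  },
  "related": [
    {
      "prefect.resource.id": "prefect.work-pool.c3d4e5f6-a7b8-9012-cdef-123456789012",
      "prefect.resource.name": "my-k8s-pool",
      "prefect.resource.role": "work-pool"
    }
  ],
  "id": "a7890123-bcde-f456-7890-123456789abc"
}
```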
### `prefect.worker.stopped` Emitted when a worker process shuts down. Uses the `follows` field to link back to the corresponding `started` event. #### Resource Same as [`prefect.worker.started`](#prefectworkerstarted). #### Related resources Same as [`prefect.worker.started`](#prefectworkerstarted). #### Payload This event has no payload. ### `prefect.worker.submitted-flow-run` Emitted when a worker submits a flow run for execution on infrastructure. #### Resource Same as [`prefect.worker.started`](#prefectworkerstarted). #### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | ------------------------------ | | `prefect.flow-run.{uuid}` | `flow-run` | Always | | `prefect.flow.{uuid}` | `flow` | Always | | `prefect.deployment.{uuid}` | `deployment` | When triggered by a deployment | | `prefect.work-pool.{uuid}` | `work-pool` | When attached to a work pool | | `prefect.tag.{tag}` | `tag` | One entry per tag | #### Payload This event has no payload. ### `prefect.worker.executed-flow-run` Emitted when a worker finishes executing a flow run (regardless of outcome). Uses the `follows` field to link back to the corresponding `submitted-flow-run` event. #### Resource Same as [`prefect.worker.started`](#prefectworkerstarted). #### Related resources Same as [`prefect.worker.submitted-flow-run`](#prefectworkersubmitted-flow-run), with additional labels on the `flow-run` related resource: | Additional label | Description | | ------------------------------------ | ------------------------------------------------------------------------------- | | `prefect.infrastructure.identifier` | Infrastructure-specific identifier for the execution (for example, process PID) | | `prefect.infrastructure.status-code` | Exit status code of the infrastructure process | #### Payload This event has no payload. 
```json theme={null} { "occurred": "2026-03-31T18:35:00.000000Z", "event": "prefect.worker.executed-flow-run", "resource": { "prefect.resource.id": "prefect.worker.kubernetes.my-k8s-worker", "prefect.resource.name": "my-k8s-worker", "prefect.version": "3.6.0", "prefect.worker-type": "kubernetes" }, "related": [ { "prefect.resource.id": "prefect.flow-run.e3755d32-cec5-42ca-9bcd-af236e308ba6", "prefect.resource.name": "crimson-fox", "prefect.resource.role": "flow-run", "prefect.infrastructure.identifier": "my-job-abc123", "prefect.infrastructure.status-code": "0" }, { "prefect.resource.id": "prefect.work-pool.c3d4e5f6-a7b8-9012-cdef-123456789012", "prefect.resource.name": "my-k8s-pool", "prefect.resource.role": "work-pool" } ], "id": "d4e5f6a7-b890-1234-5678-9012abcdef34", "follows": "c3d4e5f6-a7b8-9012-cdef-123456789012" } ``` ## Runner events ### `prefect.runner.cancelled-flow-run` Emitted when a runner cancels a flow run that was being served locally via `flow.serve()` or the `Runner` API. #### Resource | Label | Description | | ----------------------- | ---------------------------- | | `prefect.resource.id` | `prefect.runner.{name-slug}` | | `prefect.resource.name` | Runner name | | `prefect.version` | Prefect SDK version | #### Related resources | Resource ID pattern | Role | When present | | --------------------------- | ------------ | -------------------------------------------------- | | `prefect.deployment.{uuid}` | `deployment` | When the flow run was triggered by a deployment | | `prefect.flow.{uuid}` | `flow` | When the flow is known | | `prefect.flow-run.{uuid}` | `flow-run` | Always | | `prefect.tag.{tag}` | `tag` | One entry per tag from the flow run and deployment | #### Payload This event has no payload. # API & SDK References Source: https://docs.prefect.io/v3/api-ref/index Explore Prefect's auto-generated API & SDK reference documentation. 
Prefect auto-generates reference documentation for the following components:

* **[Prefect Python SDK](/v3/api-ref/python)**: used to build, test, and execute workflows.
* **[Prefect REST API](/v3/api-ref/rest-api)**: used by workflow clients and the Prefect UI for orchestration and data retrieval.
* **[Events reference](/v3/api-ref/events)**: catalog of all events emitted by Prefect, Prefect Cloud, and integrations.
* Prefect Cloud REST API documentation: [https://app.prefect.cloud/api/docs](https://app.prefect.cloud/api/docs).
* Self-hosted Prefect server [REST API documentation](/v3/api-ref/rest-api/server/).

Additionally, if self-hosting a Prefect server instance, you can access REST API documentation at the `/docs` endpoint of your [`PREFECT_API_URL`](/v3/develop/settings-and-profiles/). For example, if you run `prefect server start` with no additional configuration, you can find this reference at [http://localhost:4200/docs](http://localhost:4200/docs).

# artifacts

Source: https://docs.prefect.io/v3/api-ref/python/prefect-artifacts

# `prefect.artifacts`

Interface for creating and reading artifacts.

## Functions

### `acreate_link_artifact`

```python theme={null}
acreate_link_artifact(link: str, link_text: str | None = None, key: str | None = None, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID
```

Create a link artifact asynchronously.

**Args:**

* `link`: The link to create.
* `link_text`: The link text.
* `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
* `description`: A user-specified description of the artifact.

**Returns:**

* The link artifact ID.

### `create_link_artifact`

```python theme={null}
create_link_artifact(link: str, link_text: str | None = None, key: str | None = None, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID
```

Create a link artifact.
**Args:**

* `link`: The link to create.
* `link_text`: The link text.
* `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
* `description`: A user-specified description of the artifact.

**Returns:**

* The link artifact ID.

### `acreate_markdown_artifact`

```python theme={null}
acreate_markdown_artifact(markdown: str, key: str | None = None, description: str | None = None) -> UUID
```

Create a markdown artifact asynchronously.

**Args:**

* `markdown`: The markdown to create.
* `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
* `description`: A user-specified description of the artifact.

**Returns:**

* The markdown artifact ID.

### `create_markdown_artifact`

```python theme={null}
create_markdown_artifact(markdown: str, key: str | None = None, description: str | None = None) -> UUID
```

Create a markdown artifact.

**Args:**

* `markdown`: The markdown to create.
* `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
* `description`: A user-specified description of the artifact.

**Returns:**

* The markdown artifact ID.

### `acreate_table_artifact`

```python theme={null}
acreate_table_artifact(table: dict[str, list[Any]] | list[dict[str, Any]] | list[list[Any]], key: str | None = None, description: str | None = None) -> UUID
```

Create a table artifact asynchronously.

**Args:**

* `table`: The table to create.
* `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes.
* `description`: A user-specified description of the artifact.

**Returns:**

* The table artifact ID.
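As the signature shows, the `table` argument accepts three shapes: a dict of column lists, a list of row dicts, or a list of lists. A minimal sketch (plain Python with made-up sample data, no Prefect API connection required) converting a column dict into the equivalent row dicts:

```python
# Sample table data in dict-of-columns form.
columns = {"region": ["us-east", "eu-west"], "count": [42, 7]}

# dict[str, list] -> list[dict]: pair each column name with the values
# at the same row index.
rows = [dict(zip(columns, values)) for values in zip(*columns.values())]
print(rows)
# [{'region': 'us-east', 'count': 42}, {'region': 'eu-west', 'count': 7}]
```

Either shape can be passed as `table=`; the dict-of-columns form is convenient when the data comes from columnar sources.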
### `create_table_artifact` ```python theme={null} create_table_artifact(table: dict[str, list[Any]] | list[dict[str, Any]] | list[list[Any]], key: str | None = None, description: str | None = None) -> UUID ``` Create a table artifact. **Args:** * `table`: The table to create. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The table artifact ID. ### `acreate_progress_artifact` ```python theme={null} acreate_progress_artifact(progress: float, key: str | None = None, description: str | None = None) -> UUID ``` Create a progress artifact asynchronously. **Args:** * `progress`: The percentage of progress represented by a float between 0 and 100. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `create_progress_artifact` ```python theme={null} create_progress_artifact(progress: float, key: str | None = None, description: str | None = None) -> UUID ``` Create a progress artifact. **Args:** * `progress`: The percentage of progress represented by a float between 0 and 100. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `aupdate_progress_artifact` ```python theme={null} aupdate_progress_artifact(artifact_id: UUID, progress: float, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID ``` Update a progress artifact asynchronously. 
**Args:** * `artifact_id`: The ID of the artifact to update. * `progress`: The percentage of progress represented by a float between 0 and 100. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `update_progress_artifact` ```python theme={null} update_progress_artifact(artifact_id: UUID, progress: float, description: str | None = None, client: 'PrefectClient | None' = None) -> UUID ``` Update a progress artifact. **Args:** * `artifact_id`: The ID of the artifact to update. * `progress`: The percentage of progress represented by a float between 0 and 100. * `description`: A user-specified description of the artifact. **Returns:** * The progress artifact ID. ### `acreate_image_artifact` ```python theme={null} acreate_image_artifact(image_url: str, key: str | None = None, description: str | None = None) -> UUID ``` Create an image artifact asynchronously. **Args:** * `image_url`: The URL of the image to display. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The image artifact ID. ### `create_image_artifact` ```python theme={null} create_image_artifact(image_url: str, key: str | None = None, description: str | None = None) -> UUID ``` Create an image artifact. **Args:** * `image_url`: The URL of the image to display. * `key`: A user-provided string identifier. Required for the artifact to show in the Artifacts page in the UI. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. **Returns:** * The image artifact ID. ## Classes ### `Artifact` An artifact is a piece of data that is created by a flow or task run. 
[https://docs.prefect.io/latest/develop/artifacts](https://docs.prefect.io/latest/develop/artifacts) **Args:** * `type`: A string identifying the type of artifact. * `key`: A user-provided string identifier. The key must only contain lowercase letters, numbers, and dashes. * `description`: A user-specified description of the artifact. * `data`: A JSON payload that allows for a result to be retrieved. **Methods:** #### `acreate` ```python theme={null} acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python theme={null} aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python theme={null} aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` An async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python theme={null} aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` An async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python theme={null} create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. 
#### `format` ```python theme={null} format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python theme={null} get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python theme={null} get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `LinkArtifact` **Methods:** #### `acreate` ```python theme={null} acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python theme={null} aformat(self) -> str ``` #### `aformat` ```python theme={null} aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python theme={null} aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` An async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). 
#### `aget_or_create` ```python theme={null} aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` An async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python theme={null} create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python theme={null} format(self) -> str ``` #### `format` ```python theme={null} format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python theme={null} get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python theme={null} get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. 
### `MarkdownArtifact` **Methods:** #### `acreate` ```python theme={null} acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python theme={null} aformat(self) -> str ``` #### `aformat` ```python theme={null} aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python theme={null} aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` An async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python theme={null} aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` An async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python theme={null} create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python theme={null} format(self) -> str ``` #### `format` ```python theme={null} format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python theme={null} get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. 
* `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python theme={null} get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `TableArtifact` **Methods:** #### `acreate` ```python theme={null} acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python theme={null} aformat(self) -> str ``` #### `aformat` ```python theme={null} aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python theme={null} aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` An async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python theme={null} aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` An async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. 
* `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python theme={null} create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python theme={null} format(self) -> str ``` #### `format` ```python theme={null} format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python theme={null} get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python theme={null} get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `ProgressArtifact` **Methods:** #### `acreate` ```python theme={null} acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. 
#### `aformat` ```python theme={null} aformat(self) -> float ``` #### `aformat` ```python theme={null} aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python theme={null} aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` An async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python theme={null} aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` An async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python theme={null} create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python theme={null} format(self) -> float ``` #### `format` ```python theme={null} format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python theme={null} get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). 
#### `get_or_create` ```python theme={null} get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. ### `ImageArtifact` An artifact that will display an image from a publicly accessible URL in the UI. **Args:** * `image_url`: The URL of the image to display. **Methods:** #### `acreate` ```python theme={null} acreate(self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` An async method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `aformat` ```python theme={null} aformat(self) -> str ``` #### `aformat` ```python theme={null} aformat(self) -> str | float | int | dict[str, Any] ``` #### `aget` ```python theme={null} aget(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` An async method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `aget_or_create` ```python theme={null} aget_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` An async method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. 
* `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. #### `create` ```python theme={null} create(self: Self, client: 'PrefectClient | None' = None) -> 'ArtifactResponse' ``` A method to create an artifact. **Args:** * `client`: The PrefectClient **Returns:** * The created artifact. #### `format` ```python theme={null} format(self) -> str ``` This method is used to format the artifact data so it can be properly sent to the API when the .create() method is called. **Returns:** * The image URL. #### `format` ```python theme={null} format(self) -> str | float | int | dict[str, Any] ``` #### `get` ```python theme={null} get(cls, key: str | None = None, client: 'PrefectClient | None' = None) -> 'ArtifactResponse | None' ``` A method to get an artifact. **Args:** * `key`: The key of the artifact to get. * `client`: A client to use when calling the Prefect API. **Returns:** * The artifact (if found). #### `get_or_create` ```python theme={null} get_or_create(cls, key: str | None = None, description: str | None = None, data: dict[str, Any] | Any | None = None, client: 'PrefectClient | None' = None, **kwargs: Any) -> tuple['ArtifactResponse', bool] ``` A method to get or create an artifact. **Args:** * `key`: The key of the artifact to get or create. * `description`: The description of the artifact to create. * `data`: The data of the artifact to create. * `client`: The PrefectClient * `**kwargs`: Additional keyword arguments to use when creating the artifact. **Returns:** * The artifact, either retrieved or created. 
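Every artifact class above exposes `get_or_create`, which returns a `tuple['ArtifactResponse', bool]` whose second element reports whether a new artifact was created. The contract can be sketched against a plain dictionary standing in for the Prefect API (illustrative only; `ArtifactResponse` and the client round-trip are replaced by a dict):

```python
from typing import Any

def get_or_create_artifact(
    store: dict[str, dict[str, Any]], key: str, **fields: Any
) -> tuple[dict[str, Any], bool]:
    # Sketch of the documented get-or-create contract: return the existing
    # record for `key`, or create one and flag it as newly created.
    if key in store:
        return store[key], False
    store[key] = {"key": key, **fields}
    return store[key], True
```

Calling it twice with the same key returns the same record with a `False` created flag the second time, matching the documented "either retrieved or created" behavior.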
# __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-assets-__init__ # `prefect.assets` *This module is empty or contains only private/internal implementations.* # core Source: https://docs.prefect.io/v3/api-ref/python/prefect-assets-core # `prefect.assets.core` ## Functions ### `add_asset_metadata` ```python theme={null} add_asset_metadata(asset: str | Asset, metadata: dict[str, Any]) -> None ``` ## Classes ### `AssetProperties` Metadata properties to configure on an Asset **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Asset` Assets are objects that represent materialized data, providing a way to track lineage and dependencies. **Methods:** #### `add_metadata` ```python theme={null} add_metadata(self, metadata: dict[str, Any]) -> None ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # materialize Source: https://docs.prefect.io/v3/api-ref/python/prefect-assets-materialize # `prefect.assets.materialize` ## Functions ### `materialize` ```python theme={null} materialize(*assets: Union[str, Asset], **task_kwargs: Unpack[TaskOptions]) -> Callable[[Callable[P, R]], MaterializingTask[P, R]] ``` Decorator for materializing assets. **Args:** * `*assets`: Assets to materialize * `by`: An optional tool that is ultimately responsible for materializing the asset e.g. 
"dbt" or "spark" * `**task_kwargs`: Additional task configuration # automations Source: https://docs.prefect.io/v3/api-ref/python/prefect-automations # `prefect.automations` ## Classes ### `Automation` **Methods:** #### `acreate` ```python theme={null} acreate(self: Self) -> Self ``` Asynchronously create a new automation. Examples: ```python theme={null} auto_to_create = Automation( name="woodchonk", trigger=EventTrigger( expect={"animal.walked"}, match={ "genus": "Marmota", "species": "monax", }, posture="Reactive", threshold=3, within=timedelta(seconds=10), ), actions=[CancelFlowRun()] ) created_automation = await auto_to_create.acreate() ``` #### `adelete` ```python theme={null} adelete(self: Self) -> bool ``` Asynchronously delete an automation. Examples: ```python theme={null} auto = Automation.read(id = 123) await auto.adelete() ``` #### `adisable` ```python theme={null} adisable(self: Self) -> bool ``` Asynchronously disable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be disabled Example: ```python theme={null} auto = await Automation.aread(id = 123) await auto.adisable() ``` #### `aenable` ```python theme={null} aenable(self: Self) -> bool ``` Asynchronously enable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be enabled Example: ```python theme={null} auto = await Automation.aread(id = 123) await auto.aenable() ``` #### `aread` ```python theme={null} aread(cls, id: UUID, name: Optional[str] = ...) -> Self ``` #### `aread` ```python theme={null} aread(cls, id: None = None, name: str = ...) -> Self ``` #### `aread` ```python theme={null} aread(cls, id: Optional[UUID] = None, name: Optional[str] = None) -> Self ``` Asynchronously read an automation by ID or name. 
Examples: ```python theme={null} automation = await Automation.aread(name="woodchonk") ``` ```python theme={null} automation = await Automation.aread(id=UUID("b3514963-02b1-47a5-93d1-6eeb131041cb")) ``` #### `aupdate` ```python theme={null} aupdate(self: Self) -> None ``` Asynchronously update an existing automation. Examples: ```python theme={null} auto = await Automation.aread(id=123) auto.name = "new name" await auto.aupdate() ``` #### `create` ```python theme={null} create(self: Self) -> Self ``` Create a new automation. Examples: ```python theme={null} auto_to_create = Automation( name="woodchonk", trigger=EventTrigger( expect={"animal.walked"}, match={ "genus": "Marmota", "species": "monax", }, posture="Reactive", threshold=3, within=timedelta(seconds=10), ), actions=[CancelFlowRun()] ) created_automation = auto_to_create.create() ``` #### `delete` ```python theme={null} delete(self: Self) -> bool ``` Delete an automation. Examples: ```python theme={null} auto = Automation.read(id=123) auto.delete() ``` #### `disable` ```python theme={null} disable(self: Self) -> bool ``` Disable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be disabled Example: ```python theme={null} auto = Automation.read(id=123) auto.disable() ``` #### `enable` ```python theme={null} enable(self: Self) -> bool ``` Enable an automation. **Raises:** * `ValueError`: If the automation does not have an id * `PrefectHTTPStatusError`: If the automation cannot be enabled Example: ```python theme={null} auto = Automation.read(id=123) auto.enable() ``` #### `read` ```python theme={null} read(cls, id: UUID, name: Optional[str] = ...) -> Self ``` #### `read` ```python theme={null} read(cls, id: None = None, name: str = ...) -> Self ``` #### `read` ```python theme={null} read(cls, id: Optional[UUID] = None, name: Optional[str] = None) -> Self ``` Read an automation by ID or name. 
Examples: ```python theme={null} automation = Automation.read(name="woodchonk") ``` ```python theme={null} automation = Automation.read(id=UUID("b3514963-02b1-47a5-93d1-6eeb131041cb")) ``` #### `update` ```python theme={null} update(self: Self) ``` Updates an existing automation. Examples: ```python theme={null} auto = Automation.read(id=123) auto.name = "new name" auto.update() ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-__init__ # `prefect.blocks` *This module is empty or contains only private/internal implementations.* # abstract Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-abstract # `prefect.blocks.abstract` ## Classes ### `CredentialsBlock` Stores credentials for an external system and exposes a client for interacting with that system. Can also hold config that is tightly coupled to credentials (domain, endpoint, account ID, etc.) Will often be composed with other blocks. Parent block should rely on the client provided by a credentials block for interacting with the corresponding external system. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. 
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. 
If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. 
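`get_block_placeholder` returns the templating string `prefect.blocks.{block_type_name}.{block_document_name}` for a saved block. A standalone sketch of that format (the helper name is ours, not part of the `Block` API):

```python
def block_placeholder(block_type_name: str, block_document_name: str) -> str:
    # Mirrors the documented placeholder format used for templating:
    #   prefect.blocks.{block_type_name}.{block_document_name}
    return f"prefect.blocks.{block_type_name}.{block_document_name}"
```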
#### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_client` ```python theme={null} get_client(self, *args: Any, **kwargs: Any) -> Any ``` Returns a client for interacting with the external system. If a service offers various clients, this method can accept a `client_type` keyword argument to get the desired client within the service. #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Custom.load("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Block.load("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = Custom.load("my-custom-message", validate=False)
# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `load_from_ref`

```python theme={null}
load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Retrieves data from the block document by the given reference for the block type that corresponds with the current
class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context.

The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

#### `logger`

```python theme={null}
logger(self) -> LoggerOrAdapter
```

Returns a logger based on whether the CredentialsBlock is called from within a flow or task run context.
If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. 
This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. #### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `NotificationError` Raised if a notification block fails to send a notification. ### `NotificationBlock` Block that represents a resource in an external system that is able to send notifications. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. 
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = await Custom.aload("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = await Block.aload("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = await Custom.aload("my-custom-message", validate=False)
# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `aload_from_ref`

```python theme={null}
aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Asynchronously retrieves data from the block document by the given reference for the block type
that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.
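To make the accepted reference shapes concrete, here is a small stand-alone sketch that mirrors the dispatch described above. `parse_ref` is a hypothetical helper written for illustration, not part of Prefect's API.

```python
from uuid import UUID

# Hypothetical helper mirroring the reference formats `aload_from_ref`
# is documented to accept. This is a simplified stand-in, not Prefect's
# implementation.
def parse_ref(ref):
    if isinstance(ref, (str, UUID)):
        # A bare string or UUID is treated as a block document ID.
        return ("block_document_id", str(ref))
    if isinstance(ref, dict):
        if "block_document_id" in ref:
            return ("block_document_id", str(ref["block_document_id"]))
        if "block_document_slug" in ref:
            return ("block_document_slug", ref["block_document_slug"])
    # Mirrors the documented ValueError for invalid reference formats.
    raise ValueError(f"Invalid reference format: {ref!r}")

print(parse_ref({"block_document_slug": "custom/my-custom-message"}))
# ('block_document_slug', 'custom/my-custom-message')
```

Any other dictionary shape falls through to the `ValueError`, matching the "invalid reference format" behavior documented above.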
#### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. 
Recursively collects all block capabilities of all parent classes into a single frozenset.

#### `get_block_class_from_key`

```python theme={null}
get_block_class_from_key(cls: type[Self], key: str) -> type[Self]
```

Retrieve the block class implementation given a key.

#### `get_block_class_from_schema`

```python theme={null}
get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self]
```

Retrieve the block class implementation given a schema.

#### `get_block_placeholder`

```python theme={null}
get_block_placeholder(self) -> str
```

Returns the block placeholder for the current block which can be used for templating.

**Returns:**

* The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}`

**Raises:**

* `BlockNotSavedError`: Raised if the block has not been saved.

#### `get_block_schema_version`

```python theme={null}
get_block_schema_version(cls) -> str
```

#### `get_block_type_name`

```python theme={null}
get_block_type_name(cls) -> str
```

#### `get_block_type_slug`

```python theme={null}
get_block_type_slug(cls) -> str
```

#### `get_code_example`

```python theme={null}
get_code_example(cls) -> Optional[str]
```

Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

#### `get_description`

```python theme={null}
get_description(cls) -> Optional[str]
```

Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.
#### `is_block_class`

```python theme={null}
is_block_class(block: Any) -> TypeGuard[type['Block']]
```

#### `load`

```python theme={null}
load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self'
```

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Custom.load("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Block.load("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = Custom.load("my-custom-message", validate=False)
# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `load_from_ref`

```python theme={null}
load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context.

The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.
If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

#### `logger`

```python theme={null}
logger(self) -> LoggerOrAdapter
```

Returns a logger based on whether the NotificationBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name.

**Returns:**

* The run logger or a default logger with the class's name.

#### `model_dump`

```python theme={null}
model_dump(self) -> dict[str, Any]
```

#### `model_json_schema`

```python theme={null}
model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any]
```

TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?
#### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` Send a notification. **Args:** * `body`: The body of the notification. * `subject`: The subject of the notification. #### `raise_on_failure` ```python theme={null} raise_on_failure(self) -> Generator[None, None, None] ``` Context manager that, while active, causes the block to raise errors if it encounters a failure sending notifications. #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. 
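The `notify` / `raise_on_failure` contract described above can be sketched with a simplified stand-in class. `DemoNotifier` and its in-memory "transport" are hypothetical; a real notification block sends to an external service and may suppress send failures unless `raise_on_failure` is active.

```python
from contextlib import contextmanager

# Simplified, hypothetical stand-in for a NotificationBlock subclass,
# sketching the `notify` / `raise_on_failure` contract. Not Prefect code.
class DemoNotifier:
    def __init__(self):
        self._raise_on_failure = False
        self.sent = []

    @contextmanager
    def raise_on_failure(self):
        # While this context is active, send failures propagate as errors
        # instead of being suppressed.
        self._raise_on_failure = True
        try:
            yield
        finally:
            self._raise_on_failure = False

    def notify(self, body, subject=None):
        try:
            self.sent.append((subject, body))  # pretend to deliver
        except Exception:
            if self._raise_on_failure:
                raise

notifier = DemoNotifier()
with notifier.raise_on_failure():
    notifier.notify("Flow run failed!", subject="Alert")
```

Outside the `raise_on_failure` context, a send failure in this sketch would be swallowed silently; inside it, the exception reaches the caller.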
#### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `JobRun` Represents a job run in an external system. Allows waiting for the job run's completion and fetching its results. **Methods:** #### `fetch_result` ```python theme={null} fetch_result(self) -> T ``` Retrieve the results of the job run and return them. #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the JobRun is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `wait_for_completion` ```python theme={null} wait_for_completion(self) -> Logger ``` Wait for the job run to complete. ### `JobBlock` Block that represents an entity in an external service that can trigger a long running execution. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. 
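The JobRun / JobBlock contract described above follows a trigger-wait-fetch pattern. The sketch below is a minimal, hypothetical stand-in (the `trigger` method, class names, and result payload are illustrative assumptions, not Prefect's implementation):

```python
# Hypothetical stand-in sketching the JobBlock / JobRun contract:
# trigger a run in an external system, wait for it to complete, then
# fetch its result. Not Prefect's implementation.
class DemoJobRun:
    def __init__(self, result):
        self._result = result
        self._done = False

    def wait_for_completion(self):
        # A real JobRun would poll the external system here.
        self._done = True

    def fetch_result(self):
        if not self._done:
            raise RuntimeError("Job has not completed yet")
        return self._result

class DemoJobBlock:
    def trigger(self):
        # A real block would submit a job and return a handle to it.
        return DemoJobRun(result={"rows_processed": 100})

job_run = DemoJobBlock().trigger()
job_run.wait_for_completion()
result = job_run.fetch_result()
print(result)  # {'rows_processed': 100}
```

Separating the run handle from the block lets callers trigger a job, do other work, and only block on `wait_for_completion` when the result is actually needed.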
#### `aload`

```python theme={null}
aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self'
```

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = await Custom.aload("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = await Block.aload("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = await Custom.aload("my-custom-message", validate=False)
# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `aload_from_ref`

```python theme={null}
aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error.
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

#### `annotation_refers_to_block_class`

```python theme={null}
annotation_refers_to_block_class(annotation: Any) -> bool
```

#### `aregister_type_and_schema`

```python theme={null}
aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None
```

Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

**Args:**

* `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

#### `asave`

```python theme={null}
asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID
```

Asynchronously saves the values of a block as a block document.

**Args:**

* `name`: User specified name to give saved block document which can later be used to load the block document.
* `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists.
* `client`: The client to use to save the block document.
If not provided, the default client will be injected.

**Returns:**

* The ID of the saved block document.

#### `block_initialization`

```python theme={null}
block_initialization(self) -> None
```

#### `delete`

```python theme={null}
delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None
```

Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context.

**Args:**

* `name`: The name of the block document to delete.
* `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

#### `get_block_capabilities`

```python theme={null}
get_block_capabilities(cls) -> FrozenSet[str]
```

Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.

#### `get_block_class_from_key`

```python theme={null}
get_block_class_from_key(cls: type[Self], key: str) -> type[Self]
```

Retrieve the block class implementation given a key.

#### `get_block_class_from_schema`

```python theme={null}
get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self]
```

Retrieve the block class implementation given a schema.

#### `get_block_placeholder`

```python theme={null}
get_block_placeholder(self) -> str
```

Returns the block placeholder for the current block which can be used for templating.

**Returns:**

* The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}`

**Raises:**

* `BlockNotSavedError`: Raised if the block has not been saved.
#### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema.
This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID, or reference data in dictionary format.
Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the JobBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name.
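The dispatch rules for the reference formats listed above can be sketched without Prefect. The helper `normalize_ref` below is hypothetical and only illustrates how a string, `UUID`, or dictionary reference might be interpreted; in this sketch, a bare string is treated as a block document ID:

```python
# Standalone sketch (no Prefect import) of how `load_from_ref` might
# interpret its supported reference formats. `normalize_ref` is a
# hypothetical helper, not part of the Prefect API.
from typing import Any, Union
from uuid import UUID

def normalize_ref(ref: Union[str, UUID, dict[str, Any]]) -> tuple[str, str]:
    """Return a (kind, value) pair for a supported block document reference."""
    if isinstance(ref, UUID):
        return ("block_document_id", str(ref))
    if isinstance(ref, str):
        # In this sketch, a bare string is treated as a block document ID.
        return ("block_document_id", ref)
    if isinstance(ref, dict):
        if "block_document_id" in ref:
            return ("block_document_id", str(ref["block_document_id"]))
        if "block_document_slug" in ref:
            return ("block_document_slug", ref["block_document_slug"])
    raise ValueError(f"Invalid reference format: {ref!r}")

print(normalize_ref({"block_document_slug": "custom/my-custom-message"}))
# ('block_document_slug', 'custom/my-custom-message')
```

Any reference that matches none of the branches falls through to the `ValueError`, mirroring the documented `Raises` behavior.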
#### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. 
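The `overwrite` behavior described for `save` can be illustrated with a standalone sketch. `InMemoryStore` below is a hypothetical stand-in for the block document store, not Prefect's implementation; it only shows that re-saving under an existing name must be made explicit:

```python
# Standalone sketch of the `overwrite` semantics described for `save`:
# saving under an existing name is rejected unless overwrite=True.
# `InMemoryStore` is a hypothetical stand-in for the block document store.
from uuid import UUID, uuid4

class InMemoryStore:
    def __init__(self) -> None:
        self._docs: dict[str, dict] = {}
        self._ids: dict[str, UUID] = {}

    def save(self, name: str, values: dict, overwrite: bool = False) -> UUID:
        if name in self._docs and not overwrite:
            raise ValueError(
                f"A block document named {name!r} already exists; "
                "pass overwrite=True to replace it."
            )
        self._docs[name] = values
        # In this sketch, overwriting keeps the original document ID.
        self._ids.setdefault(name, uuid4())
        return self._ids[name]

store = InMemoryStore()
store.save("my-custom-message", {"message": "Hello!"})
try:
    store.save("my-custom-message", {"message": "Hi!"})
except ValueError as exc:
    print(exc)
store.save("my-custom-message", {"message": "Hi!"}, overwrite=True)
```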
#### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `trigger` ```python theme={null} trigger(self) -> JobRun[T] ``` Triggers a job run in an external service and returns a JobRun object to track the execution of the run. #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `DatabaseBlock` An abstract block type that represents a database and provides an interface for interacting with it. Blocks that implement this interface have the option to accept credentials directly via attributes or via a nested `CredentialsBlock`. Use of a nested credentials block is recommended unless credentials are tightly coupled to database connection configuration. Implementing either sync or async context management on `DatabaseBlock` implementations is recommended. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error.
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document.
If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `execute` ```python theme={null} execute(self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any) -> None ``` Executes an operation on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. #### `execute_many` ```python theme={null} execute_many(self, operation: str, seq_of_parameters: list[dict[str, Any]], **execution_kwargs: Any) -> None ``` Executes multiple operations on the database. This method is intended to be used for operations that do not return data, such as INSERT, UPDATE, or DELETE. **Args:** * `operation`: The SQL query or other operation to be executed. * `seq_of_parameters`: The sequence of parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. #### `fetch_all` ```python theme={null} fetch_all(self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any) -> list[tuple[Any, ...]] ``` Fetch all results from the database. **Args:** * `operation`: The SQL query or other operation to be executed. 
* `parameters`: The parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. #### `fetch_many` ```python theme={null} fetch_many(self, operation: str, parameters: dict[str, Any] | None = None, size: int | None = None, **execution_kwargs: Any) -> list[tuple[Any, ...]] ``` Fetch a limited number of results from the database. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `size`: The number of results to return. * `**execution_kwargs`: Additional keyword arguments to pass to execute. **Returns:** * A list of tuples containing the data returned by the database, where each row is a tuple and each column is a value in the tuple. #### `fetch_one` ```python theme={null} fetch_one(self, operation: str, parameters: dict[str, Any] | None = None, **execution_kwargs: Any) -> tuple[Any, ...] ``` Fetch a single result from the database. **Args:** * `operation`: The SQL query or other operation to be executed. * `parameters`: The parameters for the operation. * `**execution_kwargs`: Additional keyword arguments to pass to execute. **Returns:** * A single tuple containing the data returned by the database, where each column is a value in the tuple. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key.
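The row shapes described for `fetch_one`, `fetch_many`, and `fetch_all` mirror the DB-API cursor methods. As a standalone illustration using the stdlib `sqlite3` module rather than a real `DatabaseBlock` implementation:

```python
# Standalone sketch of the row shapes described above, using stdlib
# sqlite3 in place of a real DatabaseBlock implementation. The table
# and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ducks (name TEXT, count INTEGER)")
conn.executemany(
    "INSERT INTO ducks VALUES (:name, :count)",
    [{"name": "mallard", "count": 3}, {"name": "teal", "count": 7}],
)

cursor = conn.execute("SELECT name, count FROM ducks ORDER BY count")
print(cursor.fetchone())    # a single row as a tuple: ('mallard', 3)

cursor = conn.execute("SELECT name, count FROM ducks ORDER BY count")
print(cursor.fetchmany(1))  # a list of up to `size` tuples: [('mallard', 3)]

cursor = conn.execute("SELECT name, count FROM ducks ORDER BY count")
print(cursor.fetchall())    # a list of tuples, one per row
```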
#### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.
If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the DatabaseBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?
#### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. #### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. 
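To sketch how the `DatabaseBlock` interface above might be implemented, the following hypothetical `SqliteDatabase` class wires `execute`, `execute_many`, `fetch_all`, and `fetch_one` to the stdlib `sqlite3` module and adds the recommended sync context management. It deliberately omits the `Block` base class and credentials handling, so it illustrates the interface only:

```python
# Hypothetical DatabaseBlock-style implementation backed by stdlib
# sqlite3, sketching the interface documented above with the
# recommended sync context management. `SqliteDatabase` is not part
# of Prefect and omits the Block base class and credentials handling.
import sqlite3
from typing import Any, Optional

class SqliteDatabase:
    def __init__(self, database: str = ":memory:") -> None:
        self.database = database
        self._conn: Optional[sqlite3.Connection] = None

    def __enter__(self) -> "SqliteDatabase":
        self._conn = sqlite3.connect(self.database)
        return self

    def __exit__(self, *exc: Any) -> None:
        if self._conn is not None:
            self._conn.close()

    def _cursor(self, operation: str, parameters: Optional[dict[str, Any]]) -> sqlite3.Cursor:
        assert self._conn is not None, "use within a context manager"
        if parameters is None:
            return self._conn.execute(operation)
        return self._conn.execute(operation, parameters)

    def execute(self, operation: str, parameters: Optional[dict[str, Any]] = None) -> None:
        self._cursor(operation, parameters)

    def execute_many(self, operation: str, seq_of_parameters: list[dict[str, Any]]) -> None:
        assert self._conn is not None, "use within a context manager"
        self._conn.executemany(operation, seq_of_parameters)

    def fetch_all(self, operation: str, parameters: Optional[dict[str, Any]] = None) -> list[tuple[Any, ...]]:
        return self._cursor(operation, parameters).fetchall()

    def fetch_one(self, operation: str, parameters: Optional[dict[str, Any]] = None) -> tuple[Any, ...]:
        return self._cursor(operation, parameters).fetchone()

with SqliteDatabase() as db:
    db.execute("CREATE TABLE ducks (name TEXT)")
    db.execute_many("INSERT INTO ducks VALUES (:name)", [{"name": "mallard"}])
    print(db.fetch_all("SELECT name FROM ducks"))  # [('mallard',)]
```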
### `ObjectStorageBlock` Block that represents a resource in an external service that can store objects. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found.
**Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID, or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.
If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document.
* `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `download_folder_to_path` ```python theme={null} download_folder_to_path(self, from_folder: str, to_folder: str | Path, **download_kwargs: Any) -> Path ``` Downloads a folder from the object storage service to a path. **Args:** * `from_folder`: The path to the folder to download from. * `to_folder`: The path to download the folder to. * `**download_kwargs`: Additional keyword arguments to pass to download. **Returns:** * The path that the folder was downloaded to. #### `download_object_to_file_object` ```python theme={null} download_object_to_file_object(self, from_path: str, to_file_object: BinaryIO, **download_kwargs: Any) -> BinaryIO ``` Downloads an object from the object storage service to a file-like object, which can be a BytesIO object or a BufferedWriter. **Args:** * `from_path`: The path to download from. * `to_file_object`: The file-like object to download to. * `**download_kwargs`: Additional keyword arguments to pass to download. **Returns:** * The file-like object that the object was downloaded to. 
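The file-like target accepted by `download_object_to_file_object` can be any binary buffer, such as a `BytesIO`. The copy-and-return pattern the docstring describes can be sketched with the standard library alone (illustrative only; `download_to_file_object` is a hypothetical helper, not Prefect's implementation):

```python
import io
import shutil

def download_to_file_object(source: io.BufferedIOBase, to_file_object: io.BufferedIOBase) -> io.BufferedIOBase:
    # Copy the object's bytes into the supplied file-like target,
    # then return that same target, mirroring the documented contract.
    shutil.copyfileobj(source, to_file_object)
    return to_file_object

# "Download" bytes into an in-memory buffer.
target = download_to_file_object(io.BytesIO(b"object contents"), io.BytesIO())
target.seek(0)
print(target.read())  # b'object contents'
```

A `BufferedWriter` opened with `open(path, "wb")` works the same way as the `BytesIO` target here.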
#### `download_object_to_path` ```python theme={null} download_object_to_path(self, from_path: str, to_path: str | Path, **download_kwargs: Any) -> Path ``` Downloads an object from the object storage service to a path. **Args:** * `from_path`: The path to download from. * `to_path`: The path to download to. * `**download_kwargs`: Additional keyword arguments to pass to download. **Returns:** * The path that the object was downloaded to. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. 
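The placeholder format documented for `get_block_placeholder` above is plain string interpolation; a minimal sketch (the `block_placeholder` function is a hypothetical helper for illustration, not Prefect's implementation):

```python
def block_placeholder(block_type_name: str, block_document_name: str) -> str:
    # Documented format: prefect.blocks.{block_type_name}.{block_document_name}
    return f"prefect.blocks.{block_type_name}.{block_document_name}"

print(block_placeholder("custom", "my-custom-message"))  # prefect.blocks.custom.my-custom-message
```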
#### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.
If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the ObjectStorageBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Otherwise, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?
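The reference formats accepted by `load_from_ref` and `aload_from_ref` can be thought of as being normalized to a dictionary before lookup. A minimal sketch of that normalization, raising `ValueError` on an invalid format as documented (`normalize_block_ref` is a hypothetical helper, not Prefect's internal code):

```python
from typing import Any, Union
from uuid import UUID

def normalize_block_ref(ref: Union[str, UUID, dict[str, Any]]) -> dict[str, Any]:
    # A bare ID (string or UUID) is treated as a block document ID reference.
    if isinstance(ref, (str, UUID)):
        return {"block_document_id": str(ref)}
    # A dictionary reference must use one of the documented keys.
    if isinstance(ref, dict) and ({"block_document_id", "block_document_slug"} & ref.keys()):
        return ref
    raise ValueError(f"Invalid block reference: {ref!r}")
```

For example, `normalize_block_ref({"block_document_slug": "custom/my-custom-message"})` passes through unchanged, while an unrecognized dictionary raises `ValueError`.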
#### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. #### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `upload_from_file_object` ```python theme={null} upload_from_file_object(self, from_file_object: BinaryIO, to_path: str, **upload_kwargs: Any) -> str ``` Uploads an object to the object storage service from a file-like object, which can be a BytesIO object or a BufferedReader. **Args:** * `from_file_object`: The file-like object to upload from. * `to_path`: The path to upload the object to. 
* `**upload_kwargs`: Additional keyword arguments to pass to upload. **Returns:** * The path that the object was uploaded to. #### `upload_from_folder` ```python theme={null} upload_from_folder(self, from_folder: str | Path, to_folder: str, **upload_kwargs: Any) -> str ``` Uploads a folder to the object storage service from a path. **Args:** * `from_folder`: The path to the folder to upload from. * `to_folder`: The path to upload the folder to. * `**upload_kwargs`: Additional keyword arguments to pass to upload. **Returns:** * The path that the folder was uploaded to. #### `upload_from_path` ```python theme={null} upload_from_path(self, from_path: str | Path, to_path: str, **upload_kwargs: Any) -> str ``` Uploads an object from a path to the object storage service. **Args:** * `from_path`: The path to the file to upload from. * `to_path`: The path to upload the file to. * `**upload_kwargs`: Additional keyword arguments to pass to upload. **Returns:** * The path that the object was uploaded to. #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `SecretBlock` Block that represents a resource that can store and retrieve secrets. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. 
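A block document slug, as used by `load` and `aload`, combines a block type slug and a block document name separated by a slash (e.g. `custom/my-custom-message`). Splitting one can be sketched as follows (`parse_block_slug` is a hypothetical helper for illustration, not Prefect's implementation):

```python
def parse_block_slug(slug: str) -> tuple[str, str]:
    # Split at the first "/" into (block type slug, block document name).
    block_type_slug, sep, block_document_name = slug.partition("/")
    if not sep or not block_type_slug or not block_document_name:
        raise ValueError(f"Not a block document slug: {slug!r}")
    return block_type_slug, block_document_name

print(parse_block_slug("custom/my-custom-message"))  # ('custom', 'my-custom-message')
```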
#### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error.
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document.
If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. 
#### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema.
This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID or reference data in dictionary format.
Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the SecretBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Otherwise, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name.
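The fallback behavior of the `logger` property (use the run logger when a run context is present, otherwise a default logger named after the class) can be sketched with the stdlib `logging` module; `get_block_logger` is a hypothetical helper for illustration, not Prefect's implementation:

```python
import logging
from typing import Optional

def get_block_logger(cls_name: str, run_logger: Optional[logging.Logger] = None) -> logging.Logger:
    # Prefer the run logger when a flow/task run context supplied one...
    if run_logger is not None:
        return run_logger
    # ...otherwise fall back to a default logger labeled with the class's name.
    return logging.getLogger(cls_name)

print(get_block_logger("SecretBlock").name)  # SecretBlock
```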
#### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `read_secret` ```python theme={null} read_secret(self) -> bytes ``` Reads the configured secret from the secret storage service. **Returns:** * The secret data. #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. 
This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. #### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. #### `write_secret` ```python theme={null} write_secret(self, secret_data: bytes) -> str ``` Writes secret data to the configured secret in the secret storage service. **Args:** * `secret_data`: The secret data to write. **Returns:** * The key of the secret that can be used for retrieval. # core Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-core # `prefect.blocks.core` ## Functions ### `block_schema_to_key` ```python theme={null} block_schema_to_key(schema: BlockSchema) -> str ``` Defines the unique key used to look up the Block class for a given schema. ### `schema_extra` ```python theme={null} schema_extra(schema: dict[str, Any], model: type['Block']) -> None ``` Customizes Pydantic's schema generation feature to add block-related information. ## Classes ### `InvalidBlockRegistration` Raised on attempted registration of the base Block class or a Block interface class. ### `UnknownBlockType` Raised when a block type is not found in the registry. ### `BlockNotSavedError` Raised when a given block is not saved and an operation that requires the block to be saved is attempted. ### `Block` A base class for implementing a block that wraps an external service. This class can be defined with an arbitrary set of fields and methods, and couples business logic with data contained in a block document.
`_block_document_name`, `_block_document_id`, `_block_schema_id`, and `_block_type_id` are reserved by Prefect as Block metadata fields, but otherwise a Block can implement arbitrary logic. Blocks can be instantiated without populating these metadata fields, but can only be used interactively, not with the Prefect API. Instead of the `__init__` method, a block implementation allows the definition of a `block_initialization` method that is called after initialization. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document.
A block document slug is a string with the format `<block_type_slug>/<block_document_name>`. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.
The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
**Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. 
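The recursive capability merge described for `get_block_capabilities` can be sketched with plain Python classes (an illustrative sketch only; the `_capabilities` attribute name here is hypothetical and is not Prefect's internal attribute):

```python theme={null}
class Base:
    _capabilities: list[str] = []  # hypothetical attribute for this sketch

    @classmethod
    def get_capabilities(cls) -> frozenset[str]:
        # Walk the MRO and merge every class's declared capabilities
        # into a single frozenset, mirroring the documented behavior.
        merged: set[str] = set()
        for klass in cls.__mro__:
            merged.update(getattr(klass, "_capabilities", []))
        return frozenset(merged)


class Readable(Base):
    _capabilities = ["read-path"]


class ReadWrite(Readable):
    _capabilities = ["write-path"]


print(sorted(ReadWrite.get_capabilities()))  # ['read-path', 'write-path']
```

Because the merge walks the full class hierarchy, a subclass never needs to repeat the capabilities its parents already declare.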
#### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client:
'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
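The reference formats accepted by `load_from_ref` can be illustrated with a small helper that classifies a reference the way the docstring describes (an illustrative sketch under the documented formats, not Prefect's implementation; `classify_block_ref` is a hypothetical name):

```python theme={null}
from typing import Any, Union
from uuid import UUID


def classify_block_ref(ref: Union[str, UUID, dict[str, Any]]) -> tuple[str, str]:
    """Classify a block document reference into (kind, value)."""
    if isinstance(ref, (str, UUID)):
        # A bare value is treated as a block document ID
        return ("id", str(ref))
    if isinstance(ref, dict):
        if "block_document_id" in ref:
            return ("id", str(ref["block_document_id"]))
        if "block_document_slug" in ref:
            return ("slug", ref["block_document_slug"])
    raise ValueError(f"Invalid reference format: {ref!r}")


print(classify_block_ref({"block_document_slug": "custom/my-custom-message"}))
# ('slug', 'custom/my-custom-message')
```

Any dictionary that carries neither recognized key falls through to the `ValueError`, matching the documented "invalid reference format" error.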
#### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. 
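Several methods above note that they "dispatch to" an async counterpart (e.g. `save` to `asave`) when called from an async context. That dispatch pattern can be sketched by checking for a running event loop (an illustrative pattern only, not Prefect's implementation; the function bodies are hypothetical stand-ins):

```python theme={null}
import asyncio


def _in_async_context() -> bool:
    # True only when an event loop is currently running in this thread
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return False
    return True


def save() -> str:
    # Hypothetical stand-in: a real implementation would delegate to the
    # async counterpart (e.g. asave) instead of returning a string.
    if _in_async_context():
        return "dispatched to async counterpart"
    return "ran synchronously"


print(save())  # ran synchronously


async def main() -> None:
    print(save())  # dispatched to async counterpart


asyncio.run(main())
```

This is why the `client` argument notes say certain values are "ignored when called from a synchronous context": the sync path and the async path take different code routes.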
#### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. # fields Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-fields # `prefect.blocks.fields` *This module is empty or contains only private/internal implementations.* # notifications Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-notifications # `prefect.blocks.notifications` ## Classes ### `AbstractAppriseNotificationBlock` An abstract class for sending notifications using Apprise. **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the NotificationBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name. #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` Send a notification. **Args:** * `body`: The body of the notification. * `subject`: The subject of the notification. 
#### `raise_on_failure` ```python theme={null} raise_on_failure(self) -> Generator[None, None, None] ``` Context manager that, while active, causes the block to raise errors if it encounters a failure sending notifications. ### `AppriseNotificationBlock` A base class for sending notifications using Apprise, through webhook URLs. **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `SlackWebhook` Enables sending notifications via a provided Slack webhook. Supports both standard Slack webhooks (hooks.slack.com) and Slack GovCloud webhooks (hooks.slack-gov.com). **Examples:** Load a saved Slack webhook and send a message: ```python theme={null} from prefect.blocks.notifications import SlackWebhook slack_webhook_block = SlackWebhook.load("BLOCK_NAME") slack_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` Initialize the Slack webhook client. This method handles both standard Slack webhooks and Slack GovCloud webhooks. Apprise's built-in Slack plugin only supports hooks.slack.com, so we need to manually construct the NotifySlack instance for slack-gov.com URLs to ensure notifications are sent to the correct host. 
See: [https://github.com/caronc/apprise/issues/XXXX](https://github.com/caronc/apprise/issues/XXXX) (upstream issue) #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `MicrosoftTeamsWebhook` Enables sending notifications via a provided Microsoft Teams webhook. **Examples:** Load a saved Teams webhook and send a message: ```python theme={null} from prefect.blocks.notifications import MicrosoftTeamsWebhook teams_webhook_block = MicrosoftTeamsWebhook.load("BLOCK_NAME") teams_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` see [https://github.com/caronc/apprise/pull/1172](https://github.com/caronc/apprise/pull/1172) #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `PagerDutyWebHook` Enables sending notifications via a provided PagerDuty webhook. See [Apprise notify\_pagerduty docs](https://github.com/caronc/apprise/wiki/Notify_pagerduty) for more info on formatting the URL. **Examples:** Load a saved PagerDuty webhook and send a message: ```python theme={null} from prefect.blocks.notifications import PagerDutyWebHook pagerduty_webhook_block = PagerDutyWebHook.load("BLOCK_NAME") pagerduty_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` Apprise will combine subject and body by default, so we need to move the body into the custom\_details field. custom\_details is part of the webhook url, so we need to update the url and restart the client. 
#### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` Apprise will combine subject and body by default, so we need to move the body into the custom\_details field. custom\_details is part of the webhook url, so we need to update the url and restart the client. #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `TwilioSMS` Enables sending notifications via Twilio SMS. Find more on sending Twilio SMS messages in the [docs](https://www.twilio.com/docs/sms). **Examples:** Load a saved `TwilioSMS` block and send a message: ```python theme={null} from prefect.blocks.notifications import TwilioSMS twilio_webhook_block = TwilioSMS.load("BLOCK_NAME") twilio_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `OpsgenieWebhook` Enables sending notifications via a provided Opsgenie webhook. See [Apprise notify\_opsgenie docs](https://github.com/caronc/apprise/wiki/Notify_opsgenie) for more info on formatting the URL. 
**Examples:** Load a saved Opsgenie webhook and send a message: ```python theme={null} from prefect.blocks.notifications import OpsgenieWebhook opsgenie_webhook_block = OpsgenieWebhook.load("BLOCK_NAME") opsgenie_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `MattermostWebhook` Enables sending notifications via a provided Mattermost webhook. See [Apprise notify\_Mattermost docs](https://github.com/caronc/apprise/wiki/Notify_Mattermost). **Examples:** Load a saved Mattermost webhook and send a message: ```python theme={null} from prefect.blocks.notifications import MattermostWebhook mattermost_webhook_block = MattermostWebhook.load("BLOCK_NAME") mattermost_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `DiscordWebhook` Enables sending notifications via a provided Discord webhook.
See [Apprise notify\_Discord docs](https://github.com/caronc/apprise/wiki/Notify_Discord). **Examples:** Load a saved Discord webhook and send a message: ```python theme={null} from prefect.blocks.notifications import DiscordWebhook discord_webhook_block = DiscordWebhook.load("BLOCK_NAME") discord_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` ### `CustomWebhookNotificationBlock` Enables sending notifications via any custom webhook. Any `{{key}}` placeholder in a nested string parameter will be substituted with the corresponding value from the context or secrets. Context values include `subject`, `body`, and `name`. **Examples:** Load a saved custom webhook and send a message: ```python theme={null} from prefect.blocks.notifications import CustomWebhookNotificationBlock custom_webhook_block = CustomWebhookNotificationBlock.load("BLOCK_NAME") custom_webhook_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `logger` ```python theme={null} logger(self) -> LoggerOrAdapter ``` Returns a logger based on whether the NotificationBlock is called from within a flow or task run context. If a run context is present, the logger property returns a run logger. Else, it returns a default logger labeled with the class's name. **Returns:** * The run logger or a default logger with the class's name.
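The `{{key}}` substitution described for `CustomWebhookNotificationBlock` can be illustrated with a plain template helper (a sketch of the documented placeholder behavior, not the block's implementation):

```python theme={null}
import re


def substitute(template: str, context: dict[str, str]) -> str:
    # Replace each {{key}} placeholder with its value from the context,
    # as described for CustomWebhookNotificationBlock (subject, body, name).
    return re.sub(r"\{\{(\w+)\}\}", lambda m: context[m.group(1)], template)


payload = '{"text": "{{subject}}: {{body}}"}'
print(substitute(payload, {"subject": "Alert", "body": "Hello from Prefect!"}))
# {"text": "Alert: Hello from Prefect!"}
```

In the real block, the substitution is applied recursively to every nested string parameter of the webhook configuration, with values drawn from the notification context and any configured secrets.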
#### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` Send a notification. **Args:** * `body`: The body of the notification. * `subject`: The subject of the notification. #### `raise_on_failure` ```python theme={null} raise_on_failure(self) -> Generator[None, None, None] ``` Context manager that, while active, causes the block to raise errors if it encounters a failure sending notifications. ### `SendgridEmail` Enables sending notifications via any SendGrid account. See [Apprise Notify\_sendgrid docs](https://github.com/caronc/apprise/wiki/Notify_Sendgrid). **Examples:** Load a saved `SendgridEmail` block and send an email message: ```python theme={null} from prefect.blocks.notifications import SendgridEmail sendgrid_block = SendgridEmail.load("BLOCK_NAME") sendgrid_block.notify("Hello from Prefect!") ``` **Methods:** #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `anotify` ```python theme={null} anotify(self, body: str, subject: str | None = None) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` #### `notify` ```python theme={null} notify(self, body: str, subject: str | None = None) -> None ``` # redis Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-redis # `prefect.blocks.redis` ## Classes ### `RedisStorageContainer` Block used to interact with Redis as a filesystem. **Attributes:** * `host`: The Redis hostname. * `port`: The Redis port. * `db`: The Redis database index. * `username`: The Redis username. * `password`: The Redis password. * `connection_string`: A Redis connection string.
**Methods:** #### `aread_path` ```python theme={null} aread_path(self, path: Path | str) -> Optional[bytes] ``` Read the Redis content at `path`. **Args:** * `path`: Redis key to read from **Returns:** * Contents at key as bytes, or None if key does not exist #### `awrite_path` ```python theme={null} awrite_path(self, path: Path | str, content: bytes) -> bool ``` Write `content` to Redis at `path`. **Args:** * `path`: Redis key to write to * `content`: Binary object to write **Returns:** * True if the key was set successfully #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `from_connection_string` ```python theme={null} from_connection_string(cls, connection_string: str | SecretStr) -> Self ``` Create a block from a Redis connection string. Supports the following URL schemes: * `redis://` creates a TCP socket connection * `rediss://` creates an SSL-wrapped TCP socket connection * `unix://` creates a Unix domain socket connection See the [Redis docs](https://redis.readthedocs.io/en/stable/examples/connection_examples.html#Connecting-to-Redis-instances-by-specifying-a-URL-scheme) for more info.
**Args:** * `connection_string`: Redis connection string **Returns:** * `RedisStorageContainer` instance #### `from_host` ```python theme={null} from_host(cls, host: str, port: int = 6379, db: int = 0, username: None | str | SecretStr = None, password: None | str | SecretStr = None) -> Self ``` Create a block from a hostname, username, and password. **Args:** * `host`: Redis hostname * `username`: Redis username * `password`: Redis password * `port`: Redis port **Returns:** * `RedisStorageContainer` instance #### `read_path` ```python theme={null} read_path(self, path: Path | str) -> Optional[bytes] ``` Read the Redis content at `path`. **Args:** * `path`: Redis key to read from **Returns:** * Contents at key as bytes, or None if key does not exist #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `write_path` ```python theme={null} write_path(self, path: Path | str, content: bytes) -> bool ``` Write `content` to Redis at `path`. **Args:** * `path`: Redis key to write to * `content`: Binary object to write **Returns:** * True if the key was set successfully #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` # system Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-system # `prefect.blocks.system` ## Classes ### `Secret` A block that represents a secret value. The value stored in this block will be obfuscated when this block is viewed or edited in the UI. **Attributes:** * `value`: A value that should be kept secret. **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected.
#### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error.
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

#### `annotation_refers_to_block_class`

```python theme={null}
annotation_refers_to_block_class(annotation: Any) -> bool
```

#### `aregister_type_and_schema`

```python theme={null}
aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None
```

Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

**Args:**

* `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

#### `asave`

```python theme={null}
asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID
```

Asynchronously saves the values of a block as a block document.

**Args:**

* `name`: User specified name to give saved block document which can later be used to load the block document.
* `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists.
* `client`: The client to use to save the block document.
If not provided, the default client will be injected.

**Returns:**

* The ID of the saved block document.

#### `block_initialization`

```python theme={null}
block_initialization(self) -> None
```

#### `delete`

```python theme={null}
delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None
```

Deletes the block document with the specified name.

This function will dispatch to `adelete` when called from an async context.

**Args:**

* `name`: The name of the block document to delete.
* `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

#### `get`

```python theme={null}
get(self) -> T | str
```

#### `get_block_capabilities`

```python theme={null}
get_block_capabilities(cls) -> FrozenSet[str]
```

Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.

#### `get_block_class_from_key`

```python theme={null}
get_block_class_from_key(cls: type[Self], key: str) -> type[Self]
```

Retrieve the block class implementation given a key.

#### `get_block_class_from_schema`

```python theme={null}
get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self]
```

Retrieve the block class implementation given a schema.

#### `get_block_placeholder`

```python theme={null}
get_block_placeholder(self) -> str
```

Returns the block placeholder for the current block which can be used for templating.

**Returns:**

* The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}`

**Raises:**

* `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`.
#### `get_block_schema_version`

```python theme={null}
get_block_schema_version(cls) -> str
```

#### `get_block_type_name`

```python theme={null}
get_block_type_name(cls) -> str
```

#### `get_block_type_slug`

```python theme={null}
get_block_type_slug(cls) -> str
```

#### `get_code_example`

```python theme={null}
get_code_example(cls) -> Optional[str]
```

Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

#### `get_description`

```python theme={null}
get_description(cls) -> Optional[str]
```

Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.

#### `is_block_class`

```python theme={null}
is_block_class(block: Any) -> TypeGuard[type['Block']]
```

#### `load`

```python theme={null}
load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self'
```

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema.
This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Custom.load("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Block.load("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = Custom.load("my-custom-message", validate=False)

# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `load_from_ref`

```python theme={null}
load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

This function will dispatch to `aload_from_ref` when called from an async context.

The provided reference can be a block document ID or reference data in dictionary format.
Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

#### `model_dump`

```python theme={null}
model_dump(self) -> dict[str, Any]
```

#### `model_json_schema`

```python theme={null}
model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any]
```

TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?
#### `model_validate`

```python theme={null}
model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self
```

#### `register_type_and_schema`

```python theme={null}
register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None
```

Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

This function will dispatch to `aregister_type_and_schema` when called from an async context.

**Args:**

* `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context.

#### `save`

```python theme={null}
save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID
```

Saves the values of a block as a block document.

This function will dispatch to `asave` when called from an async context.

**Args:**

* `name`: User specified name to give saved block document which can later be used to load the block document.
* `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists.
* `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

**Returns:**

* The ID of the saved block document.

#### `ser_model`

```python theme={null}
ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any
```

#### `validate_block_type_slug`

```python theme={null}
validate_block_type_slug(cls, values: Any) -> Any
```

Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks.
#### `validate_value`

```python theme={null}
validate_value(cls, value: Union[T, SecretStr, PydanticSecret[T]]) -> Union[SecretStr, PydanticSecret[T]]
```

# webhook

Source: https://docs.prefect.io/v3/api-ref/python/prefect-blocks-webhook

# `prefect.blocks.webhook`

## Classes

### `Webhook`

Block that enables calling webhooks.

**Methods:**

#### `adelete`

```python theme={null}
adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None
```

Asynchronously deletes the block document with the specified name.

**Args:**

* `name`: The name of the block document to delete.
* `client`: The client to use to delete the block document. If not provided, the default client will be injected.

#### `aload`

```python theme={null}
aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self'
```

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = await Custom.aload("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = await Block.aload("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = await Custom.aload("my-custom-message", validate=False)

# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `aload_from_ref`

```python theme={null}
aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self
```

Asynchronously retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised.
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

#### `annotation_refers_to_block_class`

```python theme={null}
annotation_refers_to_block_class(annotation: Any) -> bool
```

#### `aregister_type_and_schema`

```python theme={null}
aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None
```

Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

**Args:**

* `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.

#### `asave`

```python theme={null}
asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID
```

Asynchronously saves the values of a block as a block document.
**Args:**

* `name`: User specified name to give saved block document which can later be used to load the block document.
* `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists.
* `client`: The client to use to save the block document. If not provided, the default client will be injected.

**Returns:**

* The ID of the saved block document.

#### `block_initialization`

```python theme={null}
block_initialization(self) -> None
```

#### `call`

```python theme={null}
call(self, payload: dict[str, Any] | str | None = None) -> Response
```

Call the webhook.

**Args:**

* `payload`: an optional payload to send when calling the webhook.

#### `delete`

```python theme={null}
delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None
```

Deletes the block document with the specified name.

This function will dispatch to `adelete` when called from an async context.

**Args:**

* `name`: The name of the block document to delete.
* `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

#### `get_block_capabilities`

```python theme={null}
get_block_capabilities(cls) -> FrozenSet[str]
```

Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset.

#### `get_block_class_from_key`

```python theme={null}
get_block_class_from_key(cls: type[Self], key: str) -> type[Self]
```

Retrieve the block class implementation given a key.

#### `get_block_class_from_schema`

```python theme={null}
get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self]
```

Retrieve the block class implementation given a schema.
#### `get_block_placeholder`

```python theme={null}
get_block_placeholder(self) -> str
```

Returns the block placeholder for the current block which can be used for templating.

**Returns:**

* The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}`

**Raises:**

* `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`.

#### `get_block_schema_version`

```python theme={null}
get_block_schema_version(cls) -> str
```

#### `get_block_type_name`

```python theme={null}
get_block_type_name(cls) -> str
```

#### `get_block_type_slug`

```python theme={null}
get_block_type_slug(cls) -> str
```

#### `get_code_example`

```python theme={null}
get_code_example(cls) -> Optional[str]
```

Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided.

#### `get_description`

```python theme={null}
get_description(cls) -> Optional[str]
```

Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined.

#### `is_block_class`

```python theme={null}
is_block_class(block: Any) -> TypeGuard[type['Block']]
```

#### `load`

```python theme={null}
load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self'
```

Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>`.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected.

**Raises:**

* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.

**Examples:**

Load from a Block subclass with a block document name:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Custom.load("my-custom-message")
```

Load from Block with a block document slug:

```python theme={null}
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

loaded_block = Block.load("custom/my-custom-message")
```

Migrate a block document to a new schema:

```python theme={null}
# original class
class Custom(Block):
    message: str

Custom(message="Hello!").save("my-custom-message")

# Updated class with new required field
class Custom(Block):
    message: str
    number_of_ducks: int

loaded_block = Custom.load("my-custom-message", validate=False)

# Prints UserWarning about schema mismatch

loaded_block.number_of_ducks = 42

loaded_block.save("my-custom-message", overwrite=True)
```

#### `load_from_ref`

```python theme={null}
load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client:
'PrefectClient | None' = None) -> Self
```

Retrieves data from the block document by the given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.

This function will dispatch to `aload_from_ref` when called from an async context.

The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are:

* `{"block_document_id": <block_document_id>}`
* `{"block_document_slug": <block_document_slug>}`

If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected.

**Args:**

* `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats.
* `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved.
* `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

**Raises:**

* `ValueError`: If an invalid reference format is provided.
* `ValueError`: If the requested block document is not found.

**Returns:**

* An instance of the current class hydrated with the data stored in the block document with the specified name.
#### `model_dump`

```python theme={null}
model_dump(self) -> dict[str, Any]
```

#### `model_json_schema`

```python theme={null}
model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any]
```

TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?

#### `model_validate`

```python theme={null}
model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self
```

#### `register_type_and_schema`

```python theme={null}
register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None
```

Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.

This function will dispatch to `aregister_type_and_schema` when called from an async context.

**Args:**

* `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context.

#### `save`

```python theme={null}
save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID
```

Saves the values of a block as a block document.

This function will dispatch to `asave` when called from an async context.

**Args:**

* `name`: User specified name to give saved block document which can later be used to load the block document.
* `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists.
* `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context.

**Returns:**

* The ID of the saved block document.
#### `ser_model`

```python theme={null}
ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any
```

#### `validate_block_type_slug`

```python theme={null}
validate_block_type_slug(cls, values: Any) -> Any
```

Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks.

# cache_policies

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cache_policies

# `prefect.cache_policies`

## Classes

### `CachePolicy`

Base class for all cache policies.

**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Generates a key policy from the given function.
### `CacheKeyFnPolicy`

This policy accepts a custom function with signature `f(task_run_context, task_parameters, flow_parameters) -> str` and uses it to compute a task run cache key.

**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Generates a key policy from the given function.

### `CompoundCachePolicy`

This policy is constructed from two or more other cache policies and works by computing the keys for each policy individually, and then hashing a sorted tuple of all computed keys.

Any keys that return `None` will be ignored.
**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Generates a key policy from the given function.

### `TaskSource`

Policy for computing a cache key based on the source code of the task.

This policy only considers raw lines of code in the task, and not the source code of nested tasks.
**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: Optional[dict[str, Any]], flow_parameters: Optional[dict[str, Any]], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Given a function, generates a key policy.

### `FlowParameters`

Policy that computes the cache key based on a hash of the flow parameters.
**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Given a function, generates a key policy.

### `RunId`

Returns the prevailing flow run ID or, if none is found, the prevailing task run ID.
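The fallback described above is a simple preference order, sketched here with hypothetical arguments (Prefect reads these IDs from the prevailing run context):

```python
from typing import Optional

def run_id_key(flow_run_id: Optional[str], task_run_id: Optional[str]) -> Optional[str]:
    """Prefer the prevailing flow run ID; fall back to the task run ID."""
    return flow_run_id if flow_run_id is not None else task_run_id
```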
**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Given a function, generates a key policy.

### `Inputs`

Policy that computes a cache key based on a hash of the runtime inputs provided to the task.
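Conceptually, an inputs-based key is a stable hash over the task's bound arguments. A simplified sketch (the real policy also copes with unhashable values; the `exclude` parameter here is an illustrative addition, not part of this signature):

```python
import hashlib
from typing import Optional

def inputs_key(inputs: dict, exclude: tuple = ()) -> Optional[str]:
    """Hash a deterministic representation of the task's runtime inputs.
    Sorting by argument name makes the key independent of call order."""
    kept = {k: v for k, v in sorted(inputs.items()) if k not in exclude}
    if not kept:
        return None
    return hashlib.sha256(repr(kept).encode()).hexdigest()
```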
**Methods:**

#### `compute_key`

```python theme={null}
compute_key(self, task_ctx: TaskRunContext, inputs: dict[str, Any], flow_parameters: dict[str, Any], **kwargs: Any) -> Optional[str]
```

#### `configure`

```python theme={null}
configure(self, key_storage: Union['WritableFileSystem', str, Path, None] = None, lock_manager: Optional['LockManager'] = None, isolation_level: Union[Literal['READ_COMMITTED', 'SERIALIZABLE'], 'IsolationLevel', None] = None) -> Self
```

Configure the cache policy with the given key storage, lock manager, and isolation level.

**Args:**

* `key_storage`: The storage to use for cache keys. If not provided, the current key storage will be used.
* `lock_manager`: The lock manager to use for the cache policy. If not provided, the current lock manager will be used.
* `isolation_level`: The isolation level to use for the cache policy. If not provided, the current isolation level will be used.

**Returns:**

* A new cache policy with the given key storage, lock manager, and isolation level.

#### `from_cache_key_fn`

```python theme={null}
from_cache_key_fn(cls, cache_key_fn: Callable[['TaskRunContext', Dict[str, Any]], Optional[str]]) -> 'CacheKeyFnPolicy'
```

Given a function, generates a key policy.

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-__init__

# `prefect.cli`

*This module is empty or contains only private/internal implementations.*

# api

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-api

# `prefect.cli.api`

API command — native cyclopts implementation.

Make direct requests to the Prefect API.

## Functions

### `api_request`

```python theme={null}
api_request(method: str, path: str)
```

Make a direct request to the Prefect API.
# artifact

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-artifact

# `prefect.cli.artifact`

Artifact command — native cyclopts implementation.

Inspect and delete artifacts.

## Functions

### `list_artifacts`

```python theme={null}
list_artifacts()
```

List artifacts.

### `inspect`

```python theme={null}
inspect(key: str)
```

View details about an artifact.

### `delete`

```python theme={null}
delete(key: Annotated[Optional[str], cyclopts.Parameter(help='The key of the artifact to delete.')] = None)
```

Delete an artifact.

# automation

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-automation

# `prefect.cli.automation`

Automation command — native cyclopts implementation.

Manage automations.

## Functions

### `ls`

```python theme={null}
ls()
```

List all automations.

### `inspect`

```python theme={null}
inspect(name: Annotated[Optional[str], cyclopts.Parameter(show=False)] = None)
```

Inspect an automation.

### `resume`

```python theme={null}
resume(name: Annotated[Optional[str], cyclopts.Parameter(show=False)] = None)
```

Resume an automation.

### `pause`

```python theme={null}
pause(name: Annotated[Optional[str], cyclopts.Parameter(show=False)] = None)
```

Pause an automation.

### `delete`

```python theme={null}
delete(name: Annotated[Optional[str], cyclopts.Parameter(show=False)] = None)
```

Delete an automation.

### `create`

```python theme={null}
create()
```

Create one or more automations from a file or JSON string.

### `update`

```python theme={null}
update()
```

Update an existing automation from a file or JSON string.

# block

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-block

# `prefect.cli.block`

Block command — native cyclopts implementation.

Manage blocks and block types.

## Functions

### `register`

```python theme={null}
register()
```

Register block types within a module or file. This makes the blocks available for configuration via the UI.
If a block type has already been registered, its registration will be updated to match the block's current definition.

### `block_ls`

```python theme={null}
block_ls()
```

View all configured blocks.

### `block_delete`

```python theme={null}
block_delete(slug: Annotated[Optional[str], cyclopts.Parameter(help="A block slug. Formatted as '<BLOCK_TYPE_SLUG>/<BLOCK_NAME>'")] = None)
```

Delete a configured block.

### `block_create`

```python theme={null}
block_create(block_type_slug: Annotated[str, cyclopts.Parameter(help='A block type slug. View available types with: prefect block type ls', show_default=False)])
```

Generate a link to the Prefect UI to create a block.

### `block_inspect`

```python theme={null}
block_inspect(slug: Annotated[Optional[str], cyclopts.Parameter(help='A block slug: <BLOCK_TYPE_SLUG>/<BLOCK_NAME>')] = None)
```

Display details about a configured block.

### `list_types`

```python theme={null}
list_types()
```

List all block types.

### `blocktype_inspect`

```python theme={null}
blocktype_inspect(slug: Annotated[str, cyclopts.Parameter(help='A block type slug')])
```

Display details about a block type.

### `blocktype_delete`

```python theme={null}
blocktype_delete(slug: Annotated[str, cyclopts.Parameter(help='A block type slug')])
```

Delete an unprotected block type.

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-cloud-__init__

# `prefect.cli.cloud`

Cloud command — authenticate and interact with Prefect Cloud.

## Functions

### `login`

```python theme={null}
login()
```

Log in to Prefect Cloud. Creates a new profile configured to use the specified PREFECT\_API\_KEY. Uses a previously configured profile if it exists.

### `logout`

```python theme={null}
logout()
```

Log out of the current workspace. Resets PREFECT\_API\_KEY and PREFECT\_API\_URL to their defaults.

### `workspace_ls`

```python theme={null}
workspace_ls()
```

List available workspaces.

### `workspace_set`

```python theme={null}
workspace_set()
```

Set the current workspace. Shows a workspace picker if no workspace is specified.
# asset Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-cloud-asset # `prefect.cli.cloud.asset` Manage Prefect Cloud assets. ## Functions ### `asset_ls` ```python theme={null} asset_ls() ``` List assets in the current workspace. ### `asset_delete` ```python theme={null} asset_delete(key: Annotated[str, cyclopts.Parameter(help='The key of the asset to delete')]) ``` Delete an asset by its key. The key should be the full asset URI (e.g., 's3://bucket/data.csv'). # ip_allowlist Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-cloud-ip_allowlist # `prefect.cli.cloud.ip_allowlist` Manage Prefect Cloud IP Allowlists. ## Functions ### `ip_allowlist_enable` ```python theme={null} ip_allowlist_enable() ``` Enable the IP allowlist for your account. When enabled, if the allowlist is non-empty, then access to your Prefect Cloud account will be restricted to only those IP addresses on the allowlist. ### `ip_allowlist_disable` ```python theme={null} ip_allowlist_disable() ``` Disable the IP allowlist for your account. When disabled, all IP addresses will be allowed to access your Prefect Cloud account. ### `ip_allowlist_ls` ```python theme={null} ip_allowlist_ls() ``` Fetch and list all IP allowlist entries in your account. ### `ip_allowlist_add` ```python theme={null} ip_allowlist_add(ip_address_or_range: Annotated[str, cyclopts.Parameter(help='An IP address or range in CIDR notation. E.g. 192.168.1.0 or 192.168.1.0/24')]) ``` Add a new IP entry to your account IP allowlist. ### `ip_allowlist_remove` ```python theme={null} ip_allowlist_remove(ip_address_or_range: Annotated[str, cyclopts.Parameter(help='An IP address or range in CIDR notation. E.g. 192.168.1.0 or 192.168.1.0/24')]) ``` Remove an IP entry from your account IP allowlist. ### `ip_allowlist_toggle` ```python theme={null} ip_allowlist_toggle(ip_address_or_range: Annotated[str, cyclopts.Parameter(help='An IP address or range in CIDR notation. E.g. 
192.168.1.0 or 192.168.1.0/24')]
```

Toggle the enabled status of an individual IP entry in your account IP allowlist.

# webhook

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-cloud-webhook

# `prefect.cli.cloud.webhook`

Manage Prefect Cloud Webhooks.

## Functions

### `webhook_ls`

```python theme={null}
webhook_ls()
```

Fetch and list all webhooks in your workspace.

### `webhook_get`

```python theme={null}
webhook_get(webhook_id: Annotated[UUID, cyclopts.Parameter(help='The webhook ID to retrieve.')])
```

Retrieve a webhook by ID.

### `webhook_create`

```python theme={null}
webhook_create(webhook_name: Annotated[str, cyclopts.Parameter(help='The name of the webhook.')])
```

Create a new Cloud webhook.

### `webhook_rotate`

```python theme={null}
webhook_rotate(webhook_id: Annotated[UUID, cyclopts.Parameter(help='The webhook ID to rotate.')])
```

Rotate the URL for an existing Cloud webhook, in case it has been compromised.

### `webhook_toggle`

```python theme={null}
webhook_toggle(webhook_id: Annotated[UUID, cyclopts.Parameter(help='The webhook ID to toggle.')])
```

Toggle the enabled status of an existing Cloud webhook.

### `webhook_update`

```python theme={null}
webhook_update(webhook_id: Annotated[UUID, cyclopts.Parameter(help='The webhook ID to update.')])
```

Partially update an existing Cloud webhook.

### `webhook_delete`

```python theme={null}
webhook_delete(webhook_id: Annotated[UUID, cyclopts.Parameter(help='The webhook ID to delete.')])
```

Delete an existing Cloud webhook.

# concurrency_limit

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-concurrency_limit

# `prefect.cli.concurrency_limit`

Concurrency-limit command — native cyclopts implementation.

Manage task-level concurrency limits.

## Functions

### `create`

```python theme={null}
create(tag: str, concurrency_limit: int)
```

Create a concurrency limit against a tag.

### `inspect`

```python theme={null}
inspect(tag: str)
```

View details about a concurrency limit.
### `ls` ```python theme={null} ls() ``` View all concurrency limits. ### `reset` ```python theme={null} reset(tag: str) ``` Resets the concurrency limit slots set on the specified tag. ### `delete` ```python theme={null} delete(tag: str) ``` Delete the concurrency limit set on the specified tag. # config Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-config # `prefect.cli.config` Config command — native cyclopts implementation. Manages Prefect settings and profiles. ## Functions ### `set_` ```python theme={null} set_(settings: Annotated[list[str], cyclopts.Parameter(help='Settings in VAR=VAL format')]) -> None ``` Change the value for a setting by setting the value in the current profile. ### `validate` ```python theme={null} validate() -> None ``` Read and validate the current profile. Deprecated settings will be automatically converted to new names. ### `unset` ```python theme={null} unset(setting_names: Annotated[list[str], cyclopts.Parameter(help='Setting names to unset')]) -> None ``` Restore the default value for a setting. Removes the setting from the current profile. ### `view` ```python theme={null} view() -> None ``` Display the current settings. # dashboard Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-dashboard # `prefect.cli.dashboard` Dashboard command — native cyclopts implementation. Open the Prefect UI in the browser. ## Functions ### `open_dashboard` ```python theme={null} open_dashboard() ``` Open the Prefect UI in the browser. # deployment Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-deployment # `prefect.cli.deployment` Deployment command — native cyclopts implementation. Manage deployments and deployment schedules. ## Functions ### `inspect` ```python theme={null} inspect(name: str) ``` View details about a deployment. 
### `ls`

```python theme={null}
ls(flow_name: Annotated[Optional[list[str]], cyclopts.Parameter('flow_name', help='One or more flow names to filter deployments by.')] = None)
```

View all deployments or deployments for specific flows.

### `run`

```python theme={null}
run(name: Annotated[Optional[str], cyclopts.Parameter('name', help="A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>")] = None)
```

Create a flow run for the given flow and deployment.

The flow run will be scheduled to run immediately unless `--start-in` or `--start-at` is specified. The flow run will not execute until a worker starts.

To watch the flow run until it reaches a terminal state, use the `--watch` flag.

### `delete`

```python theme={null}
delete(name: Annotated[Optional[str], cyclopts.Parameter('name', help="A deployed flow's name: <FLOW_NAME>/<DEPLOYMENT_NAME>")] = None)
```

Delete a deployment.

### `create_schedule`

```python theme={null}
create_schedule(name: str)
```

Create a schedule for a given deployment.

### `delete_schedule`

```python theme={null}
delete_schedule(deployment_name: str, schedule_id: UUID)
```

Delete a deployment schedule.

### `pause_schedule`

```python theme={null}
pause_schedule(deployment_name: Annotated[Optional[str], cyclopts.Parameter('deployment_name')] = None, schedule_id: Annotated[Optional[UUID], cyclopts.Parameter('schedule_id')] = None)
```

Pause deployment schedules.

### `resume_schedule`

```python theme={null}
resume_schedule(deployment_name: Annotated[Optional[str], cyclopts.Parameter('deployment_name')] = None, schedule_id: Annotated[Optional[UUID], cyclopts.Parameter('schedule_id')] = None)
```

Resume deployment schedules.

### `list_schedules`

```python theme={null}
list_schedules(deployment_name: str)
```

View all schedules for a deployment.

### `clear_schedules`

```python theme={null}
clear_schedules(deployment_name: str)
```

Clear all schedules for a deployment.

# dev

Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-dev

# `prefect.cli.dev`

Dev command — native cyclopts implementation.
Internal Prefect development commands. ## Functions ### `build_docs` ```python theme={null} build_docs(schema_path: Optional[str] = None) ``` Builds REST API reference documentation for static display. ### `build_ui` ```python theme={null} build_ui(no_install: bool = False) ``` Installs dependencies and builds UI locally. Requires npm. ### `ui` ```python theme={null} ui() ``` Starts a hot-reloading development UI. ### `api` ```python theme={null} api() ``` Starts a hot-reloading development API. ### `start` ```python theme={null} start() ``` Starts a hot-reloading development server with API, UI, and agent processes. ### `build_image` ```python theme={null} build_image() ``` Build a docker image for development. ### `container` ```python theme={null} container() ``` Run a docker container with local code mounted and installed. # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-events # `prefect.cli.events` Events command — native cyclopts implementation. Stream and emit events. ## Functions ### `stream` ```python theme={null} stream() ``` Subscribe to the event stream, printing each event as it is received. ### `emit` ```python theme={null} emit(event: str) ``` Emit a single event to Prefect. # experimental Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-experimental # `prefect.cli.experimental` Experimental command — native cyclopts implementation. Access experimental features (subject to change). ## Functions ### `diagnose` ```python theme={null} diagnose() ``` Diagnose the experimental plugin system. # flow Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-flow # `prefect.cli.flow` Flow command — native cyclopts implementation. View and serve flows. ## Functions ### `ls` ```python theme={null} ls() ``` View flows. ### `serve` ```python theme={null} serve(entrypoint: str) ``` Serve a flow via an entrypoint. 
# flow_run Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-flow_run # `prefect.cli.flow_run` Flow run command — native cyclopts implementation. Interact with flow runs. ## Functions ### `inspect` ```python theme={null} inspect(id: UUID) ``` View details about a flow run. ### `ls` ```python theme={null} ls() ``` View recent flow runs or flow runs for specific flows. ### `delete` ```python theme={null} delete(id: UUID) ``` Delete a flow run by ID. ### `cancel` ```python theme={null} cancel(id: UUID) ``` Cancel a flow run by ID. ### `retry` ```python theme={null} retry(id_or_name: str) ``` Retry a failed or completed flow run. ### `logs` ```python theme={null} logs(id: UUID) ``` View logs for a flow run. ### `watch` ```python theme={null} watch(id: UUID) ``` Watch a flow run until it reaches a terminal state. ### `execute` ```python theme={null} execute(id: Optional[UUID] = None) ``` Execute a flow run by ID. # flow_runs_watching Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-flow_runs_watching # `prefect.cli.flow_runs_watching` Utilities for following flow runs with interleaved events and logs ## Functions ### `watch_flow_run` ```python theme={null} watch_flow_run(flow_run_id: UUID, console: Console, timeout: int | None = None) -> FlowRun ``` Watch a flow run, displaying interleaved events and logs until completion. **Args:** * `flow_run_id`: The ID of the flow run to watch * `console`: Rich console for output * `timeout`: Maximum time to wait for flow run completion in seconds. If None, waits indefinitely. 
**Returns:** * The finished flow run **Raises:** * `FlowRunWaitTimeout`: If the flow run exceeds the timeout ## Classes ### `FlowRunFormatter` Handles formatting of logs and events for CLI display **Methods:** #### `format` ```python theme={null} format(self, item: Log | Event) -> str ``` Format a log or event for display #### `format_event` ```python theme={null} format_event(self, event: Event) -> str ``` Format an event #### `format_log` ```python theme={null} format_log(self, log: Log) -> str ``` Format a log entry #### `format_run_id` ```python theme={null} format_run_id(self, run_id_short: str) -> str ``` Format run ID #### `format_timestamp` ```python theme={null} format_timestamp(self, dt: datetime) -> str ``` Format timestamp with incremental display # global_concurrency_limit Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-global_concurrency_limit # `prefect.cli.global_concurrency_limit` Global-concurrency-limit command — native cyclopts implementation. Manage global concurrency limits. ## Functions ### `ls` ```python theme={null} ls() ``` List all global concurrency limits. ### `inspect` ```python theme={null} inspect(name: str) ``` Inspect a global concurrency limit. ### `delete` ```python theme={null} delete(name: str) ``` Delete a global concurrency limit. ### `enable` ```python theme={null} enable(name: str) ``` Enable a global concurrency limit. ### `disable` ```python theme={null} disable(name: str) ``` Disable a global concurrency limit. ### `update` ```python theme={null} update(name: str) ``` Update a global concurrency limit. ### `create` ```python theme={null} create(name: str) ``` Create a global concurrency limit. # profile Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-profile # `prefect.cli.profile` Profile command — native cyclopts implementation. Manages Prefect profiles. ## Functions ### `ls` ```python theme={null} ls() ``` List profile names. 
### `create` ```python theme={null} create(name: str) ``` Create a new profile. ### `use` ```python theme={null} use(name: str) ``` Set the given profile to active. ### `delete` ```python theme={null} delete(name: str) ``` Delete the given profile. ### `rename` ```python theme={null} rename(name: str, new_name: str) ``` Change the name of a profile. ### `inspect` ```python theme={null} inspect(name: Annotated[Optional[str], cyclopts.Parameter(help='Name of profile to inspect; defaults to active.')] = None) ``` Display settings from a given profile; defaults to active. ### `populate_defaults` ```python theme={null} populate_defaults() ``` Populate the profiles configuration with default base profiles, preserving existing user profiles. ### `check_server_connection` ```python theme={null} check_server_connection() -> ConnectionStatus ``` ## Classes ### `ConnectionStatus` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # sdk Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-sdk # `prefect.cli.sdk` SDK command — native cyclopts implementation. Generate typed Python SDK from workspace deployments. ## Functions ### `generate` ```python theme={null} generate() -> None ``` (beta) Generate a typed Python SDK from workspace deployments. # server Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-server # `prefect.cli.server` Server command — native cyclopts implementation. Start and manage the Prefect server. ## Functions ### `start` ```python theme={null} start() ``` Start a Prefect server instance. ### `status` ```python theme={null} status() ``` Check the status of the Prefect server. ### `stop` ```python theme={null} stop() ``` Stop a Prefect server instance running in the background. ### `reset` ```python theme={null} reset() ``` Drop and recreate all Prefect database tables. ### `upgrade` ```python theme={null} upgrade() ``` Upgrade the Prefect database. 
### `downgrade` ```python theme={null} downgrade() ``` Downgrade the Prefect database. ### `revision` ```python theme={null} revision() ``` Create a new migration for the Prefect database. ### `stamp` ```python theme={null} stamp(revision: str) ``` Stamp the revision table with the given revision; don't run any migrations. ### `list_services` ```python theme={null} list_services() ``` List all available services and their status. ### `start_services` ```python theme={null} start_services() ``` Start all enabled Prefect services in one process. ### `stop_services` ```python theme={null} stop_services() ``` Stop any background Prefect services that were started. ### `run_manager_process` ```python theme={null} run_manager_process() ``` Internal entrypoint for background services. # shell Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-shell # `prefect.cli.shell` Shell command — native cyclopts implementation. Run shell commands as Prefect flows. ## Functions ### `output_stream` ```python theme={null} output_stream(pipe: IO[str], logger_function: Callable[[str], None]) -> None ``` Read from a pipe line by line and log using the provided logging function. ### `output_collect` ```python theme={null} output_collect(pipe: IO[str], container: list[str]) -> None ``` Collect output from a subprocess pipe and store it in a container list. ### `run_shell_process` ```python theme={null} run_shell_process(command: str, log_output: bool = True, stream_stdout: bool = False, log_stderr: bool = False, popen_kwargs: Optional[dict[str, Any]] = None) ``` Execute a shell command and log its output. Designed for use within Prefect flows to run shell commands as part of task execution. **Args:** * `command`: The shell command to execute. * `log_output`: If True, log stdout/stderr to Prefect logs. * `stream_stdout`: If True, stream stdout to Prefect logs. * `log_stderr`: If True, log stderr to Prefect logs. * `popen_kwargs`: Additional keyword arguments for subprocess.Popen. 
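The `output_stream` and `output_collect` helpers above share one pattern: read a subprocess pipe line by line on a background thread and hand each line to a callback or container. A stdlib-only sketch of that pattern (not the Prefect source):

```python
import subprocess
import threading
from typing import IO, Callable

def pump(pipe: IO[str], sink: Callable[[str], None]) -> None:
    """Forward each line from a pipe to a sink, then close the pipe."""
    with pipe:
        for line in pipe:
            sink(line.rstrip("\n"))

def run(command: str) -> tuple:
    """Run a shell command, collecting stdout lines on a background thread
    so reading the pipe never blocks waiting on the child process."""
    lines: list = []
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, text=True)
    reader = threading.Thread(target=pump, args=(proc.stdout, lines.append))
    reader.start()
    code = proc.wait()
    reader.join()
    return code, lines
```

Passing `lines.append` as the sink collects output (like `output_collect`); passing a logger method instead would stream it (like `output_stream`).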
### `watch` ```python theme={null} watch(command: str) ``` Execute a shell command and observe it as a Prefect flow. ### `serve` ```python theme={null} serve(command: str) ``` Create and serve a deployment that runs a shell command. # task Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-task # `prefect.cli.task` Task command — native cyclopts implementation. Work with task scheduling. ## Functions ### `task_serve` ```python theme={null} task_serve(*args: Any, **kwargs: Any) -> Any ``` ### `serve` ```python theme={null} serve(entrypoints: Annotated[Optional[list[str]], cyclopts.Parameter(help='The paths to one or more tasks, in the form of `./path/to/file.py:task_func_name`.')] = None) ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. # task_run Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-task_run # `prefect.cli.task_run` Task run command — native cyclopts implementation. View and inspect task runs. ## Functions ### `inspect` ```python theme={null} inspect(id: UUID) ``` View details about a task run. ### `ls` ```python theme={null} ls() ``` View recent task runs. ### `logs` ```python theme={null} logs(id: UUID) ``` View logs for a task run. # variable Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-variable # `prefect.cli.variable` Variable command — native cyclopts implementation. Manage Prefect variables. ## Functions ### `list_variables` ```python theme={null} list_variables() ``` List variables. ### `inspect` ```python theme={null} inspect(name: str) ``` View details about a variable. ### `get` ```python theme={null} get(name: str) ``` Get a variable's value. ### `unset` ```python theme={null} unset(name: str) ``` Unset a variable. # version Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-version # `prefect.cli.version` Version command — native cyclopts implementation. Displays detailed version and integration information. 
## Functions ### `version` ```python theme={null} version() ``` Get the current Prefect version and integration information. # work_pool Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-work_pool # `prefect.cli.work_pool` Work pool command — native cyclopts implementation. Manage work pools. ## Functions ### `create` ```python theme={null} create(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool.')]) ``` Create a new work pool or update an existing one. ### `ls` ```python theme={null} ls() ``` List work pools. ### `inspect` ```python theme={null} inspect(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to inspect.')]) ``` Inspect a work pool. ### `slots` ```python theme={null} slots(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool.')]) ``` Show concurrency slot utilization for a work pool. ### `pause` ```python theme={null} pause(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to pause.')]) ``` Pause a work pool. ### `resume` ```python theme={null} resume(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to resume.')]) ``` Resume a work pool. ### `update` ```python theme={null} update(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to update.')]) ``` Update a work pool. ### `provision_infrastructure_cmd` ```python theme={null} provision_infrastructure_cmd(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to provision infrastructure for.')]) ``` Provision infrastructure for a work pool. ### `delete` ```python theme={null} delete(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to delete.')]) ``` Delete a work pool. 
### `set_concurrency_limit` ```python theme={null} set_concurrency_limit(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to update.')], concurrency_limit: Annotated[int, cyclopts.Parameter(help='The new concurrency limit for the work pool.')]) ``` Set the concurrency limit for a work pool. ### `clear_concurrency_limit` ```python theme={null} clear_concurrency_limit(name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to update.')]) ``` Clear the concurrency limit for a work pool. ### `get_default_base_job_template` ```python theme={null} get_default_base_job_template() ``` Get the default base job template for a given work pool type. ### `preview` ```python theme={null} preview(name: Annotated[Optional[str], cyclopts.Parameter(help='The name or ID of the work pool to preview')] = None) ``` Preview the work pool's scheduled work for all queues. ### `storage_inspect` ```python theme={null} storage_inspect(work_pool_name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to display storage configuration for.')]) ``` EXPERIMENTAL: Inspect the storage configuration for a work pool. ### `storage_configure_s3` ```python theme={null} storage_configure_s3(work_pool_name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to configure storage for.')]) ``` EXPERIMENTAL: Configure AWS S3 storage for a work pool. ### `storage_configure_gcs` ```python theme={null} storage_configure_gcs(work_pool_name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to configure storage for.')]) ``` EXPERIMENTAL: Configure Google Cloud storage for a work pool. ### `storage_configure_azure_blob_storage` ```python theme={null} storage_configure_azure_blob_storage(work_pool_name: Annotated[str, cyclopts.Parameter(help='The name of the work pool to configure storage for.')]) ``` EXPERIMENTAL: Configure Azure Blob Storage for a work pool. 
# work_queue Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-work_queue # `prefect.cli.work_queue` Work queue command — native cyclopts implementation. Manage work queues. ## Functions ### `create` ```python theme={null} create(name: Annotated[str, cyclopts.Parameter(help='The unique name to assign this work queue')]) ``` Create a work queue. ### `set_concurrency_limit` ```python theme={null} set_concurrency_limit(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue')], limit: Annotated[int, cyclopts.Parameter(help='The concurrency limit to set on the queue.')]) ``` Set a concurrency limit on a work queue. ### `clear_concurrency_limit` ```python theme={null} clear_concurrency_limit(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to clear')]) ``` Clear any concurrency limits from a work queue. ### `pause` ```python theme={null} pause(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to pause')]) ``` Pause a work queue. ### `resume` ```python theme={null} resume(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to resume')]) ``` Resume a paused work queue. ### `inspect` ```python theme={null} inspect(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to inspect')]) ``` Inspect a work queue by ID. ### `slots` ```python theme={null} slots(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue.')]) ``` Show concurrency slot utilization for a work queue. ### `ls` ```python theme={null} ls() ``` View all work queues. ### `preview` ```python theme={null} preview(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to preview')]) ``` Preview a work queue. ### `delete` ```python theme={null} delete(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to delete')]) ``` Delete a work queue by ID. 
### `read_wq_runs` ```python theme={null} read_wq_runs(name: Annotated[str, cyclopts.Parameter(help='The name or ID of the work queue to poll')]) ``` Get runs in a work queue. Note that this will trigger an artificial poll of the work queue. # worker Source: https://docs.prefect.io/v3/api-ref/python/prefect-cli-worker # `prefect.cli.worker` Worker command — native cyclopts implementation. Start and interact with workers. ## Functions ### `start` ```python theme={null} start() ``` Start a worker process to poll a work pool for flow runs. ## Classes ### `InstallPolicy` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-__init__ # `prefect.client` Asynchronous client implementation for communicating with the [Prefect REST API](https://docs.prefect.io/v3/api-ref/rest-api/). Explore the client by communicating with an in-memory webserver - no setup required:
```
$ # start python REPL with native await functionality
$ python -m asyncio

from prefect.client.orchestration import get_client

async with get_client() as client:
    response = await client.hello()
    print(response.json())

👋
```
# attribution Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-attribution # `prefect.client.attribution` Attribution context for API requests. This module provides functions to gather attribution headers that identify the source of API requests (flow runs, deployments, workers) for usage tracking and rate limit debugging. ## Functions ### `get_attribution_headers` ```python theme={null} get_attribution_headers() -> dict[str, str] ``` Gather attribution headers from the current execution context. These headers help Cloud track which flow runs, deployments, and workers are generating API requests for usage attribution and rate limit debugging. Headers are only included when values are available. All headers are optional. **Returns:** * A dictionary of attribution headers to include in API requests. # base Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-base # `prefect.client.base` ## Functions ### `app_lifespan_context` ```python theme={null} app_lifespan_context(app: ASGIApp) -> AsyncGenerator[None, None] ``` A context manager that calls startup/shutdown hooks for the given application. Lifespan contexts are cached per application to avoid calling the lifespan hooks more than once if the context is entered in nested code. A no-op context will be returned if the context for the given application is already being managed. This manager is robust to concurrent access within the event loop. For example, if you have concurrent contexts for the same application, it is guaranteed that startup hooks will be called before their context starts and shutdown hooks will only be called after their context exits. A reference count is used to support nested use of clients without running lifespan hooks excessively. The first client context entered will create and enter a lifespan context. Each subsequent client will increment a reference count but will not create a new lifespan context. 
When each client context exits, the reference count is decremented. When the last client context exits, the lifespan will be closed. In simple nested cases, the first client context will be the one to exit the lifespan. However, if client contexts are entered concurrently they may not exit in a consistent order. If the first client context was responsible for closing the lifespan, it would have to wait for all other client contexts to exit to avoid firing shutdown hooks while the application is in use. Waiting for the other clients to exit can introduce deadlocks, so, instead, the first client will exit without closing the lifespan context and reference counts will be used to ensure the lifespan is closed once all of the clients are done. ### `determine_server_type` ```python theme={null} determine_server_type() -> ServerType ``` Determine the server type based on the current settings. **Returns:** * `ServerType.EPHEMERAL` if the ephemeral server is enabled * `ServerType.SERVER` if an API URL is configured and it is not a cloud URL * `ServerType.CLOUD` if an API URL is configured and it is a cloud URL * `ServerType.UNCONFIGURED` if no API URL is configured and ephemeral mode is not enabled ## Classes ### `ASGIApp` ### `PrefectResponse` A Prefect wrapper for the `httpx.Response` class. Provides more informative error messages. **Methods:** #### `from_httpx_response` ```python theme={null} from_httpx_response(cls: type[Self], response: httpx.Response) -> Response ``` Create a `PrefectResponse` from an `httpx.Response`. By changing the `__class__` attribute of the Response, we change the method resolution order to look for methods defined in PrefectResponse, while leaving everything else about the original Response instance intact. #### `raise_for_status` ```python theme={null} raise_for_status(self) -> Response ``` Raise an exception if the response contains an HTTPStatusError.
The `PrefectHTTPStatusError` contains useful additional information that is not contained in the `HTTPStatusError`. ### `PrefectHttpxAsyncClient` A Prefect wrapper for the async httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503). Additionally, this client will always call `raise_for_status` on responses. For more details on rate limit headers, see: [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI) **Methods:** #### `send` ```python theme={null} send(self, request: Request, *args: Any, **kwargs: Any) -> Response ``` Send a request with automatic retry behavior for the following status codes: * 403 Forbidden, if the request failed due to CSRF protection * 408 Request Timeout * 429 Cloudflare-style rate limiting * 502 Bad Gateway * 503 Service Unavailable * Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES` ### `PrefectHttpxSyncClient` A Prefect wrapper for the sync httpx client with support for retry-after headers for the provided status codes (typically 429, 502 and 503). Additionally, this client will always call `raise_for_status` on responses.
For more details on rate limit headers, see: [Configuring Cloudflare Rate Limiting](https://support.cloudflare.com/hc/en-us/articles/115001635128-Configuring-Rate-Limiting-from-UI) **Methods:** #### `send` ```python theme={null} send(self, request: Request, *args: Any, **kwargs: Any) -> Response ``` Send a request with automatic retry behavior for the following status codes: * 403 Forbidden, if the request failed due to CSRF protection * 408 Request Timeout * 429 Cloudflare-style rate limiting * 502 Bad Gateway * 503 Service Unavailable * Any additional status codes provided in `PREFECT_CLIENT_RETRY_EXTRA_CODES` ### `ServerType` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # cloud Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-cloud # `prefect.client.cloud` ## Functions ### `get_cloud_client` ```python theme={null} get_cloud_client(host: Optional[str] = None, api_key: Optional[str] = None, httpx_settings: Optional[dict[str, Any]] = None, infer_cloud_url: bool = False) -> 'CloudClient' ``` Create a `CloudClient` for communicating with the Prefect Cloud API, using the given host and API key or, when `infer_cloud_url` is set, inferring the Cloud URL from the current profile. ## Classes ### `CloudUnauthorizedError` Raised when the CloudClient receives a 401 or 403 from the Cloud API. ### `CloudClient` **Methods:** #### `account_base_url` ```python theme={null} account_base_url(self) -> str ``` #### `api_healthcheck` ```python theme={null} api_healthcheck(self) -> None ``` Attempts to connect to the Cloud API and raises the encountered exception if not successful. If successful, returns `None`.
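Note that `CloudClient.api_healthcheck` raises on failure, while `PrefectClient.api_healthcheck` returns the encountered exception instead of raising. A small adapter between the two conventions, as a generic sketch (the `exception_or_none` helper is hypothetical, not part of Prefect):

```python
from typing import Callable, Optional


def exception_or_none(healthcheck: Callable[[], None]) -> Optional[Exception]:
    # Wrap a raising healthcheck (CloudClient style) so it returns the
    # encountered exception instead of raising (PrefectClient style).
    try:
        healthcheck()
    except Exception as exc:
        return exc
    return None
```

With this wrapper, a healthy API yields `None` and a failing one yields the exception object for inspection.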
#### `check_ip_allowlist_access` ```python theme={null} check_ip_allowlist_access(self) -> IPAllowlistMyAccessResponse ``` #### `get` ```python theme={null} get(self, route: str, **kwargs: Any) -> Any ``` #### `raw_request` ```python theme={null} raw_request(self, method: str, path: str, params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> httpx.Response ``` Make a raw HTTP request and return the Response object. Unlike request(), this does not parse JSON or raise special exceptions, returning the raw httpx.Response for direct access to headers, status, etc. **Args:** * `method`: HTTP method (GET, POST, etc.) * `path`: API path/route * `params`: Query parameters * `path_params`: Path parameters for formatting * `**kwargs`: Additional arguments passed to httpx (json, headers, etc.) **Returns:** * Raw httpx.Response object #### `read_account_ip_allowlist` ```python theme={null} read_account_ip_allowlist(self) -> IPAllowlist ``` #### `read_account_settings` ```python theme={null} read_account_settings(self) -> dict[str, Any] ``` #### `read_current_workspace` ```python theme={null} read_current_workspace(self) -> Workspace ``` #### `read_worker_metadata` ```python theme={null} read_worker_metadata(self) -> dict[str, Any] ``` #### `read_workspaces` ```python theme={null} read_workspaces(self) -> list[Workspace] ``` #### `request` ```python theme={null} request(self, method: str, route: str, **kwargs: Any) -> Any ``` #### `update_account_ip_allowlist` ```python theme={null} update_account_ip_allowlist(self, updated_allowlist: IPAllowlist) -> None ``` #### `update_account_settings` ```python theme={null} update_account_settings(self, settings: dict[str, Any]) -> None ``` #### `workspace_base_url` ```python theme={null} workspace_base_url(self) -> str ``` # collections Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-collections # `prefect.client.collections` ## Functions ### `get_collections_metadata_client` 
```python theme={null} get_collections_metadata_client(httpx_settings: Optional[Dict[str, Any]] = None) -> 'CollectionsMetadataClient' ``` Creates a client that can be used to fetch metadata for Prefect collections. Will return a `CloudClient` if the profile is set to connect to Prefect Cloud, otherwise will return an `OrchestrationClient`. ## Classes ### `CollectionsMetadataClient` **Methods:** #### `read_worker_metadata` ```python theme={null} read_worker_metadata(self) -> Dict[str, Any] ``` # constants Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-constants # `prefect.client.constants` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-orchestration-__init__ # `prefect.client.orchestration` ## Functions ### `get_client` ```python theme={null} get_client(httpx_settings: Optional[dict[str, Any]] = None, sync_client: bool = False) -> Union['SyncPrefectClient', 'PrefectClient'] ``` Retrieve an HTTP client for communicating with the Prefect REST API. The client must be context managed; for example: ```python theme={null} async with get_client() as client: await client.hello() ``` To return a synchronous client, pass sync\_client=True: ```python theme={null} with get_client(sync_client=True) as client: client.hello() ``` ## Classes ### `PrefectClient` An asynchronous client for interacting with the [Prefect REST API](https://docs.prefect.io/v3/api-ref/rest-api/). **Args:** * `api`: the REST API URL or FastAPI application to connect to * `api_key`: An optional API key for authentication. * `api_version`: The API version this client is compatible with.
* `httpx_settings`: An optional dictionary of settings to pass to the underlying `httpx.AsyncClient` Examples: Say hello to a Prefect REST API

```python theme={null}
async with get_client() as client:
    response = await client.hello()
    print(response.json())
👋
```

**Methods:** #### `api_healthcheck` ```python theme={null} api_healthcheck(self) -> Optional[Exception] ``` Attempts to connect to the API and returns the encountered exception if not successful. If successful, returns `None`. #### `api_url` ```python theme={null} api_url(self) -> httpx.URL ``` Get the base URL for the API. #### `api_version` ```python theme={null} api_version(self) -> str ``` #### `apply_slas_for_deployment` ```python theme={null} apply_slas_for_deployment(self, deployment_id: 'UUID', slas: 'list[SlaTypes]') -> 'UUID' ``` Applies service level agreements for a deployment. Performs matching by SLA name. If an SLA with the same name already exists, it will be updated. If an SLA with the same name does not exist, it will be created. Existing SLAs that are not in the list will be deleted. **Args:** * `deployment_id`: The ID of the deployment to update SLAs for * `slas`: List of SLAs to associate with the deployment **Raises:** * `httpx.RequestError`: if the SLAs were not updated for any reason **Returns:** * `SlaMergeResponse`: The response from the backend, containing the names of the created, updated, and deleted SLAs #### `client_version` ```python theme={null} client_version(self) -> str ``` #### `count_flow_runs` ```python theme={null} count_flow_runs(self) -> int ``` Returns the count of flow runs matching the provided filter criteria.
**Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues **Returns:** * count of flow runs #### `create_artifact` ```python theme={null} create_artifact(self, artifact: 'ArtifactCreate') -> 'Artifact' ``` #### `create_automation` ```python theme={null} create_automation(self, automation: 'AutomationCore') -> 'UUID' ``` Creates an automation in Prefect Cloud. #### `create_block_document` ```python theme={null} create_block_document(self, block_document: 'BlockDocument | BlockDocumentCreate', include_secrets: bool = True) -> 'BlockDocument' ``` Create a block document in the Prefect API. This data is used to configure a corresponding Block. **Args:** * `include_secrets`: whether to include secret values on the stored Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. Note: Blocks may not work as expected if this is set to `False`. #### `create_block_schema` ```python theme={null} create_block_schema(self, block_schema: 'BlockSchemaCreate') -> 'BlockSchema' ``` Create a block schema in the Prefect API. #### `create_block_type` ```python theme={null} create_block_type(self, block_type: 'BlockTypeCreate') -> 'BlockType' ``` Create a block type in the Prefect API. #### `create_concurrency_limit` ```python theme={null} create_concurrency_limit(self, tag: str, concurrency_limit: int) -> 'UUID' ``` Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks.
**Args:** * `tag`: a tag the concurrency limit is applied to * `concurrency_limit`: the maximum number of concurrent task runs for a given tag **Raises:** * `httpx.RequestError`: if the concurrency limit was not created for any reason **Returns:** * the ID of the concurrency limit in the backend #### `create_deployment` ```python theme={null} create_deployment(self, flow_id: UUID, name: str, version: str | None = None, version_info: 'VersionInfo | None' = None, schedules: list['DeploymentScheduleCreate'] | None = None, concurrency_limit: int | None = None, concurrency_options: 'ConcurrencyOptions | None' = None, parameters: dict[str, Any] | None = None, description: str | None = None, work_queue_name: str | None = None, work_pool_name: str | None = None, tags: list[str] | None = None, storage_document_id: UUID | None = None, path: str | None = None, entrypoint: str | None = None, infrastructure_document_id: UUID | None = None, parameter_openapi_schema: dict[str, Any] | None = None, paused: bool | None = None, pull_steps: list[dict[str, Any]] | None = None, enforce_parameter_schema: bool | None = None, job_variables: dict[str, Any] | None = None, branch: str | None = None, base: UUID | None = None, root: UUID | None = None) -> UUID ``` Create a deployment. **Args:** * `flow_id`: the flow ID to create a deployment for * `name`: the name of the deployment * `version`: an optional version string for the deployment * `tags`: an optional list of tags to apply to the deployment * `storage_document_id`: a reference to the storage block document used for the deployed flow * `infrastructure_document_id`: a reference to the infrastructure block document to use for this deployment * `job_variables`: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'`. This argument was previously named `infra_overrides`. Both arguments are supported for backwards compatibility.
**Raises:** * `RequestError`: if the deployment was not created for any reason **Returns:** * the ID of the deployment in the backend #### `create_deployment_branch` ```python theme={null} create_deployment_branch(self, deployment_id: UUID, branch: str, options: 'DeploymentBranchingOptions | None' = None, overrides: 'DeploymentUpdate | None' = None) -> UUID ``` #### `create_deployment_schedules` ```python theme={null} create_deployment_schedules(self, deployment_id: UUID, schedules: list[tuple['SCHEDULE_TYPES', bool]] | list['DeploymentScheduleCreate'], max_scheduled_runs: int | None = None, parameters: dict[str, Any] | None = None, slug: str | None = None) -> list['DeploymentSchedule'] ``` Create deployment schedules. **Args:** * `deployment_id`: the deployment ID * `schedules`: a list of tuples containing the schedule to create and whether or not it should be active, or a list of DeploymentScheduleCreate objects. * `max_scheduled_runs`: The maximum number of scheduled runs for the schedule. Only used when schedules is a list of tuples. * `parameters`: Parameter overrides for the schedule. Only used when schedules is a list of tuples. * `slug`: A unique identifier for the schedule. Only used when schedules is a list of tuples. **Raises:** * `RequestError`: if the schedules were not created for any reason **Returns:** * the list of schedules created in the backend #### `create_flow` ```python theme={null} create_flow(self, flow: 'FlowObject[Any, Any]') -> 'UUID' ``` Create a flow in the Prefect API. **Args:** * `flow`: a `Flow` object **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_from_name` ```python theme={null} create_flow_from_name(self, flow_name: str) -> 'UUID' ``` Create a flow in the Prefect API. 
**Args:** * `flow_name`: the name of the new flow **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_run` ```python theme={null} create_flow_run(self, flow: 'FlowObject[Any, R]', name: str | None = None, parameters: dict[str, Any] | None = None, context: dict[str, Any] | None = None, tags: 'Iterable[str] | None' = None, parent_task_run_id: 'UUID | None' = None, state: 'State[R] | None' = None, work_pool_name: str | None = None, work_queue_name: str | None = None, job_variables: dict[str, Any] | None = None) -> 'FlowRun' ``` Create a flow run for a flow. **Args:** * `flow`: The flow model to create the flow run for * `name`: An optional name for the flow run * `parameters`: Parameter overrides for this flow run. * `context`: Optional run context data * `tags`: a list of tags to apply to this flow run * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `state`: The initial state for the run. If not provided, defaults to `Pending`. * `work_pool_name`: The name of the work pool to run the flow run in. * `work_queue_name`: The name of the work queue to place the flow run in. * `job_variables`: The job variables to use when setting up flow run infrastructure. **Raises:** * `httpx.RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_from_deployment` ```python theme={null} create_flow_run_from_deployment(self, deployment_id: UUID) -> 'FlowRun' ``` Create a flow run for a deployment. **Args:** * `deployment_id`: The deployment ID to create the flow run from * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults * `context`: Optional run context data * `state`: The initial state for the run. If not provided, defaults to `Scheduled` for now. Should always be a `Scheduled` type. 
* `name`: An optional name for the flow run. If not provided, the server will generate a name. * `tags`: An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags. * `idempotency_key`: Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one. * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `work_queue_name`: An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool. * `job_variables`: Optional variables that will be supplied to the flow run job. **Raises:** * `RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_input` ```python theme={null} create_flow_run_input(self, flow_run_id: 'UUID', key: str, value: str, sender: str | None = None) -> None ``` Creates a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. * `value`: The input value. * `sender`: The sender of the input. 
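The idempotency-key semantics described for `create_flow_run_from_deployment` can be modeled with a simple keyed store (an illustrative sketch, not the server's implementation; `FakeRunStore` is a hypothetical class):

```python
import uuid
from typing import Optional


class FakeRunStore:
    """Toy model of idempotent run creation keyed by an idempotency key."""

    def __init__(self) -> None:
        self._runs_by_key: dict[str, str] = {}

    def create_run(self, idempotency_key: Optional[str] = None) -> str:
        # If the key matches an existing run, return that run instead of
        # creating a new one, mirroring the documented behavior.
        if idempotency_key is not None and idempotency_key in self._runs_by_key:
            return self._runs_by_key[idempotency_key]
        run_id = str(uuid.uuid4())
        if idempotency_key is not None:
            self._runs_by_key[idempotency_key] = run_id
        return run_id
```

Submitting the same key twice returns the same run, which makes retried submissions safe.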
#### `create_global_concurrency_limit` ```python theme={null} create_global_concurrency_limit(self, concurrency_limit: 'GlobalConcurrencyLimitCreate') -> 'UUID' ``` #### `create_logs` ```python theme={null} create_logs(self, logs: Iterable[Union['LogCreate', dict[str, Any]]]) -> None ``` Create logs for a flow or task run. **Args:** * `logs`: An iterable of `LogCreate` objects or already json-compatible dicts #### `create_task_run` ```python theme={null} create_task_run(self, task: 'TaskObject[P, R]', flow_run_id: Optional[UUID], dynamic_key: str, id: Optional[UUID] = None, name: Optional[str] = None, extra_tags: Optional[Iterable[str]] = None, state: Optional[prefect.states.State[R]] = None, task_inputs: Optional[dict[str, list[Union[TaskRunResult, FlowRunResult, Parameter, Constant]]]] = None) -> TaskRun ``` Create a task run. **Args:** * `task`: The Task to run * `flow_run_id`: The flow run id with which to associate the task run * `dynamic_key`: A key unique to this particular run of a Task within the flow * `id`: An optional ID for the task run. If not provided, one will be generated server-side. * `name`: An optional name for the task run * `extra_tags`: an optional list of extra tags to apply to the task run in addition to `task.tags` * `state`: The initial state for the run. If not provided, defaults to `Pending`. * `task_inputs`: the set of inputs passed to the task **Returns:** * The created task run. #### `create_variable` ```python theme={null} create_variable(self, variable: 'VariableCreate') -> 'Variable' ``` Creates a variable with the provided configuration. #### `create_work_pool` ```python theme={null} create_work_pool(self, work_pool: 'WorkPoolCreate', overwrite: bool = False) -> 'WorkPool' ``` Creates a work pool with the provided configuration. **Args:** * `work_pool`: Desired configuration for the new work pool. **Returns:** * Information about the newly created work pool.
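Since `create_logs` accepts already json-compatible dicts as well as `LogCreate` models, a minimal payload can be built by hand. The field names shown here (`name`, `level`, `message`, `timestamp`, `flow_run_id`) follow `LogCreate`, but treat them as assumptions and check the schema in your installed version:

```python
import logging
from datetime import datetime, timezone

# A json-compatible log record suitable for create_logs; field names are
# assumed to match LogCreate and should be verified against the schema.
log_record = {
    "name": "my_flow.logger",
    "level": logging.INFO,  # numeric logging level (INFO == 20)
    "message": "hello from a flow run",
    "timestamp": datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat(),
    "flow_run_id": None,  # fill in the real run ID when available
}
```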
#### `create_work_queue` ```python theme={null} create_work_queue(self, name: str, description: Optional[str] = None, is_paused: Optional[bool] = None, concurrency_limit: Optional[int] = None, priority: Optional[int] = None, work_pool_name: Optional[str] = None) -> WorkQueue ``` Create a work queue. **Args:** * `name`: a unique name for the work queue * `description`: An optional description for the work queue. * `is_paused`: Whether or not the work queue is paused. * `concurrency_limit`: An optional concurrency limit for the work queue. * `priority`: The queue's priority. Lower values are higher priority (1 is the highest). * `work_pool_name`: The name of the work pool to use for this queue. **Raises:** * `prefect.exceptions.ObjectAlreadyExists`: If request returns 409 * `httpx.RequestError`: If request fails **Returns:** * The created work queue #### `decrement_v1_concurrency_slots` ```python theme={null} decrement_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID', occupancy_seconds: float) -> 'Response' ``` Decrement concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names to decrement. * `task_run_id`: The task run ID that incremented the limits. * `occupancy_seconds`: The duration in seconds that the limits were held. **Returns:** * `Response`: The HTTP response from the server. #### `delete_artifact` ```python theme={null} delete_artifact(self, artifact_id: 'UUID') -> None ``` #### `delete_automation` ```python theme={null} delete_automation(self, automation_id: 'UUID') -> None ``` #### `delete_block_document` ```python theme={null} delete_block_document(self, block_document_id: 'UUID') -> None ``` Delete a block document. #### `delete_block_type` ```python theme={null} delete_block_type(self, block_type_id: 'UUID') -> None ``` Delete a block type.
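The priority ordering described for `create_work_queue` (lower values are higher priority, with 1 the highest) amounts to an ascending sort, sketched here with plain dicts standing in for `WorkQueue` objects:

```python
# Work queues as plain dicts; in the API these would be WorkQueue objects.
queues = [
    {"name": "backfill", "priority": 5},
    {"name": "critical", "priority": 1},
    {"name": "default", "priority": 3},
]

# Lower priority values are served first (1 is the highest priority).
ordered = sorted(queues, key=lambda q: q["priority"])
ordered_names = [q["name"] for q in ordered]
```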
#### `delete_concurrency_limit_by_tag` ```python theme={null} delete_concurrency_limit_by_tag(self, tag: str) -> None ``` Delete the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_deployment` ```python theme={null} delete_deployment(self, deployment_id: UUID) -> None ``` Delete deployment by id. **Args:** * `deployment_id`: The deployment id of interest. **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If the request fails #### `delete_deployment_schedule` ```python theme={null} delete_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID) -> None ``` Delete a deployment schedule. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the ID of the deployment schedule to delete. **Raises:** * `RequestError`: if the schedules were not deleted for any reason #### `delete_flow` ```python theme={null} delete_flow(self, flow_id: 'UUID') -> None ``` Delete a flow by UUID. **Args:** * `flow_id`: ID of the flow to be deleted **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If the request fails #### `delete_flow_run` ```python theme={null} delete_flow_run(self, flow_run_id: 'UUID') -> None ``` Delete a flow run by UUID. **Args:** * `flow_run_id`: The flow run UUID of interest. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If the request fails #### `delete_flow_run_input` ```python theme={null} delete_flow_run_input(self, flow_run_id: 'UUID', key: str) -> None ``` Deletes a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key.
#### `delete_global_concurrency_limit_by_name` ```python theme={null} delete_global_concurrency_limit_by_name(self, name: str) -> 'Response' ``` #### `delete_resource_owned_automations` ```python theme={null} delete_resource_owned_automations(self, resource_id: str) -> None ``` #### `delete_task_run` ```python theme={null} delete_task_run(self, task_run_id: UUID) -> None ``` Delete a task run by id. **Args:** * `task_run_id`: the task run ID of interest **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If the request fails #### `delete_variable_by_name` ```python theme={null} delete_variable_by_name(self, name: str) -> None ``` Deletes a variable by name. #### `delete_work_pool` ```python theme={null} delete_work_pool(self, work_pool_name: str) -> None ``` Deletes a work pool. **Args:** * `work_pool_name`: Name of the work pool to delete. #### `delete_work_queue_by_id` ```python theme={null} delete_work_queue_by_id(self, id: UUID) -> None ``` Delete a work queue by its ID. **Args:** * `id`: the id of the work queue to delete **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If the request fails #### `filter_flow_run_input` ```python theme={null} filter_flow_run_input(self, flow_run_id: 'UUID', key_prefix: str, limit: int, exclude_keys: 'set[str]') -> 'list[FlowRunInput]' ``` #### `find_automation` ```python theme={null} find_automation(self, id_or_name: 'str | UUID') -> 'Automation | None' ``` #### `get_most_recent_block_schema_for_block_type` ```python theme={null} get_most_recent_block_schema_for_block_type(self, block_type_id: 'UUID') -> 'BlockSchema | None' ``` Fetches the most recent block schema for a specified block type ID. **Args:** * `block_type_id`: The ID of the block type. **Raises:** * `httpx.RequestError`: If the request fails for any reason. **Returns:** * The most recent block schema or None.
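`filter_flow_run_input` combines a key-prefix match, an exclusion set, and a result limit. The selection logic can be sketched in plain Python (a simplified model over a dict of inputs, not the API implementation; `select_input_keys` is a hypothetical helper):

```python
def select_input_keys(
    inputs: dict[str, str],
    key_prefix: str,
    limit: int,
    exclude_keys: set[str],
) -> list[str]:
    # Keep keys that match the prefix and are not excluded, capped at `limit`.
    matched = sorted(
        key for key in inputs
        if key.startswith(key_prefix) and key not in exclude_keys
    )
    return matched[:limit]
```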
#### `get_runs_in_work_queue` ```python theme={null} get_runs_in_work_queue(self, id: UUID, limit: int = 10, scheduled_before: Optional[datetime.datetime] = None) -> list[FlowRun] ``` Read flow runs off a work queue. **Args:** * `id`: the id of the work queue to read from * `limit`: a limit on the number of runs to return * `scheduled_before`: a timestamp; only runs scheduled before this time will be returned. Defaults to now. **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * List\[FlowRun]: a list of FlowRun objects read from the queue #### `get_scheduled_flow_runs_for_deployments` ```python theme={null} get_scheduled_flow_runs_for_deployments(self, deployment_ids: list[UUID], scheduled_before: 'datetime.datetime | None' = None, limit: int | None = None) -> list['FlowRun'] ``` #### `get_scheduled_flow_runs_for_work_pool` ```python theme={null} get_scheduled_flow_runs_for_work_pool(self, work_pool_name: str, work_queue_names: list[str] | None = None, scheduled_before: datetime | None = None) -> list['WorkerFlowRunResponse'] ``` Retrieves scheduled flow runs for the provided set of work pool queues. **Args:** * `work_pool_name`: The name of the work pool that the work pool queues are associated with. * `work_queue_names`: The names of the work pool queues from which to get scheduled flow runs. * `scheduled_before`: Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime string will not be returned. **Returns:** * A list of worker flow run responses containing information about the retrieved flow runs. #### `hello` ```python theme={null} hello(self) -> httpx.Response ``` Send a GET request to /hello for testing purposes.
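The read methods above accept a `scheduled_before` cutoff; the filter keeps only runs scheduled earlier than that timestamp, as in this self-contained sketch with dicts standing in for run objects:

```python
from datetime import datetime, timedelta, timezone


def runs_scheduled_before(runs: list[dict], cutoff: datetime) -> list[dict]:
    # Keep only runs whose scheduled time falls strictly before the cutoff.
    return [run for run in runs if run["scheduled_time"] < cutoff]


now = datetime(2024, 1, 1, tzinfo=timezone.utc)
candidates = [
    {"name": "due", "scheduled_time": now - timedelta(hours=1)},
    {"name": "future", "scheduled_time": now + timedelta(hours=1)},
]
due_now = runs_scheduled_before(candidates, now)
```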
#### `increment_concurrency_slots` ```python theme={null} increment_concurrency_slots(self, names: list[str], slots: int, mode: Literal['concurrency', 'rate_limit']) -> 'Response' ``` Increment concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. #### `increment_concurrency_slots_with_lease` ```python theme={null} increment_concurrency_slots_with_lease(self, names: list[str], slots: int, mode: Literal['concurrency', 'rate_limit'], lease_duration: float, holder: 'ConcurrencyLeaseHolder | None' = None) -> 'Response' ``` Increment concurrency slots for the specified limits with a lease. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. * `lease_duration`: The duration of the lease in seconds. * `holder`: Optional holder information for tracking who holds the slots. #### `increment_v1_concurrency_slots` ```python theme={null} increment_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID') -> 'Response' ``` Increment concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names for which to increment limits. * `task_run_id`: The task run ID incrementing the limits. #### `loop` ```python theme={null} loop(self) -> asyncio.AbstractEventLoop | None ``` #### `match_work_queues` ```python theme={null} match_work_queues(self, prefixes: list[str], work_pool_name: Optional[str] = None) -> list[WorkQueue] ``` Query the Prefect API for work queues with names with a specific prefix. 
**Args:** * `prefixes`: a list of strings used to match work queue name prefixes * `work_pool_name`: an optional work pool name to scope the query to **Returns:** * a list of WorkQueue model representations of the work queues #### `pause_automation` ```python theme={null} pause_automation(self, automation_id: 'UUID') -> None ``` #### `pause_deployment` ```python theme={null} pause_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Pause a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `raise_for_api_version_mismatch` ```python theme={null} raise_for_api_version_mismatch(self) -> None ``` #### `raise_for_api_version_mismatch_once` ```python theme={null} raise_for_api_version_mismatch_once(self) -> None ``` Run API version compatibility check once per process/API/client version. #### `read_artifacts` ```python theme={null} read_artifacts(self, **kwargs: Unpack['ArtifactReadParams']) -> list['Artifact'] ``` #### `read_automation` ```python theme={null} read_automation(self, automation_id: 'UUID | str') -> 'Automation | None' ``` #### `read_automations` ```python theme={null} read_automations(self) -> list['Automation'] ``` #### `read_automations_by_name` ```python theme={null} read_automations_by_name(self, name: str) -> list['Automation'] ``` Query the Prefect API for an automation by name. Only automations matching the provided name will be returned. **Args:** * `name`: the name of the automation to query **Returns:** * a list of Automation model representations of the automations #### `read_block_document` ```python theme={null} read_block_document(self, block_document_id: 'UUID', include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified ID. 
**Args:** * `block_document_id`: the block document id * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. #### `read_block_document_by_name` ```python theme={null} read_block_document_by_name(self, name: str, block_type_slug: str, include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified name that corresponds to a specific block type name. **Args:** * `name`: The block document name. * `block_type_slug`: The block type slug. * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. #### `read_block_documents` ```python theme={null} read_block_documents(self, block_schema_type: str | None = None, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Read block documents **Args:** * `block_schema_type`: an optional block schema type * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. 
Note that any business logic on the Block may not work if this is `False`. **Returns:** * A list of block documents #### `read_block_documents_by_type` ```python theme={null} read_block_documents_by_type(self, block_type_slug: str, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Retrieve block documents by block type slug. **Args:** * `block_type_slug`: The block type slug. * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values **Returns:** * A list of block documents #### `read_block_schema_by_checksum` ```python theme={null} read_block_schema_by_checksum(self, checksum: str, version: str | None = None) -> 'BlockSchema' ``` Look up a block schema by checksum. #### `read_block_schemas` ```python theme={null} read_block_schemas(self) -> 'list[BlockSchema]' ``` Read all block schemas. **Raises:** * `httpx.RequestError`: if a valid block schema was not found **Returns:** * A list of BlockSchemas. #### `read_block_type_by_slug` ```python theme={null} read_block_type_by_slug(self, slug: str) -> 'BlockType' ``` Read a block type by its slug. #### `read_block_types` ```python theme={null} read_block_types(self) -> 'list[BlockType]' ``` Read all block types. **Raises:** * `httpx.RequestError`: if the block types were not found **Returns:** * List of BlockTypes. #### `read_concurrency_limit_by_tag` ```python theme={null} read_concurrency_limit_by_tag(self, tag: str) -> 'ConcurrencyLimit' ``` Read the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: if the concurrency limit was not read for any reason **Returns:** * the concurrency limit set on a specific tag #### `read_concurrency_limits` ```python theme={null} read_concurrency_limits(self, limit: int, offset: int) -> list['ConcurrencyLimit'] ``` Lists concurrency limits set on task run tags.
**Args:** * `limit`: the maximum number of concurrency limits returned * `offset`: the concurrency limit query offset **Returns:** * a list of concurrency limits #### `read_deployment` ```python theme={null} read_deployment(self, deployment_id: Union[UUID, str]) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by id. **Args:** * `deployment_id`: the deployment ID of interest **Returns:** * a Deployment model representation of the deployment #### `read_deployment_by_name` ```python theme={null} read_deployment_by_name(self, name: str) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by name. **Args:** * `name`: A deployed flow's name, in `<FLOW_NAME>/<DEPLOYMENT_NAME>` format **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails **Returns:** * a Deployment model representation of the deployment #### `read_deployment_schedules` ```python theme={null} read_deployment_schedules(self, deployment_id: UUID) -> list['DeploymentSchedule'] ``` Query the Prefect API for a deployment's schedules. **Args:** * `deployment_id`: the deployment ID **Returns:** * a list of DeploymentSchedule model representations of the deployment schedules #### `read_deployments` ```python theme={null} read_deployments(self) -> list['DeploymentResponse'] ``` Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `limit`: maximum number of deployments to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the deployment query.
**Returns:** * a list of Deployment model representations of the deployments #### `read_events` ```python theme={null} read_events(self, filter: 'EventFilter | None' = None, limit: int = 100) -> EventPage ``` Query historical events from the API. **Args:** * `filter`: optional filter criteria to narrow down events * `limit`: maximum number of events to return per page (default 100) **Returns:** * EventPage containing events, total count, and next page link #### `read_events_page` ```python theme={null} read_events_page(self, next_page_url: str) -> EventPage ``` Retrieve the next page of events using a next\_page URL. **Args:** * `next_page_url`: the next\_page URL from a previous EventPage response **Returns:** * EventPage containing the next page of events #### `read_flow` ```python theme={null} read_flow(self, flow_id: 'UUID') -> 'Flow' ``` Query the Prefect API for a flow by id. **Args:** * `flow_id`: the flow ID of interest **Returns:** * a Flow model representation of the flow #### `read_flow_by_name` ```python theme={null} read_flow_by_name(self, flow_name: str) -> 'Flow' ``` Query the Prefect API for a flow by name. **Args:** * `flow_name`: the name of a flow **Returns:** * a fully hydrated Flow model #### `read_flow_run` ```python theme={null} read_flow_run(self, flow_run_id: 'UUID') -> 'FlowRun' ``` Query the Prefect API for a flow run by id. **Args:** * `flow_run_id`: the flow run ID of interest **Returns:** * a Flow Run model representation of the flow run #### `read_flow_run_input` ```python theme={null} read_flow_run_input(self, flow_run_id: 'UUID', key: str) -> str ``` Reads a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key.
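The read methods compose naturally. A sketch that resolves a flow by name and summarizes one of its runs (the helper name and output format are illustrative, not part of the client):

```python
from uuid import UUID

async def describe_flow_run(flow_name: str, flow_run_id: UUID) -> str:
    """Summarize a flow run as '<flow name> / <run name>'."""
    from prefect import get_client  # deferred: only needed at call time

    async with get_client() as client:
        flow = await client.read_flow_by_name(flow_name)
        run = await client.read_flow_run(flow_run_id)
        return f"{flow.name} / {run.name}"
```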
#### `read_flow_run_states` ```python theme={null} read_flow_run_states(self, flow_run_id: 'UUID') -> 'list[State]' ``` Query for the states of a flow run **Args:** * `flow_run_id`: the id of the flow run **Returns:** * a list of State model representations of the flow run states #### `read_flow_runs` ```python theme={null} read_flow_runs(self) -> 'list[FlowRun]' ``` Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flow runs * `limit`: maximum number of flow runs to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the flow run query. **Returns:** * a list of Flow Run model representations of the flow runs #### `read_flows` ```python theme={null} read_flows(self) -> list['Flow'] ``` Query the Prefect API for flows. Only flows matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flows * `limit`: maximum number of flows to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the flow query. 
**Returns:** * a list of Flow model representations of the flows #### `read_global_concurrency_limit_by_name` ```python theme={null} read_global_concurrency_limit_by_name(self, name: str) -> 'GlobalConcurrencyLimitResponse' ``` #### `read_global_concurrency_limits` ```python theme={null} read_global_concurrency_limits(self, limit: int = 10, offset: int = 0) -> list['GlobalConcurrencyLimitResponse'] ``` #### `read_latest_artifacts` ```python theme={null} read_latest_artifacts(self, **kwargs: Unpack['ArtifactCollectionReadParams']) -> list['ArtifactCollection'] ``` #### `read_logs` ```python theme={null} read_logs(self, log_filter: 'LogFilter | None' = None, limit: int | None = None, offset: int | None = None, sort: 'LogSort | None' = None) -> list[Log] ``` Read flow and task run logs. #### `read_resource_related_automations` ```python theme={null} read_resource_related_automations(self, resource_id: str) -> list['Automation'] ``` #### `read_task_run` ```python theme={null} read_task_run(self, task_run_id: UUID) -> TaskRun ``` Query the Prefect API for a task run by id. **Args:** * `task_run_id`: the task run ID of interest **Returns:** * a Task Run model representation of the task run #### `read_task_run_states` ```python theme={null} read_task_run_states(self, task_run_id: UUID) -> list[prefect.states.State] ``` Query for the states of a task run **Args:** * `task_run_id`: the id of the task run **Returns:** * a list of State model representations of the task run states #### `read_task_runs` ```python theme={null} read_task_runs(self) -> list[TaskRun] ``` Query the Prefect API for task runs. Only task runs matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `sort`: sort criteria for the task runs * `limit`: maximum number of task runs to return. 
When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the task run query. **Returns:** * a list of Task Run model representations of the task runs #### `read_variable_by_name` ```python theme={null} read_variable_by_name(self, name: str) -> 'Variable | None' ``` Reads a variable by name. Returns None if no variable is found. #### `read_variables` ```python theme={null} read_variables(self, limit: int | None = None) -> list['Variable'] ``` Reads all variables. #### `read_work_pool` ```python theme={null} read_work_pool(self, work_pool_name: str) -> 'WorkPool' ``` Reads information for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get information. **Returns:** * Information about the requested work pool. #### `read_work_pool_concurrency_status` ```python theme={null} read_work_pool_concurrency_status(self, work_pool_name: str, page: int = 1, limit: int | None = None, flow_run_limit: int = 10) -> 'WorkPoolConcurrencyStatus' ``` Reads concurrency status for a work pool. **Args:** * `work_pool_name`: The name of the work pool. * `page`: Page number (1-indexed). * `limit`: Max queues per page (server default if None). * `flow_run_limit`: Max flow runs per queue (0-200). **Returns:** * Paginated concurrency status with per-queue breakdown. #### `read_work_pools` ```python theme={null} read_work_pools(self, limit: int | None = None, offset: int = 0, work_pool_filter: 'WorkPoolFilter | None' = None) -> list['WorkPool'] ``` Reads work pools. **Args:** * `limit`: Limit for the work pool query. * `offset`: Offset for the work pool query. * `work_pool_filter`: Criteria by which to filter work pools. **Returns:** * A list of work pools. #### `read_work_queue` ```python theme={null} read_work_queue(self, id: UUID) -> WorkQueue ``` Read a work queue.
**Args:** * `id`: the id of the work queue to load **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * an instantiated WorkQueue object #### `read_work_queue_by_name` ```python theme={null} read_work_queue_by_name(self, name: str, work_pool_name: Optional[str] = None) -> WorkQueue ``` Read a work queue by name. **Args:** * `name`: a unique name for the work queue * `work_pool_name`: the name of the work pool the queue belongs to. **Raises:** * `prefect.exceptions.ObjectNotFound`: if no work queue is found * `httpx.HTTPStatusError`: other status errors **Returns:** * a work queue API object #### `read_work_queue_concurrency_status` ```python theme={null} read_work_queue_concurrency_status(self, id: UUID, page: int = 1, limit: Optional[int] = None) -> 'WorkQueueConcurrencyStatus' ``` Read concurrency status for a work queue. **Args:** * `id`: the id of the work queue * `page`: Page number (1-indexed). * `limit`: Max flow runs per page (server default if None). **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * Paginated WorkQueueConcurrencyStatus with flow run summaries #### `read_work_queue_status` ```python theme={null} read_work_queue_status(self, id: UUID) -> WorkQueueStatusDetail ``` Read a work queue status. **Args:** * `id`: the id of the work queue to load **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * an instantiated WorkQueueStatus object #### `read_work_queues` ```python theme={null} read_work_queues(self, work_pool_name: Optional[str] = None, work_queue_filter: Optional[WorkQueueFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> list[WorkQueue] ``` Retrieves queues for a work pool. **Args:** * `work_pool_name`: Name of the work pool for which to get queues. 
* `work_queue_filter`: Criteria by which to filter queues. * `limit`: maximum number of work queues to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the work queue query. **Returns:** * List of queues for the specified work pool. #### `read_worker_metadata` ```python theme={null} read_worker_metadata(self) -> dict[str, Any] ``` Reads worker metadata stored in the Prefect collection registry. #### `read_workers_for_work_pool` ```python theme={null} read_workers_for_work_pool(self, work_pool_name: str, worker_filter: 'WorkerFilter | None' = None, offset: int | None = None, limit: int | None = None) -> list['Worker'] ``` Reads workers for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get member workers. * `worker_filter`: Criteria by which to filter workers. * `limit`: Limit for the worker query. * `offset`: Offset for the worker query. #### `release_concurrency_slots` ```python theme={null} release_concurrency_slots(self, names: list[str], slots: int, occupancy_seconds: float) -> 'Response' ``` Release concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to release slots. * `slots`: The number of concurrency slots to release. * `occupancy_seconds`: The duration in seconds that the slots were occupied. **Returns:** * `Response`: The HTTP response from the server. #### `release_concurrency_slots_with_lease` ```python theme={null} release_concurrency_slots_with_lease(self, lease_id: 'UUID') -> 'Response' ``` Release concurrency slots for the specified lease. **Args:** * `lease_id`: The ID of the lease corresponding to the concurrency limits to release. #### `renew_concurrency_lease` ```python theme={null} renew_concurrency_lease(self, lease_id: 'UUID', lease_duration: float) -> 'Response' ``` Renew a concurrency lease. **Args:** * `lease_id`: The ID of the lease to renew. * `lease_duration`: The new lease duration in seconds.
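Slots acquired with a lease must be renewed before the lease expires. A possible renewal loop (the helper name and the half-duration renewal interval are our choices, not client behavior):

```python
import asyncio
from uuid import UUID

async def keep_lease_alive(lease_id: UUID, lease_duration: float = 60.0) -> None:
    """Renew a concurrency lease until the surrounding task is cancelled."""
    from prefect import get_client  # deferred: only needed at call time

    async with get_client() as client:
        while True:
            await client.renew_concurrency_lease(lease_id, lease_duration)
            # Renew at half the lease duration to leave headroom for latency.
            await asyncio.sleep(lease_duration / 2)
```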
#### `reset_concurrency_limit_by_tag` ```python theme={null} reset_concurrency_limit_by_tag(self, tag: str, slot_override: list['UUID | str'] | None = None) -> None ``` Resets the concurrency limit slots set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to * `slot_override`: a list of task run IDs that are currently using a concurrency slot. Ensure that any task run IDs included in `slot_override` are currently running; otherwise those concurrency slots will never be released. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `resume_automation` ```python theme={null} resume_automation(self, automation_id: 'UUID') -> None ``` #### `resume_deployment` ```python theme={null} resume_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Resume (unpause) a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `resume_flow_run` ```python theme={null} resume_flow_run(self, flow_run_id: 'UUID', run_input: dict[str, Any] | None = None) -> 'OrchestrationResult[Any]' ``` Resumes a paused flow run. **Args:** * `flow_run_id`: the flow run ID of interest * `run_input`: the input to resume the flow run with **Returns:** * an OrchestrationResult model representation of state orchestration output #### `send_worker_heartbeat` ```python theme={null} send_worker_heartbeat(self, work_pool_name: str, worker_name: str, heartbeat_interval_seconds: float | None = None, get_worker_id: bool = False, worker_metadata: 'WorkerMetadata | None' = None) -> 'UUID | None' ``` Sends a worker heartbeat for a given work pool. **Args:** * `work_pool_name`: The name of the work pool to heartbeat against. * `worker_name`: The name of the worker sending the heartbeat. * `get_worker_id`: Whether to return the worker ID.
Note: will return `None` if the connected server does not support returning worker IDs, even if `get_worker_id` is `True`. * `worker_metadata`: Metadata about the worker to send to the server. #### `set_deployment_paused_state` ```python theme={null} set_deployment_paused_state(self, deployment_id: UUID, paused: bool) -> None ``` DEPRECATED: Use pause\_deployment or resume\_deployment instead. Set the paused state of a deployment. **Args:** * `deployment_id`: the deployment ID to update * `paused`: whether the deployment should be paused #### `set_flow_run_name` ```python theme={null} set_flow_run_name(self, flow_run_id: 'UUID', name: str) -> httpx.Response ``` #### `set_flow_run_state` ```python theme={null} set_flow_run_state(self, flow_run_id: 'UUID | str', state: 'State[T]', force: bool = False) -> 'OrchestrationResult[T]' ``` Set the state of a flow run. **Args:** * `flow_run_id`: the id of the flow run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `set_task_run_name` ```python theme={null} set_task_run_name(self, task_run_id: UUID, name: str) -> httpx.Response ``` #### `set_task_run_state` ```python theme={null} set_task_run_state(self, task_run_id: UUID, state: prefect.states.State[T], force: bool = False) -> OrchestrationResult[T] ``` Set the state of a task run.
**Args:** * `task_run_id`: the id of the task run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `update_artifact` ```python theme={null} update_artifact(self, artifact_id: 'UUID', artifact: 'ArtifactUpdate') -> None ``` #### `update_automation` ```python theme={null} update_automation(self, automation_id: 'UUID', automation: 'AutomationCore') -> None ``` Updates an automation in Prefect Cloud. #### `update_block_document` ```python theme={null} update_block_document(self, block_document_id: 'UUID', block_document: 'BlockDocumentUpdate') -> None ``` Update a block document in the Prefect API. #### `update_block_type` ```python theme={null} update_block_type(self, block_type_id: 'UUID', block_type: 'BlockTypeUpdate') -> None ``` Update a block type in the Prefect API. #### `update_deployment` ```python theme={null} update_deployment(self, deployment_id: UUID, deployment: 'DeploymentUpdate') -> None ``` #### `update_deployment_schedule` ```python theme={null} update_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID, active: bool | None = None, schedule: 'SCHEDULE_TYPES | None' = None, max_scheduled_runs: int | None = None, parameters: dict[str, Any] | None = None, slug: str | None = None) -> None ``` Update a deployment schedule by ID. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the deployment schedule ID of interest * `active`: whether or not the schedule should be active * `schedule`: the cron, rrule, or interval schedule this deployment schedule should use * `max_scheduled_runs`: The maximum number of scheduled runs for the schedule. * `parameters`: Parameter overrides for the schedule. * `slug`: A unique identifier for the schedule.
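Combining `read_deployment_schedules` with `update_deployment_schedule`, one could deactivate every schedule on a deployment. A sketch (the helper is ours, not part of the client):

```python
from uuid import UUID

async def deactivate_all_schedules(deployment_id: UUID) -> int:
    """Set active=False on each schedule; returns the number updated."""
    from prefect import get_client  # deferred: only needed at call time

    async with get_client() as client:
        schedules = await client.read_deployment_schedules(deployment_id)
        for schedule in schedules:
            await client.update_deployment_schedule(
                deployment_id, schedule.id, active=False
            )
        return len(schedules)
```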
#### `update_flow_run` ```python theme={null} update_flow_run(self, flow_run_id: 'UUID', flow_version: str | None = None, parameters: dict[str, Any] | None = None, name: str | None = None, tags: 'Iterable[str] | None' = None, empirical_policy: 'FlowRunPolicy | None' = None, infrastructure_pid: str | None = None, job_variables: dict[str, Any] | None = None) -> httpx.Response ``` Update a flow run's details. **Args:** * `flow_run_id`: The identifier for the flow run to update. * `flow_version`: A new version string for the flow run. * `parameters`: A dictionary of parameter values for the flow run. This will not be merged with any existing parameters. * `name`: A new name for the flow run. * `empirical_policy`: A new flow run orchestration policy. This will not be merged with any existing policy. * `tags`: An iterable of new tags for the flow run. These will not be merged with any existing tags. * `infrastructure_pid`: The id of the flow run as returned by an infrastructure block. **Returns:** * an `httpx.Response` object from the PATCH request #### `update_flow_run_labels` ```python theme={null} update_flow_run_labels(self, flow_run_id: 'UUID', labels: 'KeyValueLabelsField') -> None ``` Updates the labels of a flow run. #### `update_global_concurrency_limit` ```python theme={null} update_global_concurrency_limit(self, name: str, concurrency_limit: 'GlobalConcurrencyLimitUpdate') -> 'Response' ``` #### `update_variable` ```python theme={null} update_variable(self, variable: 'VariableUpdate') -> None ``` Updates a variable with the provided configuration. **Args:** * `variable`: Desired configuration for the updated variable. **Returns:** * Information about the updated variable. #### `update_work_pool` ```python theme={null} update_work_pool(self, work_pool_name: str, work_pool: 'WorkPoolUpdate') -> None ``` Updates a work pool. **Args:** * `work_pool_name`: Name of the work pool to update. * `work_pool`: Fields to update in the work pool.
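Because `update_flow_run` replaces `tags` rather than merging them, adding a tag requires a read-modify-write. A sketch (the `merge_tags` helper and function names are ours):

```python
from uuid import UUID

def merge_tags(existing: list[str], new: set[str]) -> list[str]:
    """Union of existing and new tags, sorted for stable output."""
    return sorted(set(existing) | new)

async def add_flow_run_tags(flow_run_id: UUID, new_tags: set[str]) -> list[str]:
    """Add tags to a flow run without dropping its existing tags."""
    from prefect import get_client  # deferred: only needed at call time

    async with get_client() as client:
        run = await client.read_flow_run(flow_run_id)
        merged = merge_tags(list(run.tags), new_tags)
        await client.update_flow_run(flow_run_id, tags=merged)
        return merged
```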
#### `update_work_queue` ```python theme={null} update_work_queue(self, id: UUID, **kwargs: Any) -> None ``` Update properties of a work queue. **Args:** * `id`: the ID of the work queue to update * `**kwargs`: the fields to update **Raises:** * `ValueError`: if no kwargs are provided * `prefect.exceptions.ObjectNotFound`: if request returns 404 * `httpx.RequestError`: if the request fails #### `upsert_global_concurrency_limit_by_name` ```python theme={null} upsert_global_concurrency_limit_by_name(self, name: str, limit: int, slot_decay_per_second: float | None = None) -> None ``` Creates a global concurrency limit with the given name and limit if one does not already exist. If one already exists with that name, its limit and/or slot\_decay\_per\_second are updated if they differ. Note: This is not done atomically. ### `SyncPrefectClient` A synchronous client for interacting with the [Prefect REST API](https://docs.prefect.io/v3/api-ref/rest-api/). **Args:** * `api`: the REST API URL or FastAPI application to connect to * `api_key`: An optional API key for authentication. * `api_version`: The API version this client is compatible with. * `httpx_settings`: An optional dictionary of settings to pass to the underlying `httpx.Client` Examples: Say hello to a Prefect REST API ```python theme={null} with get_client(sync_client=True) as client: response = client.hello() print(response.json()) 👋 ``` **Methods:** #### `api_healthcheck` ```python theme={null} api_healthcheck(self) -> Optional[Exception] ``` Attempts to connect to the API and returns the encountered exception if not successful. If successful, returns `None`. #### `api_url` ```python theme={null} api_url(self) -> httpx.URL ``` Get the base URL for the API.
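Since `api_healthcheck` returns the encountered exception rather than raising it, a startup check might look like this (a sketch; the wrapper name is ours):

```python
def require_healthy_api() -> None:
    """Raise at startup if the Prefect API cannot be reached."""
    from prefect import get_client  # deferred: only needed at call time

    with get_client(sync_client=True) as client:
        error = client.api_healthcheck()  # None on success
        if error is not None:
            raise RuntimeError(f"Prefect API unreachable: {error!r}") from error
```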
#### `api_version` ```python theme={null} api_version(self) -> str ``` #### `apply_slas_for_deployment` ```python theme={null} apply_slas_for_deployment(self, deployment_id: 'UUID', slas: 'list[SlaTypes]') -> 'SlaMergeResponse' ``` Applies service level agreements for a deployment. Performs matching by SLA name. If an SLA with the same name already exists, it will be updated. If an SLA with the same name does not exist, it will be created. Existing SLAs that are not in the list will be deleted. **Args:** * `deployment_id`: The ID of the deployment to update SLAs for * `slas`: List of SLAs to associate with the deployment **Raises:** * `httpx.RequestError`: if the SLAs were not updated for any reason **Returns:** * `SlaMergeResponse`: The response from the backend, containing the names of the created, updated, and deleted SLAs #### `client_version` ```python theme={null} client_version(self) -> str ``` #### `count_flow_runs` ```python theme={null} count_flow_runs(self) -> int ``` Returns the count of flow runs matching all criteria for flow runs. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues **Returns:** * count of flow runs #### `create_artifact` ```python theme={null} create_artifact(self, artifact: 'ArtifactCreate') -> 'Artifact' ``` #### `create_automation` ```python theme={null} create_automation(self, automation: 'AutomationCore') -> 'UUID' ``` Creates an automation in Prefect Cloud. #### `create_block_document` ```python theme={null} create_block_document(self, block_document: 'BlockDocument | BlockDocumentCreate', include_secrets: bool = True) -> 'BlockDocument' ``` Create a block document in the Prefect API. This data is used to configure a corresponding Block.
**Args:** * `include_secrets`: whether to include secret values on the stored Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. Note Blocks may not work as expected if this is set to `False`. #### `create_block_schema` ```python theme={null} create_block_schema(self, block_schema: 'BlockSchemaCreate') -> 'BlockSchema' ``` Create a block schema in the Prefect API. #### `create_block_type` ```python theme={null} create_block_type(self, block_type: 'BlockTypeCreate') -> 'BlockType' ``` Create a block type in the Prefect API. #### `create_concurrency_limit` ```python theme={null} create_concurrency_limit(self, tag: str, concurrency_limit: int) -> 'UUID' ``` Create a tag concurrency limit in the Prefect API. These limits govern concurrently running tasks. **Args:** * `tag`: a tag the concurrency limit is applied to * `concurrency_limit`: the maximum number of concurrent task runs for a given tag **Raises:** * `httpx.RequestError`: if the concurrency limit was not created for any reason **Returns:** * the ID of the concurrency limit in the backend #### `create_deployment` ```python theme={null} create_deployment(self, flow_id: UUID, name: str, version: str | None = None, version_info: 'VersionInfo | None' = None, schedules: list['DeploymentScheduleCreate'] | None = None, concurrency_limit: int | None = None, concurrency_options: 'ConcurrencyOptions | None' = None, parameters: dict[str, Any] | None = None, description: str | None = None, work_queue_name: str | None = None, work_pool_name: str | None = None, tags: list[str] | None = None, storage_document_id: UUID | None = None, path: str | None = None, entrypoint: str | None = None, infrastructure_document_id: UUID | None = None, parameter_openapi_schema: dict[str, Any] | None = None, paused: bool | None = None, pull_steps: list[dict[str, Any]] | None = None, enforce_parameter_schema: bool | None = None, job_variables: dict[str, Any] | None = None, branch: str | None = None, base: UUID | None = 
None, root: UUID | None = None) -> UUID ``` Create a deployment. **Args:** * `flow_id`: the flow ID to create a deployment for * `name`: the name of the deployment * `version`: an optional version string for the deployment * `tags`: an optional list of tags to apply to the deployment * `storage_document_id`: a reference to the storage block document used for the deployed flow * `infrastructure_document_id`: a reference to the infrastructure block document to use for this deployment * `job_variables`: A dictionary of dot-delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'`. This argument was previously named `infra_overrides`. Both arguments are supported for backwards compatibility. **Raises:** * `RequestError`: if the deployment was not created for any reason **Returns:** * the ID of the deployment in the backend #### `create_deployment_branch` ```python theme={null} create_deployment_branch(self, deployment_id: UUID, branch: str, options: 'DeploymentBranchingOptions | None' = None, overrides: 'DeploymentUpdate | None' = None) -> UUID ``` #### `create_deployment_schedules` ```python theme={null} create_deployment_schedules(self, deployment_id: UUID, schedules: list[tuple['SCHEDULE_TYPES', bool]] | list['DeploymentScheduleCreate'], max_scheduled_runs: int | None = None, parameters: dict[str, Any] | None = None, slug: str | None = None) -> list['DeploymentSchedule'] ``` Create deployment schedules. **Args:** * `deployment_id`: the deployment ID * `schedules`: a list of tuples containing the schedule to create and whether or not it should be active, or a list of DeploymentScheduleCreate objects. * `max_scheduled_runs`: The maximum number of scheduled runs for the schedule. Only used when schedules is a list of tuples. * `parameters`: Parameter overrides for the schedule. Only used when schedules is a list of tuples. * `slug`: A unique identifier for the schedule.
Only used when schedules is a list of tuples. **Raises:** * `RequestError`: if the schedules were not created for any reason **Returns:** * the list of schedules created in the backend #### `create_flow` ```python theme={null} create_flow(self, flow: 'FlowObject[Any, Any]') -> 'UUID' ``` Create a flow in the Prefect API. **Args:** * `flow`: a `Flow` object **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_from_name` ```python theme={null} create_flow_from_name(self, flow_name: str) -> 'UUID' ``` Create a flow in the Prefect API. **Args:** * `flow_name`: the name of the new flow **Raises:** * `httpx.RequestError`: if a flow was not created for any reason **Returns:** * the ID of the flow in the backend #### `create_flow_run` ```python theme={null} create_flow_run(self, flow: 'FlowObject[Any, R]', name: str | None = None, parameters: dict[str, Any] | None = None, context: dict[str, Any] | None = None, tags: 'Iterable[str] | None' = None, parent_task_run_id: 'UUID | None' = None, state: 'State[R] | None' = None, work_pool_name: str | None = None, work_queue_name: str | None = None, job_variables: dict[str, Any] | None = None) -> 'FlowRun' ``` Create a flow run for a flow. **Args:** * `flow`: The flow model to create the flow run for * `name`: An optional name for the flow run * `parameters`: Parameter overrides for this flow run. * `context`: Optional run context data * `tags`: a list of tags to apply to this flow run * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `state`: The initial state for the run. If not provided, defaults to `Pending`. * `work_pool_name`: The name of the work pool to run the flow run in. * `work_queue_name`: The name of the work queue to place the flow run in. * `job_variables`: The job variables to use when setting up flow run infrastructure. 
**Raises:** * `httpx.RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_from_deployment` ```python theme={null} create_flow_run_from_deployment(self, deployment_id: UUID) -> 'FlowRun' ``` Create a flow run for a deployment. **Args:** * `deployment_id`: The deployment ID to create the flow run from * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults * `context`: Optional run context data * `state`: The initial state for the run. If not provided, defaults to `Scheduled` for now. Should always be a `Scheduled` type. * `name`: An optional name for the flow run. If not provided, the server will generate a name. * `tags`: An optional iterable of tags to apply to the flow run; these tags are merged with the deployment's tags. * `idempotency_key`: Optional idempotency key for creation of the flow run. If the key matches the key of an existing flow run, the existing run will be returned instead of creating a new one. * `parent_task_run_id`: if a subflow run is being created, the placeholder task run identifier in the parent flow * `work_queue_name`: An optional work queue name to add this run to. If not provided, will default to the deployment's set work queue. If one is provided that does not exist, a new work queue will be created within the deployment's work pool. * `job_variables`: Optional variables that will be supplied to the flow run job. **Raises:** * `RequestError`: if the Prefect API does not successfully create a run for any reason **Returns:** * The flow run model #### `create_flow_run_input` ```python theme={null} create_flow_run_input(self, flow_run_id: 'UUID', key: str, value: str, sender: str | None = None) -> None ``` Creates a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. * `value`: The input value. * `sender`: The sender of the input. 
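As a rough sketch of how `create_flow_run_from_deployment` might be called with the arguments documented above: the helper name, deployment ID, and idempotency key below are hypothetical placeholders, not values from this reference.

```python
# Sketch: trigger a deployment run at most once per idempotency key.
from uuid import UUID


async def trigger_once(deployment_id: UUID, key: str):
    # Imported lazily so this sketch only needs Prefect installed at call time.
    from prefect.client.orchestration import get_client

    async with get_client() as client:
        # If a flow run with this idempotency key already exists, the
        # existing run is returned instead of a new one being created.
        return await client.create_flow_run_from_deployment(
            deployment_id,
            idempotency_key=key,
        )

# Usage (requires a reachable Prefect API; the UUID is a placeholder):
# import asyncio
# run = asyncio.run(trigger_once(UUID("00000000-0000-0000-0000-000000000000"), "nightly"))
```

Retrying `trigger_once` with the same key is then safe: the server deduplicates on the idempotency key rather than creating duplicate runs.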
#### `create_global_concurrency_limit` ```python theme={null} create_global_concurrency_limit(self, concurrency_limit: 'GlobalConcurrencyLimitCreate') -> 'UUID' ``` #### `create_logs` ```python theme={null} create_logs(self, logs: Iterable[Union['LogCreate', dict[str, Any]]]) -> None ``` Create logs for a flow or task run. #### `create_task_run` ```python theme={null} create_task_run(self, task: 'TaskObject[P, R]', flow_run_id: Optional[UUID], dynamic_key: str, id: Optional[UUID] = None, name: Optional[str] = None, extra_tags: Optional[Iterable[str]] = None, state: Optional[prefect.states.State[R]] = None, task_inputs: Optional[dict[str, list[Union[TaskRunResult, FlowRunResult, Parameter, Constant]]]] = None) -> TaskRun ``` Create a task run. **Args:** * `task`: The Task to run * `flow_run_id`: The flow run id with which to associate the task run * `dynamic_key`: A key unique to this particular run of a Task within the flow * `id`: An optional ID for the task run. If not provided, one will be generated server-side. * `name`: An optional name for the task run * `extra_tags`: an optional list of extra tags to apply to the task run in addition to `task.tags` * `state`: The initial state for the run. If not provided, defaults to `Pending` for now. Should always be a `Scheduled` type. * `task_inputs`: the set of inputs passed to the task **Returns:** * The created task run. #### `create_variable` ```python theme={null} create_variable(self, variable: 'VariableCreate') -> 'Variable' ``` Creates a variable with the provided configuration. **Args:** * `variable`: Desired configuration for the new variable. **Returns:** * Information about the newly created variable. #### `create_work_pool` ```python theme={null} create_work_pool(self, work_pool: 'WorkPoolCreate', overwrite: bool = False) -> 'WorkPool' ``` Creates a work pool with the provided configuration. **Args:** * `work_pool`: Desired configuration for the new work pool.
**Returns:** * Information about the newly created work pool. #### `create_work_queue` ```python theme={null} create_work_queue(self, name: str, description: Optional[str] = None, is_paused: Optional[bool] = None, concurrency_limit: Optional[int] = None, priority: Optional[int] = None, work_pool_name: Optional[str] = None) -> WorkQueue ``` Create a work queue. **Args:** * `name`: a unique name for the work queue * `description`: An optional description for the work queue. * `is_paused`: Whether or not the work queue is paused. * `concurrency_limit`: An optional concurrency limit for the work queue. * `priority`: The queue's priority. Lower values are higher priority (1 is the highest). * `work_pool_name`: The name of the work pool to use for this queue. **Raises:** * `prefect.exceptions.ObjectAlreadyExists`: If request returns 409 * `httpx.RequestError`: If request fails **Returns:** * The created work queue #### `decrement_v1_concurrency_slots` ```python theme={null} decrement_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID', occupancy_seconds: float) -> 'Response' ``` Decrement concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names to decrement. * `task_run_id`: The task run ID that incremented the limits. * `occupancy_seconds`: The duration in seconds that the limits were held. **Returns:** * "Response": The HTTP response from the server. #### `delete_artifact` ```python theme={null} delete_artifact(self, artifact_id: 'UUID') -> None ``` #### `delete_automation` ```python theme={null} delete_automation(self, automation_id: 'UUID') -> None ``` #### `delete_block_document` ```python theme={null} delete_block_document(self, block_document_id: 'UUID') -> None ``` Delete a block document. #### `delete_block_type` ```python theme={null} delete_block_type(self, block_type_id: 'UUID') -> None ``` Delete a block type. 
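The documented 409 behavior of `create_work_queue` suggests a get-or-create pattern. A minimal sketch, assuming Prefect is installed; the helper name `ensure_queue` and its arguments are illustrative, while `prefect.exceptions.ObjectAlreadyExists` is the exception named in the docs above:

```python
# Sketch: create a work queue, or read it back if it already exists.
async def ensure_queue(name: str, pool: str, limit: int):
    # Imported lazily so this sketch only needs Prefect installed at call time.
    from prefect.client.orchestration import get_client
    from prefect.exceptions import ObjectAlreadyExists

    async with get_client() as client:
        try:
            return await client.create_work_queue(
                name=name,
                work_pool_name=pool,
                concurrency_limit=limit,
                priority=1,  # lower values are higher priority
            )
        except ObjectAlreadyExists:
            # The server returned 409: the queue already exists, so read it.
            return await client.read_work_queue_by_name(name, work_pool_name=pool)
```

This keeps the helper safe to call repeatedly, e.g. from deployment setup scripts.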
#### `delete_concurrency_limit_by_tag` ```python theme={null} delete_concurrency_limit_by_tag(self, tag: str) -> None ``` Delete the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_deployment` ```python theme={null} delete_deployment(self, deployment_id: UUID) -> None ``` Delete deployment by id. **Args:** * `deployment_id`: The deployment id of interest. **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `delete_deployment_schedule` ```python theme={null} delete_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID) -> None ``` Delete a deployment schedule. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the ID of the deployment schedule to delete. **Raises:** * `RequestError`: if the schedules were not deleted for any reason #### `delete_flow` ```python theme={null} delete_flow(self, flow_id: 'UUID') -> None ``` Delete a flow by UUID. **Args:** * `flow_id`: ID of the flow to be deleted **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_flow_run` ```python theme={null} delete_flow_run(self, flow_run_id: 'UUID') -> None ``` Delete a flow run by UUID. **Args:** * `flow_run_id`: The flow run UUID of interest. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_flow_run_input` ```python theme={null} delete_flow_run_input(self, flow_run_id: 'UUID', key: str) -> None ``` Deletes a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key.
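Since the delete methods above raise `ObjectNotFound` on a 404, an idempotent cleanup helper can simply swallow that exception. A sketch, assuming Prefect is installed; the helper name `delete_if_exists` is illustrative:

```python
# Sketch: delete a deployment, treating "already gone" as success.
from uuid import UUID


async def delete_if_exists(deployment_id: UUID) -> bool:
    # Imported lazily so this sketch only needs Prefect installed at call time.
    from prefect.client.orchestration import get_client
    from prefect.exceptions import ObjectNotFound

    async with get_client() as client:
        try:
            await client.delete_deployment(deployment_id)
            return True   # deployment existed and was deleted
        except ObjectNotFound:
            return False  # nothing to delete; not an error for cleanup
```

The same shape works for `delete_flow`, `delete_flow_run`, and `delete_concurrency_limit_by_tag`, which document the same 404 behavior.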
#### `delete_global_concurrency_limit_by_name` ```python theme={null} delete_global_concurrency_limit_by_name(self, name: str) -> 'Response' ``` #### `delete_resource_owned_automations` ```python theme={null} delete_resource_owned_automations(self, resource_id: str) -> None ``` #### `delete_task_run` ```python theme={null} delete_task_run(self, task_run_id: UUID) -> None ``` Delete a task run by id. **Args:** * `task_run_id`: the task run ID of interest **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `delete_variable_by_name` ```python theme={null} delete_variable_by_name(self, name: str) -> None ``` Deletes a variable by name. #### `delete_work_pool` ```python theme={null} delete_work_pool(self, work_pool_name: str) -> None ``` Deletes a work pool. **Args:** * `work_pool_name`: Name of the work pool to delete. #### `delete_work_queue_by_id` ```python theme={null} delete_work_queue_by_id(self, id: UUID) -> None ``` Delete a work queue by its ID. **Args:** * `id`: the id of the work queue to delete **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `filter_flow_run_input` ```python theme={null} filter_flow_run_input(self, flow_run_id: 'UUID', key_prefix: str, limit: int, exclude_keys: 'set[str]') -> 'list[FlowRunInput]' ``` #### `find_automation` ```python theme={null} find_automation(self, id_or_name: 'str | UUID') -> 'Automation | None' ``` #### `get_most_recent_block_schema_for_block_type` ```python theme={null} get_most_recent_block_schema_for_block_type(self, block_type_id: 'UUID') -> 'BlockSchema | None' ``` Fetches the most recent block schema for a specified block type ID. **Args:** * `block_type_id`: The ID of the block type. **Raises:** * `httpx.RequestError`: If the request fails for any reason. **Returns:** * The most recent block schema or None.
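Because `find_automation` accepts either a UUID or a name and returns `None` on no match, an existence check needs no exception handling. A sketch, assuming Prefect is installed; the helper name is illustrative:

```python
# Sketch: check whether an automation exists by ID or name.
from uuid import UUID


async def automation_exists(id_or_name: "str | UUID") -> bool:
    # Imported lazily so this sketch only needs Prefect installed at call time.
    from prefect.client.orchestration import get_client

    async with get_client() as client:
        # find_automation returns None rather than raising when no
        # automation matches the given ID or name.
        return await client.find_automation(id_or_name) is not None
```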
#### `get_runs_in_work_queue` ```python theme={null} get_runs_in_work_queue(self, id: UUID, limit: int = 10, scheduled_before: Optional[datetime.datetime] = None) -> list[FlowRun] ``` Read flow runs off a work queue. **Args:** * `id`: the id of the work queue to read from * `limit`: a limit on the number of runs to return * `scheduled_before`: a timestamp; only runs scheduled before this time will be returned. Defaults to now. **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * List\[FlowRun]: a list of FlowRun objects read from the queue #### `get_scheduled_flow_runs_for_deployments` ```python theme={null} get_scheduled_flow_runs_for_deployments(self, deployment_ids: list[UUID], scheduled_before: 'datetime.datetime | None' = None, limit: int | None = None) -> list['FlowRunResponse'] ``` #### `get_scheduled_flow_runs_for_work_pool` ```python theme={null} get_scheduled_flow_runs_for_work_pool(self, work_pool_name: str, work_queue_names: list[str] | None = None, scheduled_before: datetime | None = None) -> list['WorkerFlowRunResponse'] ``` Retrieves scheduled flow runs for the provided set of work pool queues. **Args:** * `work_pool_name`: The name of the work pool that the work pool queues are associated with. * `work_queue_names`: The names of the work pool queues from which to get scheduled flow runs. * `scheduled_before`: Datetime used to filter returned flow runs. Flow runs scheduled for after the given datetime will not be returned. **Returns:** * A list of worker flow run responses containing information about the retrieved flow runs. #### `hello` ```python theme={null} hello(self) -> httpx.Response ``` Send a GET request to /hello for testing purposes. #### `increment_concurrency_slots` ```python theme={null} increment_concurrency_slots(self, names: list[str], slots: int, mode: str) -> 'Response' ``` Increment concurrency slots for the specified limits.
**Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. #### `increment_concurrency_slots_with_lease` ```python theme={null} increment_concurrency_slots_with_lease(self, names: list[str], slots: int, mode: Literal['concurrency', 'rate_limit'], lease_duration: float, holder: 'ConcurrencyLeaseHolder | None' = None) -> 'Response' ``` Increment concurrency slots for the specified limits with a lease. **Args:** * `names`: A list of limit names for which to occupy slots. * `slots`: The number of concurrency slots to occupy. * `mode`: The mode of the concurrency limits. * `lease_duration`: The duration of the lease in seconds. * `holder`: Optional holder information for tracking who holds the slots. #### `increment_v1_concurrency_slots` ```python theme={null} increment_v1_concurrency_slots(self, names: list[str], task_run_id: 'UUID') -> 'Response' ``` Increment concurrency limit slots for the specified limits. **Args:** * `names`: A list of limit names for which to increment limits. * `task_run_id`: The task run ID incrementing the limits. #### `match_work_queues` ```python theme={null} match_work_queues(self, prefixes: list[str], work_pool_name: Optional[str] = None) -> list[WorkQueue] ``` Query the Prefect API for work queues whose names start with any of the given prefixes. **Args:** * `prefixes`: a list of strings used to match work queue name prefixes * `work_pool_name`: an optional work pool name to scope the query to **Returns:** * a list of WorkQueue model representations of the work queues #### `pause_automation` ```python theme={null} pause_automation(self, automation_id: 'UUID') -> None ``` #### `pause_deployment` ```python theme={null} pause_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Pause a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string).
**Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `raise_for_api_version_mismatch` ```python theme={null} raise_for_api_version_mismatch(self) -> None ``` #### `raise_for_api_version_mismatch_once` ```python theme={null} raise_for_api_version_mismatch_once(self) -> None ``` Run API version compatibility check once per process/API/client version. #### `read_artifacts` ```python theme={null} read_artifacts(self, **kwargs: Unpack['ArtifactReadParams']) -> list['Artifact'] ``` #### `read_automation` ```python theme={null} read_automation(self, automation_id: 'UUID | str') -> 'Automation | None' ``` #### `read_automations` ```python theme={null} read_automations(self) -> list['Automation'] ``` #### `read_automations_by_name` ```python theme={null} read_automations_by_name(self, name: str) -> list['Automation'] ``` Query the Prefect API for an automation by name. Only automations matching the provided name will be returned. **Args:** * `name`: the name of the automation to query **Returns:** * a list of Automation model representations of the automations #### `read_block_document` ```python theme={null} read_block_document(self, block_document_id: 'UUID', include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified ID. **Args:** * `block_document_id`: the block document id * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. 
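The `include_secrets` flag documented for `read_block_document` can be used to fetch block configuration without pulling secret values over the wire. A sketch, assuming Prefect is installed; the helper name is illustrative, and the assumption that the returned `BlockDocument` exposes its configuration on a `.data` attribute is mine:

```python
# Sketch: read a block document's non-secret configuration.
from uuid import UUID


async def read_block_config(block_document_id: UUID):
    # Imported lazily so this sketch only needs Prefect installed at call time.
    from prefect.client.orchestration import get_client

    async with get_client() as client:
        # include_secrets=False asks the API to omit SecretStr / SecretBytes
        # values; Block logic that depends on those values may then not work.
        document = await client.read_block_document(
            block_document_id, include_secrets=False
        )
        return document.data  # .data as the config payload is an assumption
```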
#### `read_block_document_by_name` ```python theme={null} read_block_document_by_name(self, name: str, block_type_slug: str, include_secrets: bool = True) -> 'BlockDocument' ``` Read the block document with the specified name that corresponds to a specific block type name. **Args:** * `name`: The block document name. * `block_type_slug`: The block type slug. * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Raises:** * `httpx.RequestError`: if the block document was not found for any reason **Returns:** * A block document or None. #### `read_block_documents` ```python theme={null} read_block_documents(self, block_schema_type: str | None = None, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Read block documents **Args:** * `block_schema_type`: an optional block schema type * `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values on the Block, corresponding to Pydantic's `SecretStr` and `SecretBytes` fields. These fields are automatically obfuscated by Pydantic, but users can additionally choose not to receive their values from the API. Note that any business logic on the Block may not work if this is `False`. **Returns:** * A list of block documents #### `read_block_documents_by_type` ```python theme={null} read_block_documents_by_type(self, block_type_slug: str, offset: int | None = None, limit: int | None = None, include_secrets: bool = True) -> 'list[BlockDocument]' ``` Retrieve block documents by block type slug. **Args:** * `block_type_slug`: The block type slug. 
* `offset`: an offset * `limit`: the number of blocks to return * `include_secrets`: whether to include secret values **Returns:** * A list of block documents #### `read_block_schema_by_checksum` ```python theme={null} read_block_schema_by_checksum(self, checksum: str, version: str | None = None) -> 'BlockSchema' ``` Look up a block schema by its checksum. #### `read_block_schemas` ```python theme={null} read_block_schemas(self) -> 'list[BlockSchema]' ``` Read all block schemas. **Raises:** * `httpx.RequestError`: if a valid block schema was not found **Returns:** * A list of BlockSchemas. #### `read_block_type_by_slug` ```python theme={null} read_block_type_by_slug(self, slug: str) -> 'BlockType' ``` Read a block type by its slug. #### `read_block_types` ```python theme={null} read_block_types(self) -> 'list[BlockType]' ``` Read all block types. **Raises:** * `httpx.RequestError`: if the block types were not found **Returns:** * List of BlockTypes. #### `read_concurrency_limit_by_tag` ```python theme={null} read_concurrency_limit_by_tag(self, tag: str) -> 'ConcurrencyLimit' ``` Read the concurrency limit set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: if the concurrency limit could not be read for any reason **Returns:** * the concurrency limit set on a specific tag #### `read_concurrency_limits` ```python theme={null} read_concurrency_limits(self, limit: int, offset: int) -> list['ConcurrencyLimit'] ``` Lists concurrency limits set on task run tags. **Args:** * `limit`: the maximum number of concurrency limits returned * `offset`: the concurrency limit query offset **Returns:** * a list of concurrency limits #### `read_deployment` ```python theme={null} read_deployment(self, deployment_id: Union[UUID, str]) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by id.
**Args:** * `deployment_id`: the deployment ID of interest **Returns:** * a Deployment model representation of the deployment #### `read_deployment_by_name` ```python theme={null} read_deployment_by_name(self, name: str) -> 'DeploymentResponse' ``` Query the Prefect API for a deployment by name. **Args:** * `name`: A deployed flow's name, in the form `<flow_name>/<deployment_name>` **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails **Returns:** * a Deployment model representation of the deployment #### `read_deployment_schedules` ```python theme={null} read_deployment_schedules(self, deployment_id: UUID) -> list['DeploymentSchedule'] ``` Query the Prefect API for a deployment's schedules. **Args:** * `deployment_id`: the deployment ID **Returns:** * a list of DeploymentSchedule model representations of the deployment schedules #### `read_deployments` ```python theme={null} read_deployments(self) -> list['DeploymentResponse'] ``` Query the Prefect API for deployments. Only deployments matching all the provided criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `limit`: maximum number of deployments to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the deployment query. **Returns:** * a list of Deployment model representations of the deployments #### `read_events` ```python theme={null} read_events(self, filter: 'EventFilter | None' = None, limit: int = 100) -> EventPage ``` Query historical events from the API.
**Args:** * `filter`: optional filter criteria to narrow down events * `limit`: maximum number of events to return per page (default 100) **Returns:** * EventPage containing events, total count, and next page link #### `read_events_page` ```python theme={null} read_events_page(self, next_page_url: str) -> EventPage ``` Retrieve the next page of events using a next\_page URL. **Args:** * `next_page_url`: the next\_page URL from a previous EventPage response **Returns:** * EventPage containing the next page of events #### `read_flow` ```python theme={null} read_flow(self, flow_id: 'UUID') -> 'Flow' ``` Query the Prefect API for a flow by id. **Args:** * `flow_id`: the flow ID of interest **Returns:** * a Flow model representation of the flow #### `read_flow_by_name` ```python theme={null} read_flow_by_name(self, flow_name: str) -> 'Flow' ``` Query the Prefect API for a flow by name. **Args:** * `flow_name`: the name of a flow **Returns:** * a fully hydrated Flow model #### `read_flow_run` ```python theme={null} read_flow_run(self, flow_run_id: 'UUID') -> 'FlowRun' ``` Query the Prefect API for a flow run by id. **Args:** * `flow_run_id`: the flow run ID of interest **Returns:** * a Flow Run model representation of the flow run #### `read_flow_run_input` ```python theme={null} read_flow_run_input(self, flow_run_id: 'UUID', key: str) -> str ``` Reads a flow run input. **Args:** * `flow_run_id`: The flow run id. * `key`: The input key. #### `read_flow_run_states` ```python theme={null} read_flow_run_states(self, flow_run_id: 'UUID') -> 'list[State]' ``` Query for the states of a flow run. **Args:** * `flow_run_id`: the id of the flow run **Returns:** * a list of State model representations of the flow run states #### `read_flow_runs` ```python theme={null} read_flow_runs(self) -> 'list[FlowRun]' ``` Query the Prefect API for flow runs. Only flow runs matching all criteria will be returned.
**Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flow runs * `limit`: maximum number of flow runs to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the flow run query. **Returns:** * a list of Flow Run model representations of the flow runs #### `read_flows` ```python theme={null} read_flows(self) -> list['Flow'] ``` Query the Prefect API for flows. Only flows matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `work_pool_filter`: filter criteria for work pools * `work_queue_filter`: filter criteria for work pool queues * `sort`: sort criteria for the flows * `limit`: maximum number of flows to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the flow query. 
**Returns:** * a list of Flow model representations of the flows #### `read_global_concurrency_limit_by_name` ```python theme={null} read_global_concurrency_limit_by_name(self, name: str) -> 'GlobalConcurrencyLimitResponse' ``` #### `read_global_concurrency_limits` ```python theme={null} read_global_concurrency_limits(self, limit: int = 10, offset: int = 0) -> list['GlobalConcurrencyLimitResponse'] ``` #### `read_latest_artifacts` ```python theme={null} read_latest_artifacts(self, **kwargs: Unpack['ArtifactCollectionReadParams']) -> list['ArtifactCollection'] ``` #### `read_logs` ```python theme={null} read_logs(self, log_filter: 'LogFilter | None' = None, limit: int | None = None, offset: int | None = None, sort: 'LogSort | None' = None) -> list['Log'] ``` Read flow and task run logs. #### `read_resource_related_automations` ```python theme={null} read_resource_related_automations(self, resource_id: str) -> list['Automation'] ``` #### `read_task_run` ```python theme={null} read_task_run(self, task_run_id: UUID) -> TaskRun ``` Query the Prefect API for a task run by id. **Args:** * `task_run_id`: the task run ID of interest **Returns:** * a Task Run model representation of the task run #### `read_task_run_states` ```python theme={null} read_task_run_states(self, task_run_id: UUID) -> list[prefect.states.State] ``` Query for the states of a task run **Args:** * `task_run_id`: the id of the task run **Returns:** * a list of State model representations of the task run states #### `read_task_runs` ```python theme={null} read_task_runs(self) -> list[TaskRun] ``` Query the Prefect API for task runs. Only task runs matching all criteria will be returned. **Args:** * `flow_filter`: filter criteria for flows * `flow_run_filter`: filter criteria for flow runs * `task_run_filter`: filter criteria for task runs * `deployment_filter`: filter criteria for deployments * `sort`: sort criteria for the task runs * `limit`: maximum number of task runs to return. 
When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the task run query. **Returns:** * a list of Task Run model representations of the task runs #### `read_variable_by_name` ```python theme={null} read_variable_by_name(self, name: str) -> 'Variable | None' ``` Reads a variable by name. Returns None if no variable is found. #### `read_variables` ```python theme={null} read_variables(self, limit: int | None = None) -> list['Variable'] ``` Reads all variables. #### `read_work_pool` ```python theme={null} read_work_pool(self, work_pool_name: str) -> 'WorkPool' ``` Reads information for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get information. **Returns:** * Information about the requested work pool. #### `read_work_pool_concurrency_status` ```python theme={null} read_work_pool_concurrency_status(self, work_pool_name: str, page: int = 1, limit: int | None = None, flow_run_limit: int = 10) -> 'WorkPoolConcurrencyStatus' ``` Reads concurrency status for a work pool. **Args:** * `work_pool_name`: The name of the work pool. * `page`: Page number (1-indexed). * `limit`: Max queues per page (server default if None). * `flow_run_limit`: Max flow runs per queue (0-200). **Returns:** * Paginated concurrency status with per-queue breakdown. #### `read_work_pools` ```python theme={null} read_work_pools(self, limit: int | None = None, offset: int = 0, work_pool_filter: 'WorkPoolFilter | None' = None) -> list['WorkPool'] ``` Reads work pools. **Args:** * `limit`: Limit for the work pool query. * `offset`: Offset for the work pool query. * `work_pool_filter`: Criteria by which to filter work pools. **Returns:** * A list of work pools. #### `read_work_queue` ```python theme={null} read_work_queue(self, id: UUID) -> WorkQueue ``` Read a work queue.
**Args:** * `id`: the id of the work queue to load **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * an instantiated WorkQueue object #### `read_work_queue_by_name` ```python theme={null} read_work_queue_by_name(self, name: str, work_pool_name: Optional[str] = None) -> WorkQueue ``` Read a work queue by name. **Args:** * `name`: a unique name for the work queue * `work_pool_name`: the name of the work pool the queue belongs to. **Raises:** * `prefect.exceptions.ObjectNotFound`: if no work queue is found * `httpx.HTTPStatusError`: other status errors **Returns:** * a work queue API object #### `read_work_queue_concurrency_status` ```python theme={null} read_work_queue_concurrency_status(self, id: UUID, page: int = 1, limit: Optional[int] = None) -> 'WorkQueueConcurrencyStatus' ``` Read concurrency status for a work queue. **Args:** * `id`: the id of the work queue * `page`: Page number (1-indexed). * `limit`: Max flow runs per page (server default if None). **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * Paginated WorkQueueConcurrencyStatus with flow run summaries #### `read_work_queue_status` ```python theme={null} read_work_queue_status(self, id: UUID) -> WorkQueueStatusDetail ``` Read a work queue status. **Args:** * `id`: the id of the work queue to load **Raises:** * `prefect.exceptions.ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails **Returns:** * an instantiated WorkQueueStatus object #### `read_work_queues` ```python theme={null} read_work_queues(self, work_pool_name: Optional[str] = None, work_queue_filter: Optional[WorkQueueFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> list[WorkQueue] ``` Retrieves queues for a work pool. **Args:** * `work_pool_name`: Name of the work pool for which to get queues. 
* `work_queue_filter`: Criteria by which to filter queues. * `limit`: maximum number of work queues to return. When `None`, the server applies `PREFECT_API_DEFAULT_LIMIT` (200 by default). * `offset`: an offset for the work queue query. **Returns:** * List of queues for the specified work pool. #### `read_worker_metadata` ```python theme={null} read_worker_metadata(self) -> dict[str, Any] ``` Reads worker metadata stored in Prefect collection registry. #### `read_workers_for_work_pool` ```python theme={null} read_workers_for_work_pool(self, work_pool_name: str, worker_filter: 'WorkerFilter | None' = None, offset: int | None = None, limit: int | None = None) -> list['Worker'] ``` Reads workers for a given work pool. **Args:** * `work_pool_name`: The name of the work pool for which to get member workers. * `worker_filter`: Criteria by which to filter workers. * `limit`: Limit for the worker query. * `offset`: Offset for the worker query. #### `release_concurrency_slots` ```python theme={null} release_concurrency_slots(self, names: list[str], slots: int, occupancy_seconds: float) -> 'Response' ``` Release concurrency slots for the specified limits. **Args:** * `names`: A list of limit names for which to release slots. * `slots`: The number of concurrency slots to release. * `occupancy_seconds`: The duration in seconds that the slots were occupied. **Returns:** * `Response`: The HTTP response from the server. #### `release_concurrency_slots_with_lease` ```python theme={null} release_concurrency_slots_with_lease(self, lease_id: 'UUID') -> 'Response' ``` Release concurrency slots for the specified lease. **Args:** * `lease_id`: The ID of the lease corresponding to the concurrency limits to release. #### `renew_concurrency_lease` ```python theme={null} renew_concurrency_lease(self, lease_id: 'UUID', lease_duration: float) -> 'Response' ``` Renew a concurrency lease. **Args:** * `lease_id`: The ID of the lease to renew. * `lease_duration`: The new lease duration in seconds.
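The lease-based methods above follow an acquire/renew/release lifecycle: hold a lease while the slots are in use, renew it periodically, and always release it when done. A minimal sketch of that pattern, using a hypothetical in-memory stand-in for the client (the real methods issue HTTP requests to the Prefect API):

```python
import uuid


class FakeLeaseClient:
    """Hypothetical in-memory stand-in for the client's lease methods."""

    def __init__(self) -> None:
        self.leases: dict[uuid.UUID, float] = {}  # lease_id -> duration (seconds)

    def acquire(self, duration: float) -> uuid.UUID:
        # Stand-in for acquiring slots with a lease; returns the lease ID.
        lease_id = uuid.uuid4()
        self.leases[lease_id] = duration
        return lease_id

    def renew_concurrency_lease(self, lease_id: uuid.UUID, lease_duration: float) -> None:
        # Mirrors renew_concurrency_lease: extend an active lease.
        if lease_id not in self.leases:
            raise KeyError("lease expired or already released")
        self.leases[lease_id] = lease_duration

    def release_concurrency_slots_with_lease(self, lease_id: uuid.UUID) -> None:
        # Mirrors release_concurrency_slots_with_lease: free the held slots.
        self.leases.pop(lease_id, None)


client = FakeLeaseClient()
lease_id = client.acquire(duration=30.0)
try:
    # Periodic heartbeat while work is in progress.
    client.renew_concurrency_lease(lease_id, lease_duration=60.0)
finally:
    # Release in a finally block so slots are freed even on failure.
    client.release_concurrency_slots_with_lease(lease_id)

print(len(client.leases))  # 0: no leases remain held
```

The try/finally shape is the important part: a lease that is never released pins its slots until it expires server-side.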
#### `reset_concurrency_limit_by_tag` ```python theme={null} reset_concurrency_limit_by_tag(self, tag: str, slot_override: list['UUID | str'] | None = None) -> None ``` Resets the concurrency limit slots set on a specific tag. **Args:** * `tag`: a tag the concurrency limit is applied to * `slot_override`: a list of task run IDs that are currently using a concurrency slot. Check that any task run IDs included in `slot_override` are currently running; otherwise those concurrency slots will never be released. **Raises:** * `ObjectNotFound`: If request returns 404 * `httpx.RequestError`: If request fails #### `resume_automation` ```python theme={null} resume_automation(self, automation_id: 'UUID') -> None ``` #### `resume_deployment` ```python theme={null} resume_deployment(self, deployment_id: Union[UUID, str]) -> None ``` Resume (unpause) a deployment by ID. **Args:** * `deployment_id`: The deployment ID of interest (can be a UUID or a string). **Raises:** * `ObjectNotFound`: If request returns 404 * `RequestError`: If request fails #### `resume_flow_run` ```python theme={null} resume_flow_run(self, flow_run_id: 'UUID', run_input: dict[str, Any] | None = None) -> 'OrchestrationResult[Any]' ``` Resumes a paused flow run. **Args:** * `flow_run_id`: the flow run ID of interest * `run_input`: the input to resume the flow run with **Returns:** * an OrchestrationResult model representation of state orchestration output #### `send_worker_heartbeat` ```python theme={null} send_worker_heartbeat(self, work_pool_name: str, worker_name: str, heartbeat_interval_seconds: float | None = None, get_worker_id: bool = False, worker_metadata: 'WorkerMetadata | None' = None) -> 'UUID | None' ``` Sends a worker heartbeat for a given work pool. **Args:** * `work_pool_name`: The name of the work pool to heartbeat against. * `worker_name`: The name of the worker sending the heartbeat. * `get_worker_id`: Whether to return the worker ID.
Note: will return `None` if the connected server does not support returning worker IDs, even if `get_worker_id` is `True`. * `worker_metadata`: Metadata about the worker to send to the server. #### `set_deployment_paused_state` ```python theme={null} set_deployment_paused_state(self, deployment_id: UUID, paused: bool) -> None ``` DEPRECATED: Use pause\_deployment or resume\_deployment instead. Set the paused state of a deployment. **Args:** * `deployment_id`: the deployment ID to update * `paused`: whether the deployment should be paused #### `set_flow_run_name` ```python theme={null} set_flow_run_name(self, flow_run_id: 'UUID', name: str) -> httpx.Response ``` #### `set_flow_run_state` ```python theme={null} set_flow_run_state(self, flow_run_id: 'UUID | str', state: 'State[T]', force: bool = False) -> 'OrchestrationResult[T]' ``` Set the state of a flow run. **Args:** * `flow_run_id`: the id of the flow run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `set_task_run_name` ```python theme={null} set_task_run_name(self, task_run_id: UUID, name: str) -> httpx.Response ``` #### `set_task_run_state` ```python theme={null} set_task_run_state(self, task_run_id: UUID, state: prefect.states.State[Any], force: bool = False) -> OrchestrationResult[Any] ``` Set the state of a task run.
**Args:** * `task_run_id`: the id of the task run * `state`: the state to set * `force`: if True, disregard orchestration logic when setting the state, forcing the Prefect API to accept the state **Returns:** * an OrchestrationResult model representation of state orchestration output #### `update_artifact` ```python theme={null} update_artifact(self, artifact_id: 'UUID', artifact: 'ArtifactUpdate') -> None ``` #### `update_automation` ```python theme={null} update_automation(self, automation_id: 'UUID', automation: 'AutomationCore') -> None ``` Updates an automation in Prefect Cloud. #### `update_block_document` ```python theme={null} update_block_document(self, block_document_id: 'UUID', block_document: 'BlockDocumentUpdate') -> None ``` Update a block document in the Prefect API. #### `update_block_type` ```python theme={null} update_block_type(self, block_type_id: 'UUID', block_type: 'BlockTypeUpdate') -> None ``` Update a block type in the Prefect API. #### `update_deployment` ```python theme={null} update_deployment(self, deployment_id: UUID, deployment: 'DeploymentUpdate') -> None ``` #### `update_deployment_schedule` ```python theme={null} update_deployment_schedule(self, deployment_id: UUID, schedule_id: UUID, active: bool | None = None, schedule: 'SCHEDULE_TYPES | None' = None, max_scheduled_runs: int | None = None, parameters: dict[str, Any] | None = None, slug: str | None = None) -> None ``` Update a deployment schedule by ID. **Args:** * `deployment_id`: the deployment ID * `schedule_id`: the deployment schedule ID of interest * `active`: whether or not the schedule should be active * `schedule`: the cron, rrule, or interval schedule this deployment schedule should use * `max_scheduled_runs`: The maximum number of scheduled runs for the schedule. * `parameters`: Parameter overrides for the schedule. * `slug`: A unique identifier for the schedule.
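The `force` flag documented above for `set_flow_run_state` and `set_task_run_state` bypasses orchestration rules entirely. A toy model of that accept/reject behavior (the real orchestration logic lives server-side; the state names and transition table here are simplified assumptions):

```python
# Allowed transitions in this toy model; the real server enforces richer rules.
ALLOWED_TRANSITIONS = {
    ("PENDING", "RUNNING"),
    ("RUNNING", "COMPLETED"),
    ("RUNNING", "FAILED"),
}


def set_state(current: str, proposed: str, force: bool = False) -> str:
    """Return 'ACCEPT' or 'REJECT' for a proposed state transition."""
    if force:
        # force=True disregards orchestration logic: the API accepts the state.
        return "ACCEPT"
    return "ACCEPT" if (current, proposed) in ALLOWED_TRANSITIONS else "REJECT"


print(set_state("COMPLETED", "RUNNING"))              # REJECT: not an allowed transition
print(set_state("COMPLETED", "RUNNING", force=True))  # ACCEPT: forced past the rules
```

In the real client the outcome arrives as an `OrchestrationResult` rather than a bare string, but the decision shape is the same: without `force`, the server may refuse the proposed state.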
#### `update_flow_run` ```python theme={null} update_flow_run(self, flow_run_id: 'UUID', flow_version: str | None = None, parameters: dict[str, Any] | None = None, name: str | None = None, tags: 'Iterable[str] | None' = None, empirical_policy: 'FlowRunPolicy | None' = None, infrastructure_pid: str | None = None, job_variables: dict[str, Any] | None = None) -> httpx.Response ``` Update a flow run's details. **Args:** * `flow_run_id`: The identifier for the flow run to update. * `flow_version`: A new version string for the flow run. * `parameters`: A dictionary of parameter values for the flow run. This will not be merged with any existing parameters. * `name`: A new name for the flow run. * `empirical_policy`: A new flow run orchestration policy. This will not be merged with any existing policy. * `tags`: An iterable of new tags for the flow run. These will not be merged with any existing tags. * `infrastructure_pid`: The id of the flow run as returned by an infrastructure block. **Returns:** * an `httpx.Response` object from the PATCH request #### `update_flow_run_labels` ```python theme={null} update_flow_run_labels(self, flow_run_id: 'UUID', labels: 'KeyValueLabelsField') -> None ``` Updates the labels of a flow run. #### `update_global_concurrency_limit` ```python theme={null} update_global_concurrency_limit(self, name: str, concurrency_limit: 'GlobalConcurrencyLimitUpdate') -> 'Response' ``` #### `update_variable` ```python theme={null} update_variable(self, variable: 'VariableUpdate') -> None ``` Updates a variable with the provided configuration. **Args:** * `variable`: Desired configuration for the updated variable. #### `update_work_pool` ```python theme={null} update_work_pool(self, work_pool_name: str, work_pool: 'WorkPoolUpdate') -> None ``` Updates a work pool. **Args:** * `work_pool_name`: Name of the work pool to update. * `work_pool`: Fields to update in the work pool.
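As noted above, `update_flow_run` replaces `parameters`, `tags`, and `empirical_policy` wholesale rather than merging them with existing values. A sketch of the merge-yourself pattern a caller would follow (the data here is illustrative; with the real client you would read the flow run first, then send the combined values):

```python
# Existing values as they might be read back from the API (illustrative data).
existing_parameters = {"x": 1, "y": 2}
existing_tags = ["nightly"]

# Values the caller wants to add or override.
new_parameters = {"y": 20, "z": 3}
new_tags = ["backfill"]

# Because update_flow_run does NOT merge, combine client-side first.
merged_parameters = {**existing_parameters, **new_parameters}  # new values win
merged_tags = sorted(set(existing_tags) | set(new_tags))       # union of tags

# Then send the combined values, e.g. (hypothetical call site):
# await client.update_flow_run(flow_run_id, parameters=merged_parameters, tags=merged_tags)
print(merged_parameters)  # {'x': 1, 'y': 20, 'z': 3}
print(merged_tags)        # ['backfill', 'nightly']
```

Sending only the new values would silently drop `x` and the `nightly` tag, which is the pitfall the docstring is warning about.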
#### `update_work_queue` ```python theme={null} update_work_queue(self, id: UUID, **kwargs: Any) -> None ``` Update properties of a work queue. **Args:** * `id`: the ID of the work queue to update * `**kwargs`: the fields to update **Raises:** * `ValueError`: if no kwargs are provided * `prefect.exceptions.ObjectNotFound`: if request returns 404 * `httpx.RequestError`: if the request fails #### `upsert_global_concurrency_limit_by_name` ```python theme={null} upsert_global_concurrency_limit_by_name(self, name: str, limit: int, slot_decay_per_second: float | None = None) -> None ``` Creates a global concurrency limit with the given name and limit if one does not already exist. If one already exists with that name, its limit and/or slot\_decay\_per\_second are updated if they differ. Note: this is not done atomically. # base Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-orchestration-base # `prefect.client.orchestration.base` ## Classes ### `BaseClient` **Methods:** #### `request` ```python theme={null} request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` ### `BaseAsyncClient` **Methods:** #### `request` ```python theme={null} request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` # routes Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-orchestration-routes # `prefect.client.orchestration.routes` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-__init__ # `prefect.client.schemas` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-actions # `prefect.client.schemas.actions` ##
Classes ### `StateCreate` Data used by the Prefect REST API to create a new state. ### `FlowCreate` Data used by the Prefect REST API to create a flow. ### `FlowUpdate` Data used by the Prefect REST API to update a flow. ### `DeploymentScheduleCreate` **Methods:** #### `from_schedule` ```python theme={null} from_schedule(cls, schedule: Schedule) -> 'DeploymentScheduleCreate' ``` #### `validate_active` ```python theme={null} validate_active(cls, v: Any, handler: Callable[[Any], Any]) -> bool ``` #### `validate_max_scheduled_runs` ```python theme={null} validate_max_scheduled_runs(cls, v: Optional[int]) -> Optional[int] ``` ### `DeploymentScheduleUpdate` **Methods:** #### `validate_max_scheduled_runs` ```python theme={null} validate_max_scheduled_runs(cls, v: Optional[int]) -> Optional[int] ``` ### `DeploymentCreate` Data used by the Prefect REST API to create a deployment. **Methods:** #### `check_valid_configuration` ```python theme={null} check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the specified schema. #### `convert_to_strings` ```python theme={null} convert_to_strings(cls, values: Optional[Union[str, list[str]]]) -> Union[str, list[str]] ``` #### `remove_old_fields` ```python theme={null} remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `DeploymentUpdate` Data used by the Prefect REST API to update a deployment. **Methods:** #### `check_valid_configuration` ```python theme={null} check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the specified schema. 
#### `remove_old_fields` ```python theme={null} remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `DeploymentBranch` **Methods:** #### `validate_branch_length` ```python theme={null} validate_branch_length(cls, v: str) -> str ``` ### `FlowRunUpdate` Data used by the Prefect REST API to update a flow run. ### `TaskRunCreate` Data used by the Prefect REST API to create a task run ### `TaskRunUpdate` Data used by the Prefect REST API to update a task run ### `FlowRunCreate` Data used by the Prefect REST API to create a flow run. ### `DeploymentFlowRunCreate` Data used by the Prefect REST API to create a flow run from a deployment. **Methods:** #### `convert_parameters_to_plain_data` ```python theme={null} convert_parameters_to_plain_data(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `serialize_parameters` ```python theme={null} serialize_parameters(self, value: dict[str, Any]) -> dict[str, Any] ``` Serialize datetime types as ISO strings instead of timestamps. PrefectBaseModel has ser\_json\_timedelta='float' to serialize timedeltas as floats, but this also causes datetime/date/time to serialize as timestamps. This serializer overrides that behavior for datetime types while preserving float serialization for timedeltas. ### `SavedSearchCreate` Data used by the Prefect REST API to create a saved search. ### `ConcurrencyLimitCreate` Data used by the Prefect REST API to create a concurrency limit. ### `ConcurrencyLimitV2Create` Data used by the Prefect REST API to create a v2 concurrency limit. ### `ConcurrencyLimitV2Update` Data used by the Prefect REST API to update a v2 concurrency limit. ### `BlockTypeCreate` Data used by the Prefect REST API to create a block type. ### `BlockTypeUpdate` Data used by the Prefect REST API to update a block type. **Methods:** #### `updatable_fields` ```python theme={null} updatable_fields(cls) -> set[str] ``` ### `BlockSchemaCreate` Data used by the Prefect REST API to create a block schema. 
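The serialization policy described above for `DeploymentFlowRunCreate.serialize_parameters` (ISO strings for datetime/date/time, floats for timedeltas) can be sketched with a standalone helper. This is an illustrative reimplementation of the policy, not the model's actual serializer:

```python
from datetime import date, datetime, time, timedelta


def serialize_value(value: object) -> object:
    """Apply the policy described above: ISO strings for datetime types,
    seconds-as-float for timedeltas, everything else passed through."""
    if isinstance(value, (datetime, date, time)):
        return value.isoformat()  # e.g. '2024-01-15' instead of a timestamp
    if isinstance(value, timedelta):
        return value.total_seconds()  # float, matching ser_json_timedelta='float'
    return value


params = {
    "run_date": date(2024, 1, 15),
    "window": timedelta(minutes=90),
    "retries": 3,
}
serialized = {key: serialize_value(val) for key, val in params.items()}
print(serialized)  # {'run_date': '2024-01-15', 'window': 5400.0, 'retries': 3}
```

The point of the override is visible in the first two entries: the date survives as a human-readable ISO string while the timedelta still serializes as a float.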
### `BlockDocumentCreate` Data used by the Prefect REST API to create a block document. **Methods:** #### `validate_name_is_present_if_not_anonymous` ```python theme={null} validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `BlockDocumentUpdate` Data used by the Prefect REST API to update a block document. ### `BlockDocumentReferenceCreate` Data used to create block document reference. ### `LogCreate` Data used by the Prefect REST API to create a log. **Methods:** #### `model_dump` ```python theme={null} model_dump(self, *args: Any, **kwargs: Any) -> dict[str, Any] ``` The worker\_id field is only included in logs sent to Prefect Cloud. If it's unset, we should not include it in the log payload. ### `WorkPoolCreate` Data used by the Prefect REST API to create a work pool. ### `WorkPoolUpdate` Data used by the Prefect REST API to update a work pool. ### `WorkQueueCreate` Data used by the Prefect REST API to create a work queue. ### `WorkQueueUpdate` Data used by the Prefect REST API to update a work queue. ### `ArtifactCreate` Data used by the Prefect REST API to create an artifact. ### `ArtifactUpdate` Data used by the Prefect REST API to update an artifact. ### `VariableCreate` Data used by the Prefect REST API to create a Variable. ### `VariableUpdate` Data used by the Prefect REST API to update a Variable. ### `GlobalConcurrencyLimitCreate` Data used by the Prefect REST API to create a global concurrency limit. ### `GlobalConcurrencyLimitUpdate` Data used by the Prefect REST API to update a global concurrency limit. # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-events # `prefect.client.schemas.events` ## Classes ### `EventPage` a single page of events returned from the API **Methods:** #### `get_next_page` ```python theme={null} get_next_page(self, client: 'PrefectClient') -> 'EventPage | None' ``` fetch the next page of events. 
**Args:** * `client`: the PrefectClient instance to use for fetching **Returns:** * the next EventPage, or None if there are no more pages #### `get_next_page_sync` ```python theme={null} get_next_page_sync(self, client: 'SyncPrefectClient') -> 'EventPage | None' ``` fetch the next page of events (sync version). **Args:** * `client`: the SyncPrefectClient instance to use for fetching **Returns:** * the next EventPage, or None if there are no more pages #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # filters Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-filters # `prefect.client.schemas.filters` Schemas that define Prefect REST API filtering operations. ## Classes ### `Operator` Operators for combining filter criteria. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `OperatorMixin` Base model for Prefect filters that combines criteria with a user-provided operator ### `FlowFilterId` Filter by `Flow.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowFilterName` Filter by `Flow.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `FlowFilterTags` Filter by `Flow.tags`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowFilter` Filter for flows. Only flows matching all criteria will be returned. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterId` Filter by FlowRun.id. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterName` Filter by `FlowRun.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterTags` Filter by `FlowRun.tags`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `FlowRunFilterDeploymentId` Filter by `FlowRun.deployment_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterWorkQueueName` Filter by `FlowRun.work_queue_name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterStateType` Filter by `FlowRun.state_type`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterStateName` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterState` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterFlowVersion` Filter by `FlowRun.flow_version`. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterStartTime` Filter by `FlowRun.start_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterEndTime` Filter by `FlowRun.end_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterExpectedStartTime` Filter by `FlowRun.expected_start_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterNextScheduledStartTime` Filter by `FlowRun.next_scheduled_start_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `FlowRunFilterParentFlowRunId` Filter for subflows of the given flow runs **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterParentTaskRunId` Filter by `FlowRun.parent_task_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterIdempotencyKey` Filter by FlowRun.idempotency\_key. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilterCreatedBy` Filter by `FlowRun.created_by`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunFilter` Filter flow runs. Only flow runs matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterFlowRunId` Filter by `TaskRun.flow_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterId` Filter by `TaskRun.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterName` Filter by `TaskRun.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterTags` Filter by `TaskRun.tags`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterStateType` Filter by `TaskRun.state_type`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
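The `reset_fields` method repeated across these models returns a new instance with the fields named in `_reset_fields` restored to their defaults, leaving the original untouched. A dataclass-based sketch of that pattern (a hypothetical model with made-up fields; the real models are Pydantic-based):

```python
from dataclasses import dataclass, fields, replace


@dataclass(frozen=True)
class Record:
    name: str
    id: int = 0
    created: str = "unset"

    # Fields restored to their defaults by reset_fields (illustrative set).
    _reset_fields = {"id", "created"}

    def reset_fields(self) -> "Record":
        defaults = {
            f.name: f.default for f in fields(self) if f.name in self._reset_fields
        }
        return replace(self, **defaults)  # new instance; self is unchanged


record = Record(name="example", id=42, created="2024-01-01")
fresh = record.reset_fields()
print((fresh.name, fresh.id, fresh.created))  # ('example', 0, 'unset')
print(record.id)  # 42: the original instance is untouched
```

Returning a fresh copy rather than mutating in place is what makes this safe to use on models that are shared or cached.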
### `TaskRunFilterStateName` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterState` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterSubFlowRuns` Filter by `TaskRun.subflow_run`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterStartTime` Filter by `TaskRun.start_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilterEndTime` Filter by `TaskRun.end_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunFilter` Filter task runs. 
Only task runs matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterId` Filter by `Deployment.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterName` Filter by `Deployment.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterWorkQueueName` Filter by `Deployment.work_queue_name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilterTags` Filter by `Deployment.tags`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `DeploymentFilterConcurrencyLimit` DEPRECATED: Prefer `Deployment.concurrency_limit_id` over `Deployment.concurrency_limit`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentFilter` Filter for deployments. Only deployments matching all criteria will be returned. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterName` Filter by `Log.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterLevel` Filter by `Log.level`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterTimestamp` Filter by `Log.timestamp`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `LogFilterFlowRunId` Filter by `Log.flow_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterTaskRunId` Filter by `Log.task_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilterTextSearch` Filter by text search across log content. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `LogFilter` Filter logs. Only logs matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FilterSet` A collection of filters for common objects **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilterName` Filter by `BlockType.name` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilterSlug` Filter by `BlockType.slug` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilter` Filter BlockTypes **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterBlockTypeId` Filter by `BlockSchema.block_type_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterId` Filter by BlockSchema.id **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `BlockSchemaFilterCapabilities` Filter by `BlockSchema.capabilities`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilterVersion` Filter by `BlockSchema.version`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockSchemaFilter` Filter BlockSchemas. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterIsAnonymous` Filter by `BlockDocument.is_anonymous`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterBlockTypeId` Filter by `BlockDocument.block_type_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
### `BlockDocumentFilterId` Filter by `BlockDocument.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilterName` Filter by `BlockDocument.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockDocumentFilter` Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueFilterId` Filter by `WorkQueue.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueFilterName` Filter by `WorkQueue.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `WorkQueueFilter` Filter work queues. Only work queues matching all criteria will be returned. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilterId` Filter by `WorkPool.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilterName` Filter by `WorkPool.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilterType` Filter by `WorkPool.type`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolFilter` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilterWorkPoolId` Filter by `Worker.work_pool_id`.
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilterLastHeartbeatTime` Filter by `Worker.last_heartbeat_time`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilterStatus` Filter by `Worker.status`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFilter` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterId` Filter by `Artifact.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterKey` Filter by `Artifact.key`. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterFlowRunId` Filter by `Artifact.flow_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterTaskRunId` Filter by `Artifact.task_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilterType` Filter by `Artifact.type`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactFilter` Filter artifacts. Only artifacts matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterLatestId` Filter by `ArtifactCollection.latest_id`. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterKey` Filter by `ArtifactCollection.key`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterFlowRunId` Filter by `ArtifactCollection.flow_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterTaskRunId` Filter by `ArtifactCollection.task_run_id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ArtifactCollectionFilterType` Filter by `ArtifactCollection.type`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `ArtifactCollectionFilter` Filter artifact collections. Only artifact collections matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilterId` Filter by `Variable.id`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilterName` Filter by `Variable.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilterTags` Filter by `Variable.tags`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `VariableFilter` Filter variables. Only variables matching all criteria will be returned **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. # objects Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-objects # `prefect.client.schemas.objects` ## Functions ### `data_discriminator` ```python theme={null} data_discriminator(x: Any) -> str ``` ## Classes ### `RunType` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StateType` Enumeration of state types. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `WorkPoolStatus` Enumeration of work pool statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `display_name` ```python theme={null} display_name(self) -> str ``` ### `WorkerStatus` Enumeration of worker statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentStatus` Enumeration of deployment statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `WorkQueueStatus` Enumeration of work queue statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ConcurrencyLimitStrategy` Enumeration of concurrency limit strategies. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ConcurrencyOptions` Class for storing the concurrency config in database. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLimitConfig` Class for storing the concurrency limit config in database. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLeaseHolder` Model for validating concurrency lease holder information. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateDetails` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `to_run_result` ```python theme={null} to_run_result(self, run_type: RunType) -> Optional[Union[FlowRunResult, TaskRunResult]] ``` ### `State` The state of a run. **Methods:** #### `aresult` ```python theme={null} aresult(self: 'State[R]', raise_on_failure: Literal[True] = ..., retry_result_failure: bool = ...) -> R ``` #### `aresult` ```python theme={null} aresult(self: 'State[R]', raise_on_failure: Literal[False] = False, retry_result_failure: bool = ...) 
-> Union[R, Exception] ``` #### `aresult` ```python theme={null} aresult(self: 'State[R]', raise_on_failure: bool = ..., retry_result_failure: bool = ...) -> Union[R, Exception] ``` #### `aresult` ```python theme={null} aresult(self, raise_on_failure: bool = True, retry_result_failure: bool = True) -> Union[R, Exception] ``` Retrieve the result attached to this state. #### `default_name_from_type` ```python theme={null} default_name_from_type(self) -> Self ``` If a name is not provided, use the type #### `default_scheduled_start_time` ```python theme={null} default_scheduled_start_time(self) -> Self ``` #### `fresh_copy` ```python theme={null} fresh_copy(self, **kwargs: Any) -> Self ``` Return a fresh copy of the state with a new ID. #### `is_cancelled` ```python theme={null} is_cancelled(self) -> bool ``` #### `is_cancelling` ```python theme={null} is_cancelling(self) -> bool ``` #### `is_completed` ```python theme={null} is_completed(self) -> bool ``` #### `is_crashed` ```python theme={null} is_crashed(self) -> bool ``` #### `is_failed` ```python theme={null} is_failed(self) -> bool ``` #### `is_final` ```python theme={null} is_final(self) -> bool ``` #### `is_paused` ```python theme={null} is_paused(self) -> bool ``` #### `is_pending` ```python theme={null} is_pending(self) -> bool ``` #### `is_running` ```python theme={null} is_running(self) -> bool ``` #### `is_scheduled` ```python theme={null} is_scheduled(self) -> bool ``` #### `model_copy` ```python theme={null} model_copy(self) -> Self ``` Copying API models should return an object that could be inserted into the database again. The 'timestamp' is reset using the default factory. #### `result` ```python theme={null} result(self: 'State[R]', raise_on_failure: Literal[True] = ..., retry_result_failure: bool = ...) -> R ``` #### `result` ```python theme={null} result(self: 'State[R]', raise_on_failure: Literal[False] = False, retry_result_failure: bool = ...) 
-> Union[R, Exception] ``` #### `result` ```python theme={null} result(self: 'State[R]', raise_on_failure: bool = ..., retry_result_failure: bool = ...) -> Union[R, Exception] ``` #### `result` ```python theme={null} result(self, raise_on_failure: bool = True, retry_result_failure: bool = True) -> Union[R, Exception] ``` Retrieve the result attached to this state. **Args:** * `raise_on_failure`: a boolean specifying whether to raise an exception if the state is of type `FAILED` and the underlying data is an exception. When flow was run in a different memory space (using `run_deployment`), this will only raise if `fetch` is `True`. * `retry_result_failure`: a boolean specifying whether to retry on failures to load the result from result storage **Raises:** * `TypeError`: If the state is failed but the result is not an exception. **Returns:** * The result of the run **Examples:** Get the result from a flow state ```python theme={null} @flow def my_flow(): return "hello" my_flow(return_state=True).result() # hello ``` Get the result from a failed state ```python theme={null} @flow def my_flow(): raise ValueError("oh no!") state = my_flow(return_state=True) # Error is wrapped in FAILED state state.result() # Raises `ValueError` ``` Get the result from a failed state without erroring ```python theme={null} @flow def my_flow(): raise ValueError("oh no!") state = my_flow(return_state=True) result = state.result(raise_on_failure=False) print(result) # ValueError("oh no!") ``` Get the result from a flow state in an async context ```python theme={null} @flow async def my_flow(): return "hello" state = await my_flow(return_state=True) await state.result() # hello ``` Get the result with `raise_on_failure` from a flow run in a different memory space ```python theme={null} @flow async def my_flow(): raise ValueError("oh no!") my_flow.deploy("my_deployment/my_flow") flow_run = run_deployment("my_deployment/my_flow") await flow_run.state.result(raise_on_failure=True) # Raises 
`ValueError("oh no!")` ``` #### `set_unpersisted_results_to_none` ```python theme={null} set_unpersisted_results_to_none(self) -> Self ``` ### `FlowRunPolicy` Defines how a flow run should be orchestrated. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python theme={null} populate_deprecated_fields(cls, values: Any) -> Any ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRun` **Methods:** #### `set_default_name` ```python theme={null} set_default_name(cls, name: Optional[str]) -> str ``` ### `TaskRunPolicy` Defines how a task run should retry. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python theme={null} populate_deprecated_fields(self) ``` If deprecated fields are provided, populate the corresponding new fields to preserve orchestration behavior. #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_configured_retry_delays` ```python theme={null} validate_configured_retry_delays(cls, v: Optional[int | float | list[int] | list[float]]) -> Optional[int | float | list[int] | list[float]] ``` #### `validate_jitter_factor` ```python theme={null} validate_jitter_factor(cls, v: Optional[float]) -> Optional[float] ``` ### `RunInput` Base class for classes that represent inputs to task runs, which could include constants, parameters, or other task runs.
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunResult` Represents a task run result input to another task run. ### `FlowRunResult` ### `Parameter` Represents a parameter input to a task run. ### `Constant` Represents constant input value to a task run. ### `TaskRun` **Methods:** #### `set_default_name` ```python theme={null} set_default_name(cls, name: Optional[str]) -> Name ``` ### `Workspace` A Prefect Cloud workspace. Expected payload for each workspace returned by the `me/workspaces` route. **Methods:** #### `api_url` ```python theme={null} api_url(self) -> str ``` Generate the API URL for accessing this workspace #### `handle` ```python theme={null} handle(self) -> str ``` The full handle of the workspace as `account_handle` / `workspace_handle` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `ui_url` ```python theme={null} ui_url(self) -> str ``` Generate the UI URL for accessing this workspace ### `IPAllowlistEntry` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `IPAllowlist` A Prefect Cloud IP allowlist. Expected payload for an IP allowlist from the Prefect Cloud API. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `IPAllowlistMyAccessResponse` Expected payload for an IP allowlist access response from the Prefect Cloud API. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockType` An ORM representation of a block type ### `BlockSchema` A representation of a block schema. ### `BlockDocument` An ORM representation of a block document. **Methods:** #### `serialize_data` ```python theme={null} serialize_data(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_name_is_present_if_not_anonymous` ```python theme={null} validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `Flow` An ORM representation of flow data. ### `DeploymentSchedule` ### `VersionInfo` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BranchingScheduleHandling` ### `DeploymentBranchingOptions` ### `Deployment` An ORM representation of deployment data. ### `ConcurrencyLimit` An ORM representation of a concurrency limit. ### `BlockSchemaReference` An ORM representation of a block schema reference. 
### `BlockDocumentReference` An ORM representation of a block document reference. **Methods:** #### `validate_parent_and_ref_are_different` ```python theme={null} validate_parent_and_ref_are_different(cls, values: Any) -> Any ``` ### `Configuration` An ORM representation of account info. ### `SavedSearchFilter` A filter for a saved search model. Intended for use by the Prefect UI. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `SavedSearch` An ORM representation of saved search data. Represents a set of filter criteria. ### `Log` An ORM representation of log data. ### `QueueFilter` Filter criteria definition for a work queue. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueue` An ORM representation of a work queue ### `WorkQueueHealthPolicy` **Methods:** #### `evaluate_health_status` ```python theme={null} evaluate_health_status(self, late_runs_count: int, last_polled: datetime.datetime | None = None) -> bool ``` Given empirical information about the state of the work queue, evaluate its health status. **Args:** * `late_runs_count`: the count of late runs for the work queue. * `last_polled`: the last time the work queue was polled, if available. **Returns:** * whether the work queue is healthy.
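The evaluation above can be sketched in plain Python. The threshold keyword arguments below (`maximum_late_runs`, `maximum_seconds_since_last_polled`) are illustrative stand-ins for the policy's configured fields, not Prefect's actual implementation:

```python theme={null}
from datetime import datetime, timedelta, timezone
from typing import Optional

def evaluate_health_status_sketch(
    late_runs_count: int,
    last_polled: Optional[datetime] = None,
    maximum_late_runs: int = 0,
    maximum_seconds_since_last_polled: int = 60,
) -> bool:
    # A queue is unhealthy if it has accumulated too many late runs...
    if late_runs_count > maximum_late_runs:
        return False
    # ...or if it has never been polled, or was not polled recently enough.
    if last_polled is None:
        return False
    age = datetime.now(timezone.utc) - last_polled
    return age <= timedelta(seconds=maximum_seconds_since_last_polled)
```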
#### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueStatusDetail` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Agent` An ORM representation of an agent ### `WorkPoolStorageConfiguration` A work pool storage configuration **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPool` An ORM representation of a work pool **Methods:** #### `helpful_error_for_missing_default_queue_id` ```python theme={null} helpful_error_for_missing_default_queue_id(cls, v: Optional[UUID]) -> UUID ``` #### `is_managed_pool` ```python theme={null} is_managed_pool(self) -> bool ``` #### `is_push_pool` ```python theme={null} is_push_pool(self) -> bool ``` ### `Worker` An ORM representation of a worker ### `Artifact` **Methods:** #### `validate_metadata_length` ```python theme={null} validate_metadata_length(cls, v: Optional[dict[str, str]]) -> Optional[dict[str, str]] ``` ### `ArtifactCollection` ### `Variable` ### `FlowRunInput` **Methods:** #### `decoded_value` ```python theme={null} decoded_value(self) -> Any ``` Decode the value of the input. 
**Returns:** * the decoded value ### `GlobalConcurrencyLimit` An ORM representation of a global concurrency limit ### `CsrfToken` ### `Integration` A representation of an installed Prefect integration. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerMetadata` Worker metadata. We depend on the structure of `integrations`, but otherwise, worker classes should support flexible metadata. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # responses Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-responses # `prefect.client.schemas.responses` ## Classes ### `SetStateStatus` Enumerates return statuses for setting run states. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StateAcceptDetails` Details associated with an ACCEPT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateRejectDetails` Details associated with a REJECT state transition. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateAbortDetails` Details associated with an ABORT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateWaitDetails` Details associated with a WAIT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponseState` Represents a single state's history over an interval. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponse` Represents a history of aggregation states over an interval **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `OrchestrationResult` A container for the output of state orchestration. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkerFlowRunResponse` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunResponse` ### `DeploymentResponse` **Methods:** #### `as_related_resource` ```python theme={null} as_related_resource(self, role: str = 'deployment') -> 'RelatedResource' ``` ### `MinimalConcurrencyLimitResponse` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLimitWithLeaseResponse` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `GlobalConcurrencyLimitResponse` A response object for global concurrency limits. ### `FlowRunSlotSummary` Summary of a flow run occupying a concurrency slot. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueConcurrencyStatusDetail` Per-queue concurrency status with flow run details. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolConcurrencyStatus` Paginated pool-level concurrency status with per-queue breakdown. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueConcurrencyStatus` Paginated queue-level concurrency status with flow run details. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # schedules Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-schedules # `prefect.client.schemas.schedules` Schedule schemas ## Functions ### `is_valid_timezone` ```python theme={null} is_valid_timezone(v: str) -> bool ``` Validate that the provided timezone is a valid IANA timezone. 
Unfortunately this list is slightly different from the list of valid timezones we use for cron and interval timezone validation. ### `is_schedule_type` ```python theme={null} is_schedule_type(obj: Any) -> TypeGuard[SCHEDULE_TYPES] ``` ### `construct_schedule` ```python theme={null} construct_schedule(interval: Optional[Union[int, float, datetime.timedelta]] = None, anchor_date: Optional[Union[datetime.datetime, str]] = None, cron: Optional[str] = None, rrule: Optional[str] = None, timezone: Optional[str] = None) -> SCHEDULE_TYPES ``` Construct a schedule from the provided arguments. **Args:** * `interval`: An interval on which to schedule runs. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `anchor_date`: The start date for an interval schedule. * `cron`: A cron schedule for runs. * `rrule`: An rrule schedule of when to execute runs of this flow. * `timezone`: A timezone to use for the schedule. Defaults to UTC. ## Classes ### `IntervalSchedule` A schedule formed by adding `interval` increments to an `anchor_date`. If no `anchor_date` is supplied, the current UTC time is used. If a timezone-naive datetime is provided for `anchor_date`, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a `timezone` can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date. NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. 
When clocks are set back, this will result in two runs that *appear* to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone. **Args:** * `interval`: an interval to schedule on * `anchor_date`: an anchor date to schedule increments against; if not provided, the current timestamp will be used * `timezone`: a valid timezone string **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_timezone` ```python theme={null} validate_timezone(self) ``` ### `CronSchedule` Cron schedule NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire *the first time* 1am is reached and *the first time* 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST. **Args:** * `cron`: a valid cron string * `timezone`: a valid timezone string in IANA tzdata format (for example, America/New\_York). * `day_or`: Control how croniter handles `day` and `day_of_week` entries. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. 
This behaves like fcron and enables you to, for example, define a job that executes on each second Friday of a month by setting both the day of month and the weekday. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `valid_cron_string` ```python theme={null} valid_cron_string(cls, v: str) -> str ``` #### `valid_timezone` ```python theme={null} valid_timezone(cls, v: Optional[str]) -> Optional[str] ``` ### `RRuleSchedule` RRule schedule, based on the iCalendar standard ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as implemented in `dateutil.rrule`. RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more. Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time. **Args:** * `rrule`: a valid RRule string * `timezone`: a valid timezone string **Methods:** #### `from_rrule` ```python theme={null} from_rrule(cls, rrule: Union[dateutil.rrule.rrule, dateutil.rrule.rruleset]) -> 'RRuleSchedule' ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields.
#### `to_rrule` ```python theme={null} to_rrule(self) -> Union[dateutil.rrule.rrule, dateutil.rrule.rruleset] ``` Since rrule doesn't properly serialize/deserialize timezones, we localize dates here #### `valid_timezone` ```python theme={null} valid_timezone(cls, v: Optional[str]) -> str ``` Validate that the provided timezone is a valid IANA timezone. Unfortunately this list is slightly different from the list of valid timezones we use for cron and interval timezone validation. #### `validate_rrule_str` ```python theme={null} validate_rrule_str(cls, v: str) -> str ``` ### `NoSchedule` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # sorting Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-schemas-sorting # `prefect.client.schemas.sorting` ## Classes ### `FlowRunSort` Defines flow run sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TaskRunSort` Defines task run sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `AutomationSort` Defines automation sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `LogSort` Defines log sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `FlowSort` Defines flow sorting options. 
**Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentSort` Defines deployment sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactSort` Defines artifact sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactCollectionSort` Defines artifact collection sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `VariableSort` Defines variables sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `BlockDocumentSort` Defines block document sorting options. 
**Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # subscriptions Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-subscriptions # `prefect.client.subscriptions` ## Classes ### `Subscription` **Methods:** #### `websocket` ```python theme={null} websocket(self) -> websockets.asyncio.client.ClientConnection ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-types-__init__ # `prefect.client.types` *This module is empty or contains only private/internal implementations.* # flexible_schedule_list Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-types-flexible_schedule_list # `prefect.client.types.flexible_schedule_list` *This module is empty or contains only private/internal implementations.* # utilities Source: https://docs.prefect.io/v3/api-ref/python/prefect-client-utilities # `prefect.client.utilities` Utilities for working with clients. ## Functions ### `get_or_create_client` ```python theme={null} get_or_create_client(client: Optional['PrefectClient'] = None) -> tuple['PrefectClient', bool] ``` Returns the provided client, infers a client from context if available, or creates a new client. **Args:** * `client`: an optional client to use **Returns:** * a tuple of the client and a boolean indicating whether the client was inferred from context ### `client_injector` ```python theme={null} client_injector(func: Callable[Concatenate['PrefectClient', P], Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]] ``` ### `inject_client` ```python theme={null} inject_client(fn: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]] ``` Simple helper to provide a context managed client to an asynchronous function.
The decorated function *must* take a `client` kwarg and if a client is passed when called it will be used instead of creating a new one, but it will not be context managed as it is assumed that the caller is managing the context. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-__init__ # `prefect.concurrency` *This module is empty or contains only private/internal implementations.* # asyncio Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-asyncio # `prefect.concurrency.asyncio` ## Functions ### `concurrency` ```python theme={null} concurrency(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None, lease_duration: float = 300, strict: bool = False, holder: 'Optional[ConcurrencyLeaseHolder]' = None, raise_on_lease_renewal_failure: Optional[bool] = None) -> AsyncGenerator[None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `max_retries`: The maximum number of retries to acquire the concurrency slots. * `lease_duration`: The duration of the lease for the acquired slots in seconds. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. * `holder`: A dictionary containing information about the holder of the concurrency slots. Typically includes 'type' and 'id' keys. * `raise_on_lease_renewal_failure`: Controls whether to terminate execution when lease renewal fails. When `None` (default), follows the `strict` parameter for backward compatibility. 
Set to `False` to allow long-running tasks to continue even if a transient lease renewal error occurs. Set to `True` to terminate execution immediately on renewal failure. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. Example: A simple example of using the async `concurrency` context manager: ```python theme={null} from prefect.concurrency.asyncio import concurrency async def resource_heavy(): async with concurrency("test", occupy=1): print("Resource heavy task") async def main(): await resource_heavy() ``` ### `rate_limit` ```python theme={null} rate_limit(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, strict: bool = False) -> None ``` Block execution until an `occupy` number of slots of the concurrency limits given in `names` are acquired. Requires that all given concurrency limits have a slot decay. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. 
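The acquire/release lifecycle documented above can be pictured with a local `asyncio` sketch, in which a `Semaphore` stands in (hypothetically) for a server-backed concurrency limit:

```python theme={null}
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def concurrency_sketch(sem: asyncio.Semaphore, occupy: int = 1):
    # Acquire `occupy` slots before entering the block...
    for _ in range(occupy):
        await sem.acquire()
    try:
        yield
    finally:
        # ...and always release them on exit.
        for _ in range(occupy):
            sem.release()

async def main() -> list[int]:
    test_limit = asyncio.Semaphore(2)  # a hypothetical "test" limit with 2 slots
    results: list[int] = []

    async def resource_heavy(i: int) -> None:
        async with concurrency_sketch(test_limit, occupy=1):
            results.append(i)

    await asyncio.gather(*(resource_heavy(i) for i in range(4)))
    return results
```

This only illustrates the slot lifecycle; the real context manager acquires slots from the Prefect API and additionally supports timeouts, retries, and leases.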
# context Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-context # `prefect.concurrency.context` ## Classes ### `ConcurrencyContext` **Methods:** #### `get` ```python theme={null} get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python theme={null} model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Attributes:** * `include`: Fields to include in new model. * `exclude`: Fields to exclude from new model, as with values this takes precedence over include. * `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data. * `deep`: Set to `True` to make a deep copy of the model. **Returns:** * A new model instance. #### `serialize` ```python theme={null} serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. # services Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-services # `prefect.concurrency.services` ## Classes ### `ConcurrencySlotAcquisitionService` **Methods:** #### `acquire` ```python theme={null} acquire(self, slots: int, mode: Literal['concurrency', 'rate_limit'], timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None) -> httpx.Response ``` ### `ConcurrencySlotAcquisitionWithLeaseService` A service that acquires concurrency slots with leases. This service serializes acquisition attempts for a given set of limit names, preventing thundering herd issues when many tasks try to acquire slots simultaneously. Each unique set of limit names gets its own singleton service instance. **Args:** * `concurrency_limit_names`: A frozenset of concurrency limit names to acquire slots from. 
**Methods:** #### `acquire` ```python theme={null} acquire(self, slots: int, mode: Literal['concurrency', 'rate_limit'], timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None, lease_duration: float = 300, strict: bool = False, holder: Optional['ConcurrencyLeaseHolder'] = None) -> httpx.Response ``` Acquire concurrency slots with a lease, with retry logic for 423 responses. **Args:** * `slots`: Number of slots to acquire * `mode`: Either "concurrency" or "rate\_limit" * `timeout_seconds`: Optional timeout for the entire acquisition attempt * `max_retries`: Maximum number of retries on 423 LOCKED responses * `lease_duration`: Duration of the lease in seconds * `strict`: Whether to raise errors for missing limits * `holder`: Optional holder information for the lease **Returns:** * HTTP response from the server **Raises:** * `httpx.HTTPStatusError`: If the server returns an error other than 423 LOCKED * `TimeoutError`: If acquisition times out # sync Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-sync # `prefect.concurrency.sync` ## Functions ### `concurrency` ```python theme={null} concurrency(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, max_retries: Optional[int] = None, lease_duration: float = 300, strict: bool = False, holder: 'Optional[ConcurrencyLeaseHolder]' = None, raise_on_lease_renewal_failure: Optional[bool] = None) -> Generator[None, None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `max_retries`: The maximum number of retries to acquire the concurrency slots. 
* `lease_duration`: The duration of the lease for the acquired slots in seconds. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. * `holder`: A dictionary containing information about the holder of the concurrency slots. Typically includes 'type' and 'id' keys. * `raise_on_lease_renewal_failure`: Controls whether to terminate execution when lease renewal fails. When `None` (default), follows the `strict` parameter for backward compatibility. Set to `False` to allow long-running tasks to continue even if a transient lease renewal error occurs. Set to `True` to terminate execution immediately on renewal failure. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. * `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. Example: A simple example of using the sync `concurrency` context manager: ```python theme={null} from prefect.concurrency.sync import concurrency def resource_heavy(): with concurrency("test", occupy=1): print("Resource heavy task") def main(): resource_heavy() ``` ### `rate_limit` ```python theme={null} rate_limit(names: Union[str, list[str]], occupy: int = 1, timeout_seconds: Optional[float] = None, strict: bool = False) -> None ``` Block execution until an `occupy` number of slots of the concurrency limits given in `names` are acquired. Requires that all given concurrency limits have a slot decay. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `occupy`: The number of slots to acquire and hold from each limit. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. * `strict`: A boolean specifying whether to raise an error if the concurrency limit does not exist. Defaults to `False`. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. 
* `ConcurrencySlotAcquisitionError`: If the concurrency limit does not exist and `strict` is `True`. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-v1-__init__ # `prefect.concurrency.v1` *This module is empty or contains only private/internal implementations.* # asyncio Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-v1-asyncio # `prefect.concurrency.v1.asyncio` ## Functions ### `concurrency` ```python theme={null} concurrency(names: Union[str, list[str]], task_run_id: UUID, timeout_seconds: Optional[float] = None) -> AsyncGenerator[None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire slots from. * `task_run_id`: The ID of the task run that is occupying the slots. * `timeout_seconds`: The number of seconds to wait for the slots to be acquired before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. **Raises:** * `TimeoutError`: If the slots are not acquired within the given timeout. Example: A simple example of using the async `concurrency` context manager: ```python theme={null} from prefect.concurrency.v1.asyncio import concurrency async def resource_heavy(): async with concurrency("test", task_run_id): print("Resource heavy task") async def main(): await resource_heavy() ``` # context Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-v1-context # `prefect.concurrency.v1.context` ## Classes ### `ConcurrencyContext` **Methods:** #### `get` ```python theme={null} get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python theme={null} model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Attributes:** * `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include. * `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data. * `deep`: Set to `True` to make a deep copy of the model. **Returns:** * A new model instance. #### `serialize` ```python theme={null} serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. # services Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-v1-services # `prefect.concurrency.v1.services` ## Classes ### `ConcurrencySlotAcquisitionServiceError` Raised when an error occurs while acquiring concurrency slots. ### `ConcurrencySlotAcquisitionService` **Methods:** #### `acquire` ```python theme={null} acquire(self, task_run_id: UUID, timeout_seconds: Optional[float] = None) -> httpx.Response ``` # sync Source: https://docs.prefect.io/v3/api-ref/python/prefect-concurrency-v1-sync # `prefect.concurrency.v1.sync` ## Functions ### `concurrency` ```python theme={null} concurrency(names: Union[str, list[str]], task_run_id: UUID, timeout_seconds: Optional[float] = None) -> Generator[None, None, None] ``` A context manager that acquires and releases concurrency slots from the given concurrency limits. **Args:** * `names`: The names of the concurrency limits to acquire. * `task_run_id`: The task run ID acquiring the limits. * `timeout_seconds`: The number of seconds to wait to acquire the limits before raising a `TimeoutError`. A timeout of `None` will wait indefinitely. **Raises:** * `TimeoutError`: If the limits are not acquired within the given timeout. 
Example: A simple example of using the sync `concurrency` context manager:

```python theme={null}
from uuid import uuid4

from prefect.concurrency.v1.sync import concurrency

task_run_id = uuid4()  # ordinarily the ID of the current task run

def resource_heavy():
    with concurrency("test", task_run_id):
        print("Resource heavy task")

def main():
    resource_heavy()
```

# context

Source: https://docs.prefect.io/v3/api-ref/python/prefect-context

# `prefect.context`

Async and thread safe models for passing runtime context data. These contexts should never be directly mutated by the user. For more user-accessible information about the current run, see [`prefect.runtime`](https://docs.prefect.io/v3/api-ref/python/prefect-runtime-flow_run).

## Functions

### `with_context`

```python theme={null}
with_context(fn: Callable[..., Any]) -> _ContextWrappedCallable
```

Wrap a function so it runs with the current Prefect context when called in a subprocess. Use this to enable `get_run_logger()` and other context-dependent APIs in functions executed via `multiprocessing.Pool`, `ProcessPoolExecutor`, or `multiprocessing.Process`.

### `serialize_context`

```python theme={null}
serialize_context(asset_ctx_kwargs: Union[dict[str, Any], None] = None) -> dict[str, Any]
```

Serialize the current context for use in a remote execution environment. Optionally provide `asset_ctx_kwargs` to create a new AssetContext that will be used in the remote execution environment. This is useful for TaskRunners, which rely on creating the task run in the remote environment.

### `hydrated_context`

```python theme={null}
hydrated_context(serialized_context: Optional[dict[str, Any]] = None, client: Union[PrefectClient, SyncPrefectClient, None] = None) -> Generator[None, Any, None]
```

### `get_run_context`

```python theme={null}
get_run_context() -> Union[FlowRunContext, TaskRunContext]
```

Get the current run context from within a task or flow function.

**Returns:**

* A `FlowRunContext` or `TaskRunContext` depending on the function type.

**Raises:**

* `RuntimeError`: If called outside of a flow or task run.
### `get_settings_context`

```python theme={null}
get_settings_context() -> SettingsContext
```

Get the current settings context, which contains profile information and the settings that are being used. Generally, the settings that are being used are a combination of values from the profile and environment. See `prefect.context.use_profile` for more details.

### `tags`

```python theme={null}
tags(*new_tags: str) -> Generator[set[str], None, None]
```

Context manager to add tags to flow and task run calls. Tags are always combined with any existing tags.

**Examples:**

```python theme={null}
from prefect import tags, task, flow

@task
def my_task():
    pass
```

Run a task with tags

```python theme={null}
@flow
def my_flow():
    with tags("a", "b"):
        my_task()  # has tags: a, b
```

Run a flow with tags

```python theme={null}
@flow
def my_flow():
    pass

with tags("a", "b"):
    my_flow()  # has tags: a, b
```

Run a task with nested tag contexts

```python theme={null}
@flow
def my_flow():
    with tags("a", "b"):
        with tags("c", "d"):
            my_task()  # has tags: a, b, c, d
        my_task()  # has tags: a, b
```

Inspect the current tags

```python theme={null}
@flow
def my_flow():
    with tags("c", "d"):
        with tags("e", "f") as current_tags:
            print(current_tags)

with tags("a", "b"):
    my_flow()  # {"a", "b", "c", "d", "e", "f"}
```

### `use_profile`

```python theme={null}
use_profile(profile: Union[Profile, str], override_environment_variables: bool = False, include_current_context: bool = True) -> Generator[SettingsContext, Any, None]
```

Switch to a profile for the duration of this context. Profile contexts are confined to an async context in a single thread.

**Args:**

* `profile`: The name of the profile to load or an instance of a Profile.
* `override_environment_variables`: If set, variables in the profile will take precedence over current environment variables. By default, environment variables will override profile settings.
* `include_current_context`: If set, the new settings will be constructed with the current settings context as a base. If not set, the settings will be loaded from the environment and defaults.

### `root_settings_context`

```python theme={null}
root_settings_context() -> SettingsContext
```

Return the settings context that will exist as the root context for the module. The profile to use is determined with the following precedence:

* Command line via `prefect --profile <name>`
* Environment variable via `PREFECT_PROFILE`
* Profiles file via the `active` key

### `refresh_global_settings_context`

```python theme={null}
refresh_global_settings_context() -> None
```

Refresh the global settings context to pick up environment variable changes. This is called after plugins run to ensure any environment variables they set are reflected in `get_current_settings()`.

## Classes

### `ContextModel`

A base model for context data that forbids mutation and extra data while providing a context manager.

**Methods:**

#### `get`

```python theme={null}
get(cls: type[Self]) -> Optional[Self]
```

Get the current context instance

#### `model_copy`

```python theme={null}
model_copy(self: Self) -> Self
```

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

**Attributes:**

* `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
* `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
* `deep`: Set to `True` to make a deep copy of the model.

**Returns:**

* A new model instance.

#### `serialize`

```python theme={null}
serialize(self, include_secrets: bool = True) -> dict[str, Any]
```

Serialize the context model to a dictionary that can be pickled with cloudpickle.

### `SyncClientContext`

A context for managing the sync Prefect client instances.
Clients were formerly tracked on the TaskRunContext and FlowRunContext, but having two separate places and the addition of both sync and async clients made it difficult to manage. This context is intended to be the single source for sync clients. The context creates a sync client, which can either be read directly from the context object OR loaded with `get_client`, `inject_client`, or other Prefect utilities.

```python theme={null}
with SyncClientContext.get_or_create() as ctx:
    c1 = get_client(sync_client=True)
    c2 = get_client(sync_client=True)
    assert c1 is c2
    assert c1 is ctx.client
```

**Methods:**

#### `get`

```python theme={null}
get(cls: type[Self]) -> Optional[Self]
```

Get the current context instance

#### `get_or_create`

```python theme={null}
get_or_create(cls) -> Generator[Self, None, None]
```

#### `model_copy`

```python theme={null}
model_copy(self: Self) -> Self
```

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

**Attributes:**

* `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
* `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
* `deep`: Set to `True` to make a deep copy of the model.

**Returns:**

* A new model instance.

#### `serialize`

```python theme={null}
serialize(self, include_secrets: bool = True) -> dict[str, Any]
```

Serialize the context model to a dictionary that can be pickled with cloudpickle.

### `AsyncClientContext`

A context for managing the async Prefect client instances.

Clients were formerly tracked on the TaskRunContext and FlowRunContext, but having two separate places and the addition of both sync and async clients made it difficult to manage. This context is intended to be the single source for async clients.
The context creates an async client, which can either be read directly from the context object OR loaded with `get_client`, `inject_client`, or other Prefect utilities.

```python theme={null}
async with AsyncClientContext.get_or_create() as ctx:
    c1 = get_client(sync_client=False)
    c2 = get_client(sync_client=False)
    assert c1 is c2
    assert c1 is ctx.client
```

**Methods:**

#### `get`

```python theme={null}
get(cls: type[Self]) -> Optional[Self]
```

Get the current context instance

#### `get_or_create`

```python theme={null}
get_or_create(cls) -> AsyncGenerator[Self, None]
```

#### `model_copy`

```python theme={null}
model_copy(self: Self) -> Self
```

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

**Attributes:**

* `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
* `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
* `deep`: Set to `True` to make a deep copy of the model.

**Returns:**

* A new model instance.

#### `serialize`

```python theme={null}
serialize(self, include_secrets: bool = True) -> dict[str, Any]
```

Serialize the context model to a dictionary that can be pickled with cloudpickle.

### `RunContext`

The base context for a flow or task run. Data in this context will always be available when `get_run_context` is called.

**Attributes:**

* `start_time`: The time the run context was entered
* `client`: The Prefect client instance being used for API communication

**Methods:**

#### `get`

```python theme={null}
get(cls: type[Self]) -> Optional[Self]
```

Get the current context instance

#### `model_copy`

```python theme={null}
model_copy(self: Self) -> Self
```

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

**Attributes:**

* `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
* `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
* `deep`: Set to `True` to make a deep copy of the model.

**Returns:**

* A new model instance.

#### `serialize`

```python theme={null}
serialize(self, include_secrets: bool = True) -> dict[str, Any]
```

Serialize the context model to a dictionary that can be pickled with cloudpickle.

### `EngineContext`

The context for a flow run. Data in this context is only available from within a flow run function.

**Attributes:**

* `flow`: The flow instance associated with the run
* `flow_run`: The API metadata for the flow run
* `task_runner`: The task runner instance being used for the flow run
* `run_results`: A mapping of result ids to run states for this flow run
* `log_prints`: Whether to log print statements from the flow run
* `parameters`: The parameters passed to the flow run
* `detached`: Flag indicating if context has been serialized and sent to remote infrastructure
* `result_store`: The result store used to persist results
* `persist_result`: Whether to persist the flow run result
* `task_run_dynamic_keys`: Counter for task calls allowing unique keys
* `observed_flow_pauses`: Counter for flow pauses
* `events`: Events worker to emit events

**Methods:**

#### `serialize`

```python theme={null}
serialize(self: Self, include_secrets: bool = True) -> dict[str, Any]
```

### `TaskRunContext`

The context for a task run. Data in this context is only available from within a task run function.
**Attributes:**

* `task`: The task instance associated with the task run
* `task_run`: The API metadata for this task run

**Methods:**

#### `serialize`

```python theme={null}
serialize(self: Self, include_secrets: bool = True) -> dict[str, Any]
```

### `AssetContext`

The asset context for a materializing task run. Contains all asset-related information needed for asset event emission and downstream asset dependency propagation.

**Attributes:**

* `direct_asset_dependencies`: Assets that this task directly depends on (from `task.asset_deps`)
* `downstream_assets`: Assets that this task will create/materialize (from `MaterializingTask.assets`)
* `upstream_assets`: Assets from upstream task dependencies
* `materialized_by`: Tool that materialized the assets (from `MaterializingTask.materialized_by`)
* `task_run_id`: ID of the associated task run
* `materialization_metadata`: Metadata for materialized assets

**Methods:**

#### `add_asset_metadata`

```python theme={null}
add_asset_metadata(self, asset_key: str, metadata: dict[str, Any]) -> None
```

Add metadata for a materialized asset.

**Args:**

* `asset_key`: The asset key
* `metadata`: Metadata dictionary to add

**Raises:**

* `ValueError`: If `asset_key` is not in `downstream_assets`

#### `asset_as_related`

```python theme={null}
asset_as_related(asset: Asset) -> dict[str, str]
```

Convert Asset to event related format.

#### `asset_as_resource`

```python theme={null}
asset_as_resource(asset: Asset) -> dict[str, str]
```

Convert Asset to event resource format.
#### `emit_events` ```python theme={null} emit_events(self, state: State) -> None ``` Emit asset events #### `from_task_and_inputs` ```python theme={null} from_task_and_inputs(cls, task: 'Task[Any, Any]', task_run_id: UUID, task_inputs: Optional[dict[str, set[Any]]] = None, copy_to_child_ctx: bool = False) -> 'AssetContext' ``` Create an AssetContext from a task and its resolved inputs. **Args:** * `task`: The task instance * `task_run_id`: The task run ID * `task_inputs`: The resolved task inputs (TaskRunResult objects) * `copy_to_child_ctx`: Whether this context should be copied on a child AssetContext **Returns:** * Configured AssetContext #### `get` ```python theme={null} get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `model_copy` ```python theme={null} model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Attributes:** * `include`: Fields to include in new model. * `exclude`: Fields to exclude from new model, as with values this takes precedence over include. * `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data. * `deep`: Set to `True` to make a deep copy of the model. **Returns:** * A new model instance. #### `related_materialized_by` ```python theme={null} related_materialized_by(by: str) -> dict[str, str] ``` Create a related resource for the tool that performed the materialization #### `serialize` ```python theme={null} serialize(self: Self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the AssetContext for distributed execution. #### `serialize` ```python theme={null} serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. 
#### `update_tracked_assets`

```python theme={null}
update_tracked_assets(self) -> None
```

Update the flow run context with assets that should be propagated downstream.

### `TagsContext`

The context for `prefect.tags` management.

**Attributes:**

* `current_tags`: A set of current tags in the context

**Methods:**

#### `get`

```python theme={null}
get(cls) -> Self
```

Get the current context instance

#### `model_copy`

```python theme={null}
model_copy(self: Self) -> Self
```

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

**Attributes:**

* `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
* `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
* `deep`: Set to `True` to make a deep copy of the model.

**Returns:**

* A new model instance.

#### `serialize`

```python theme={null}
serialize(self, include_secrets: bool = True) -> dict[str, Any]
```

Serialize the context model to a dictionary that can be pickled with cloudpickle.

### `SettingsContext`

The context for Prefect settings. This allows for safe concurrent access and modification of settings.

**Attributes:**

* `profile`: The profile that is in use.
* `settings`: The complete settings model.

**Methods:**

#### `get`

```python theme={null}
get(cls) -> Optional['SettingsContext']
```

Get the current context instance

#### `model_copy`

```python theme={null}
model_copy(self: Self) -> Self
```

Duplicate the context model, optionally choosing which fields to include, exclude, or change.

**Attributes:**

* `include`: Fields to include in new model.
* `exclude`: Fields to exclude from new model, as with values this takes precedence over include.
* `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data.
* `deep`: Set to `True` to make a deep copy of the model.

**Returns:**

* A new model instance.

#### `serialize`

```python theme={null}
serialize(self, include_secrets: bool = True) -> dict[str, Any]
```

Serialize the context model to a dictionary that can be pickled with cloudpickle.

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-__init__

# `prefect.deployments`

*This module is empty or contains only private/internal implementations.*

# base

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-base

# `prefect.deployments.base`

Core primitives for managing Prefect deployments via `prefect deploy`, providing a minimally opinionated build system for managing flows and deployments.

To get started, follow along with [the deployments tutorial](https://docs.prefect.io/v3/how-to-guides/deployments/create-deployments).

## Functions

### `create_default_prefect_yaml`

```python theme={null}
create_default_prefect_yaml(path: str, name: Optional[str] = None, contents: Optional[Dict[str, Any]] = None) -> bool
```

Creates a default `prefect.yaml` file in the provided path if one does not already exist; returns a boolean specifying whether a file was created.

**Args:**

* `path`: the path in which to create the `prefect.yaml` file
* `name`: the name of the project; if not provided, the current directory name will be used
* `contents`: a dictionary of contents to write to the file; if not provided, defaults will be used

### `configure_project_by_recipe`

```python theme={null}
configure_project_by_recipe(recipe: str, **formatting_kwargs: Any) -> dict[str, Any] | type[NotSet]
```

Given a recipe name, returns a dictionary representing base configuration options.

**Args:**

* `recipe`: the name of the recipe to use
* `formatting_kwargs`: additional keyword arguments to format the recipe

**Raises:**

* `ValueError`: if the provided recipe name does not exist.
### `initialize_project` ```python theme={null} initialize_project(name: Optional[str] = None, recipe: Optional[str] = None, inputs: Optional[Dict[str, Any]] = None) -> List[str] ``` Initializes a basic project structure with base files. If no name is provided, the name of the current directory is used. If no recipe is provided, one is inferred. **Args:** * `name`: the name of the project; if not provided, the current directory name * `recipe`: the name of the recipe to use; if not provided, one is inferred * `inputs`: a dictionary of inputs to use when formatting the recipe **Returns:** * List\[str]: a list of files / directories that were created # deployments Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-deployments # `prefect.deployments.deployments` *This module is empty or contains only private/internal implementations.* # flow_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-flow_runs # `prefect.deployments.flow_runs` ## Functions ### `arun_deployment` ```python theme={null} arun_deployment(name: Union[str, UUID], client: Optional['PrefectClient'] = None, parameters: Optional[dict[str, Any]] = None, scheduled_time: Optional[datetime] = None, flow_run_name: Optional[str] = None, timeout: Optional[float] = None, poll_interval: Optional[float] = 5, tags: Optional[Iterable[str]] = None, idempotency_key: Optional[str] = None, work_queue_name: Optional[str] = None, as_subflow: Optional[bool] = True, job_variables: Optional[dict[str, Any]] = None) -> 'FlowRun' ``` Asynchronously create a flow run for a deployment and return it after completion or a timeout. By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set `timeout=0`. 
Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing. If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing `as_subflow=False`. **Args:** * `name`: The deployment id or deployment name in the form: `"flow name/deployment name"` * `client`: An optional PrefectClient to use for API requests. * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults. * `scheduled_time`: The time to schedule the flow run for, defaults to scheduling the flow run to start now. * `flow_run_name`: A name for the created flow run * `timeout`: The amount of time to wait (in seconds) for the flow run to complete before returning. Setting `timeout` to 0 will return the flow run metadata immediately. Setting `timeout` to None will allow this function to poll indefinitely. Defaults to None. * `poll_interval`: The number of seconds between polls * `tags`: A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes. * `idempotency_key`: A unique value to recognize retries of the same run, and prevent creating multiple flow runs. * `work_queue_name`: The name of a work queue to use for this run. Defaults to the default work queue for the deployment. * `as_subflow`: Whether to link the flow run as a subflow of the current flow or task run. 
* `job_variables`: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'` ### `run_deployment` ```python theme={null} run_deployment(name: Union[str, UUID], client: Optional['PrefectClient'] = None, parameters: Optional[dict[str, Any]] = None, scheduled_time: Optional[datetime] = None, flow_run_name: Optional[str] = None, timeout: Optional[float] = None, poll_interval: Optional[float] = 5, tags: Optional[Iterable[str]] = None, idempotency_key: Optional[str] = None, work_queue_name: Optional[str] = None, as_subflow: Optional[bool] = True, job_variables: Optional[dict[str, Any]] = None) -> 'FlowRun' ``` Create a flow run for a deployment and return it after completion or a timeout. This function will dispatch to `arun_deployment` when called from an async context. By default, this function blocks until the flow run finishes executing. Specify a timeout (in seconds) to wait for the flow run to execute before returning flow run metadata. To return immediately, without waiting for the flow run to execute, set `timeout=0`. Note that if you specify a timeout, this function will return the flow run metadata whether or not the flow run finished executing. If called within a flow or task, the flow run this function creates will be linked to the current flow run as a subflow. Disable this behavior by passing `as_subflow=False`. **Args:** * `name`: The deployment id or deployment name in the form: `"flow name/deployment name"` * `client`: An optional PrefectClient to use for API requests. This is ignored when called from a synchronous context. * `parameters`: Parameter overrides for this flow run. Merged with the deployment defaults. * `scheduled_time`: The time to schedule the flow run for, defaults to scheduling the flow run to start now. 
* `flow_run_name`: A name for the created flow run * `timeout`: The amount of time to wait (in seconds) for the flow run to complete before returning. Setting `timeout` to 0 will return the flow run metadata immediately. Setting `timeout` to None will allow this function to poll indefinitely. Defaults to None. * `poll_interval`: The number of seconds between polls * `tags`: A list of tags to associate with this flow run; tags can be used in automations and for organizational purposes. * `idempotency_key`: A unique value to recognize retries of the same run, and prevent creating multiple flow runs. * `work_queue_name`: The name of a work queue to use for this run. Defaults to the default work queue for the deployment. * `as_subflow`: Whether to link the flow run as a subflow of the current flow or task run. * `job_variables`: A dictionary of dot delimited infrastructure overrides that will be applied at runtime; for example `env.CONFIG_KEY=config_value` or `namespace='prefect'` # runner Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-runner # `prefect.deployments.runner` Objects for creating and configuring deployments for flows using `serve` functionality. Example: ```python theme={null} import time from prefect import flow, serve @flow def slow_flow(sleep: int = 60): "Sleepy flow - sleeps the provided amount of time (in seconds)." time.sleep(sleep) @flow def fast_flow(): "Fastest flow this side of the Mississippi." return if __name__ == "__main__": # to_deployment creates RunnerDeployment instances slow_deploy = slow_flow.to_deployment(name="sleeper", interval=45) fast_deploy = fast_flow.to_deployment(name="fast") serve(slow_deploy, fast_deploy) ``` ## Functions ### `adeploy` ```python theme={null} adeploy(*deployments: RunnerDeployment) -> List[UUID] ``` Deploy the provided list of deployments to dynamic infrastructure via a work pool. 
By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule.

If you want to use an existing image, you can pass `build=False` to skip building and pushing an image.

**Args:**

* `*deployments`: A list of deployments to deploy.
* `work_pool_name`: The name of the work pool to use for these deployments. Defaults to the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.
* `image`: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments.
* `build`: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
* `push`: Whether or not to skip pushing the built image to a registry.
* `print_next_steps_message`: Whether or not to print a message with next steps after deploying the deployments.

**Returns:**

* A list of deployment IDs for the created/updated deployments.

**Examples:**

Deploy a group of flows to a work pool:

```python theme={null}
import asyncio

from prefect import flow
from prefect.deployments.runner import adeploy

@flow(log_prints=True)
def local_flow():
    print("I'm a locally defined flow!")

if __name__ == "__main__":
    asyncio.run(
        adeploy(
            local_flow.to_deployment(name="example-deploy-local-flow"),
            flow.from_source(
                source="https://github.com/org/repo.git",
                entrypoint="flows.py:my_flow",
            ).to_deployment(
                name="example-deploy-remote-flow",
            ),
            work_pool_name="my-work-pool",
            image="my-registry/my-image:dev",
        )
    )
```

### `deploy`

```python theme={null}
deploy(*deployments: RunnerDeployment) -> List[UUID]
```

Deploy the provided list of deployments to dynamic infrastructure via a work pool.

By default, calling this function will build a Docker image for the deployments, push it to a registry, and create each deployment via the Prefect API that will run the corresponding flow on the given schedule.
If you want to use an existing image, you can pass `build=False` to skip building and pushing an image. **Args:** * `*deployments`: A list of deployments to deploy. * `work_pool_name`: The name of the work pool to use for these deployments. Defaults to the value of `PREFECT_DEFAULT_WORK_POOL_NAME`. * `image`: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments. * `build`: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime. * `push`: Whether or not to skip pushing the built image to a registry. * `print_next_steps_message`: Whether or not to print a message with next steps after deploying the deployments. **Returns:** * A list of deployment IDs for the created/updated deployments. **Examples:** Deploy a group of flows to a work pool: ```python theme={null} from prefect import deploy, flow @flow(log_prints=True) def local_flow(): print("I'm a locally defined flow!") if __name__ == "__main__": deploy( local_flow.to_deployment(name="example-deploy-local-flow"), flow.from_source( source="https://github.com/org/repo.git", entrypoint="flows.py:my_flow", ).to_deployment( name="example-deploy-remote-flow", ), work_pool_name="my-work-pool", image="my-registry/my-image:dev", ) ``` ## Classes ### `DeploymentApplyError` Raised when an error occurs while applying a deployment. ### `RunnerDeployment` A Prefect RunnerDeployment definition, used for specifying and building deployments. **Attributes:** * `name`: A name for the deployment (required). * `version`: An optional version for the deployment; defaults to the flow's version * `description`: An optional description of the deployment; defaults to the flow's description * `tags`: An optional list of tags to associate with this deployment; note that tags are used only for organizational purposes. For delegating work to workers, see `work_queue_name`. 
* `schedule`: A schedule to run this deployment on, once registered
* `parameters`: A dictionary of parameter values to pass to runs created from this deployment
* `path`: The path to the working directory for the workflow, relative to remote storage or, if stored on a local filesystem, an absolute path
* `entrypoint`: The path to the entrypoint for the workflow, always relative to the `path`
* `parameter_openapi_schema`: The parameter schema of the flow, including defaults.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.

**Methods:**

#### `aapply`

```python theme={null}
aapply(self, schedules: Optional[List[dict[str, Any]]] = None, work_pool_name: Optional[str] = None, image: Optional[str] = None, version_info: Optional[VersionInfo] = None) -> UUID
```

Registers this deployment with the API and returns the deployment's ID.

**Args:**

* `work_pool_name`: The name of the work pool to use for this deployment.
* `image`: The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool.
* `version_info`: The version information to use for the deployment.

**Returns:**

* The ID of the created deployment.
#### `afrom_storage`

```python theme={null}
afrom_storage(cls, storage: RunnerStorage, entrypoint: str, name: str, flow_name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment'
```

Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location.

**Args:**

* `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`, or a module path to a flow function in the format `module.path.flow_func_name`.
* `name`: A name for the deployment
* `flow_name`: The name of the flow to deploy
* `storage`: A storage object to use for retrieving flow code. If not provided, a URL must be provided.
* `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
* `cron`: A cron schedule of when to execute runs of this flow.
* `rrule`: An rrule schedule of when to execute runs of this flow.
* `paused`: Whether or not the deployment is paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `triggers`: A list of triggers that should kick off a run of this flow.
* `parameters`: A dictionary of default parameter values to pass to runs of this flow.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version information to use for the deployment. The version type will be inferred if not provided.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.

#### `apply`

```python theme={null}
apply(self, schedules: Optional[List[dict[str, Any]]] = None, work_pool_name: Optional[str] = None, image: Optional[str] = None, version_info: Optional[VersionInfo] = None) -> UUID
```

Registers this deployment with the API and returns the deployment's ID.

**Args:**

* `work_pool_name`: The name of the work pool to use for this deployment.
* `image`: The registry, name, and tag of the Docker image to use for this deployment. Only used when the deployment is deployed to a work pool.
* `version_info`: The version information to use for the deployment.

**Returns:**

* The ID of the created deployment.

#### `entrypoint_type`

```python theme={null}
entrypoint_type(self) -> EntrypointType
```

#### `from_entrypoint`

```python theme={null}
from_entrypoint(cls, entrypoint: str, name: str, flow_name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment'
```

Configure a deployment for a given flow located at a given entrypoint.

**Args:**

* `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`.
* `name`: A name for the deployment
* `flow_name`: The name of the flow to deploy
* `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
* `cron`: A cron schedule of when to execute runs of this flow.
* `rrule`: An rrule schedule of when to execute runs of this flow.
* `paused`: Whether or not to set this deployment as paused.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`.
* `triggers`: A list of triggers that should kick off a run of this flow.
* `parameters`: A dictionary of default parameter values to pass to runs of this flow.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
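The `entrypoint` argument above is a single string that combines a file path with a flow function name, separated by a colon. As a rough sketch of how such a string could be split into its two parts (`split_entrypoint` is a hypothetical helper for illustration, not Prefect's actual parser):

```python theme={null}
# Hypothetical sketch: split an entrypoint string of the form
# "./path/to/file.py:flow_func_name" into its path and function parts.
# This mirrors the documented format only; it is not Prefect's code.

def split_entrypoint(entrypoint: str) -> tuple[str, str]:
    # rpartition splits on the *last* colon, so paths that themselves
    # contain colons still resolve correctly.
    path, sep, func_name = entrypoint.rpartition(":")
    if not sep or not path or not func_name:
        raise ValueError(f"Invalid entrypoint: {entrypoint!r}")
    return path, func_name

print(split_entrypoint("./flows/etl.py:my_flow"))  # ('./flows/etl.py', 'my_flow')
```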
#### `from_flow`

```python theme={null}
from_flow(cls, flow: 'Flow[..., Any]', name: str, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment'
```

Configure a deployment for a given flow.

**Args:**

* `flow`: A flow function to deploy
* `name`: A name for the deployment
* `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
* `cron`: A cron schedule of when to execute runs of this flow.
* `rrule`: An rrule schedule of when to execute runs of this flow.
* `paused`: Whether or not to set this deployment as paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`.
* `concurrency_limit`: The maximum number of concurrent runs this deployment will allow.
* `triggers`: A list of triggers that should kick off a run of this flow.
* `parameters`: A dictionary of default parameter values to pass to runs of this flow.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version information to use for the deployment.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
#### `from_storage`

```python theme={null}
from_storage(cls, storage: RunnerStorage, entrypoint: str, name: str, flow_name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, timedelta]], int, float, timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment'
```

Create a RunnerDeployment from a flow located at a given entrypoint and stored in a local storage location.

**Args:**

* `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py\:flow_func_name`, or a module path to a flow function in the format `module.path.flow_func_name`.
* `name`: A name for the deployment
* `flow_name`: The name of the flow to deploy
* `storage`: A storage object to use for retrieving flow code. If not provided, a URL must be provided.
* `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
* `cron`: A cron schedule of when to execute runs of this flow.
* `rrule`: An rrule schedule of when to execute runs of this flow.
* `paused`: Whether or not the deployment is paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `triggers`: A list of triggers that should kick off a run of this flow.
* `parameters`: A dictionary of default parameter values to pass to runs of this flow.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version information to use for the deployment. The version type will be inferred if not provided.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for this deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
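A deployment can be given either a single `schedule` or a list of `schedules`. One hypothetical way to reconcile the two arguments into a single list (an illustration only, not Prefect's actual reconciliation logic; rejecting the both-given case is a choice of this sketch):

```python theme={null}
# Hypothetical sketch: normalize the `schedule` / `schedules` arguments
# described above into one list of schedule definitions.
def normalize_schedules(schedule=None, schedules=None):
    if schedule is not None and schedules is not None:
        # This sketch treats the two arguments as alternatives.
        raise ValueError("Provide either `schedule` or `schedules`, not both")
    if schedule is not None:
        return [schedule]
    return list(schedules) if schedules is not None else []

print(normalize_schedules(schedule={"cron": "0 0 * * *"}))  # [{'cron': '0 0 * * *'}]
```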
#### `full_name`

```python theme={null}
full_name(self) -> str
```

#### `reconcile_paused`

```python theme={null}
reconcile_paused(cls, values: dict[str, Any]) -> dict[str, Any]
```

#### `reconcile_schedules`

```python theme={null}
reconcile_schedules(cls, values: dict[str, Any]) -> dict[str, Any]
```

#### `validate_automation_names`

```python theme={null}
validate_automation_names(self)
```

Ensure that each trigger has a name for its automation if none is provided.

#### `validate_deployment_parameters`

```python theme={null}
validate_deployment_parameters(self) -> Self
```

Update the parameter schema to mark frozen parameters as readonly.

#### `validate_name`

```python theme={null}
validate_name(cls, value: str) -> str
```

# schedules

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-schedules

# `prefect.deployments.schedules`

## Functions

### `create_deployment_schedule_create`

```python theme={null}
create_deployment_schedule_create(schedule: Union['SCHEDULE_TYPES', 'Schedule'], active: Optional[bool] = True) -> DeploymentScheduleCreate
```

Create a DeploymentScheduleCreate object from common schedule parameters.

### `normalize_to_deployment_schedule`

```python theme={null}
normalize_to_deployment_schedule(schedules: Optional['FlexibleScheduleList']) -> List[Union[DeploymentScheduleCreate, DeploymentScheduleUpdate]]
```

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-steps-__init__

# `prefect.deployments.steps`

*This module is empty or contains only private/internal implementations.*

# core

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-steps-core

# `prefect.deployments.steps.core`

Core primitives for running Prefect deployment steps.

Deployment steps are YAML representations of Python functions along with their inputs.
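Conceptually, a step pairs a function's importable dotted name with the inputs to call it with. A rough, hypothetical sketch of executing one such pair (not Prefect's actual step runner, which also resolves block and variable references and handles reserved keywords):

```python theme={null}
from importlib import import_module

# Hypothetical sketch of executing a step mapping like
# {"json.dumps": {"obj": [1, 2]}}: import the function by its dotted
# name, then call it with the given inputs. Illustration only.
def run_step_sketch(step: dict) -> object:
    # A step mapping has exactly one key: the importable function name.
    (func_path, inputs), = step.items()
    module_name, _, func_name = func_path.rpartition(".")
    func = getattr(import_module(module_name), func_name)
    return func(**inputs)

print(run_step_sketch({"json.dumps": {"obj": [1, 2]}}))  # [1, 2]
```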
Whenever a step is run, the following actions are taken:

* The step's inputs and block / variable references are resolved (see [the `prefect deploy` documentation](https://docs.prefect.io/v3/how-to-guides/deployments/prefect-yaml#templating-options) for more details)
* The step's function is imported; if it cannot be found, the `requires` keyword is used to install the necessary packages
* The step's function is called with the resolved inputs
* The step's output is returned and used to resolve inputs for subsequent steps

## Functions

### `run_step`

```python theme={null}
run_step(step: dict[str, Any], upstream_outputs: dict[str, Any] | None = None) -> dict[str, Any]
```

Runs a step, returns the step's output.

Steps are assumed to be in the format `{"importable.func.name": {"kwarg1": "value1", ...}}`. The `id` and `requires` keywords are reserved for specific purposes and are removed from the inputs before they are passed to the step function: `id` names the step so that its outputs can be referenced by subsequent steps, and `requires` specifies packages that should be installed before running the step.

### `run_steps`

```python theme={null}
run_steps(steps: list[dict[str, Any]], upstream_outputs: dict[str, Any] | None = None, print_function: Any = print, deployment: Any | None = None, flow_run: Any | None = None, logger: Any | None = None) -> dict[str, Any]
```

## Classes

### `StepExecutionError`

Raised when a step fails to execute.

# pull

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-steps-pull

# `prefect.deployments.steps.pull`

Core set of steps for specifying a Prefect project pull step.

## Functions

### `set_working_directory`

```python theme={null}
set_working_directory(directory: str) -> dict[str, str]
```

Sets the working directory; works with both absolute and relative paths.
**Args:**

* `directory`: the directory to set as the working directory

**Returns:**

* a dictionary containing a `directory` key of the absolute path of the directory that was set

### `agit_clone`

```python theme={null}
agit_clone(repository: str, branch: Optional[str] = None, commit_sha: Optional[str] = None, include_submodules: bool = False, access_token: Optional[str] = None, credentials: Optional['Block'] = None, directories: Optional[list[str]] = None, clone_directory_name: Optional[str] = None) -> dict[str, str]
```

Asynchronously clones a git repository into the current working directory.

**Args:**

* `repository`: the URL of the repository to clone
* `branch`: the branch to clone; if not provided, the default branch will be used
* `commit_sha`: the commit SHA to clone; if not provided, the default branch will be used
* `include_submodules`: whether to include git submodules when cloning the repository
* `access_token`: an access token to use for cloning the repository; if not provided, the repository will be cloned using the default git credentials
* `credentials`: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository.
* `directories`: Specify directories you want to be included (uses git sparse-checkout)
* `clone_directory_name`: the name of the local directory to clone into; if not provided, the name will be inferred from the repository URL and branch

**Returns:**

* a dictionary containing a `directory` key of the new directory that was created

**Raises:**

* `subprocess.CalledProcessError`: if the git clone command fails for any reason

### `git_clone`

```python theme={null}
git_clone(repository: str, branch: Optional[str] = None, commit_sha: Optional[str] = None, include_submodules: bool = False, access_token: Optional[str] = None, credentials: Optional['Block'] = None, directories: Optional[list[str]] = None, clone_directory_name: Optional[str] = None) -> dict[str, str]
```

Clones a git repository into the current working directory.
**Args:**

* `repository`: the URL of the repository to clone
* `branch`: the branch to clone; if not provided, the default branch will be used
* `commit_sha`: the commit SHA to clone; if not provided, the default branch will be used
* `include_submodules`: whether to include git submodules when cloning the repository
* `access_token`: an access token to use for cloning the repository; if not provided, the repository will be cloned using the default git credentials
* `credentials`: a GitHubCredentials, GitLabCredentials, or BitBucketCredentials block can be used to specify the credentials to use for cloning the repository.
* `directories`: Specify directories you want to be included (uses git sparse-checkout)
* `clone_directory_name`: the name of the local directory to clone into; if not provided, the name will be inferred from the repository URL and branch

**Returns:**

* a dictionary containing a `directory` key of the new directory that was created

**Raises:**

* `subprocess.CalledProcessError`: if the git clone command fails for any reason

**Examples:**

Clone a public repository:

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/PrefectHQ/prefect.git
```

Clone a branch of a public repository:

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/PrefectHQ/prefect.git
      branch: my-branch
```

Clone a private repository using a GitHubCredentials block:

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/org/repo.git
      credentials: "{{ prefect.blocks.github-credentials.my-github-credentials-block }}"
```

Clone a private repository using an access token:

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/org/repo.git
      access_token: "{{ prefect.blocks.secret.github-access-token }}" # Requires creation of a Secret block
```

Note that you will need to [create a Secret block](https://docs.prefect.io/v3/concepts/blocks/#pre-registered-blocks) to store the value of your git credentials. You can also store a username/password combo or token prefix (e.g. `x-token-auth`) in your secret block. Refer to your git provider's documentation for the correct authentication schema.

Clone a repository with submodules:

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/org/repo.git
      include_submodules: true
```

Clone a repository with an SSH key (note that the SSH key must be added to the worker before executing flows):

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: git@github.com:org/repo.git
```

Clone a repository using sparse-checkout (allows specific folders of the repository to be checked out):

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/org/repo.git
      directories: ["dir_1", "dir_2", "prefect"]
```

Clone a repository with a custom directory name:

```yaml theme={null}
pull:
  - prefect.deployments.steps.git_clone:
      repository: https://github.com/org/repo.git
      branch: dev
      clone_directory_name: my-custom-name
```

### `pull_from_remote_storage`

```python theme={null}
pull_from_remote_storage(url: str, **settings: Any) -> dict[str, Any]
```

Pulls code from a remote storage location into the current working directory.

Works with protocols supported by `fsspec`.

**Args:**

* `url`: the URL of the remote storage location. Should be a valid `fsspec` URL. Some protocols may require an additional `fsspec` dependency to be installed. Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations) for more details.
* `**settings`: any additional settings to pass to the `fsspec` filesystem class.
**Returns:**

* a dictionary containing a `directory` key of the new directory that was created

**Examples:**

Pull code from a remote storage location:

```yaml theme={null}
pull:
  - prefect.deployments.steps.pull_from_remote_storage:
      url: s3://my-bucket/my-folder
```

Pull code from a remote storage location with additional settings:

```yaml theme={null}
pull:
  - prefect.deployments.steps.pull_from_remote_storage:
      url: s3://my-bucket/my-folder
      key: "{{ prefect.blocks.secret.my-aws-access-key }}"
      secret: "{{ prefect.blocks.secret.my-aws-secret-key }}"
```

### `pull_with_block`

```python theme={null}
pull_with_block(block_document_name: str, block_type_slug: str) -> dict[str, Any]
```

Pulls code using a block.

**Args:**

* `block_document_name`: The name of the block document to use
* `block_type_slug`: The slug of the type of block to use

# utility

Source: https://docs.prefect.io/v3/api-ref/python/prefect-deployments-steps-utility

# `prefect.deployments.steps.utility`

Utility project steps that are useful for managing a project's deployment lifecycle.

Steps within this module can be used within a `build`, `push`, or `pull` deployment action.

Example:

Use the `run_shell_script` step to retrieve the short Git commit hash of the current repository and use it as a Docker image tag:

```yaml theme={null}
build:
  - prefect.deployments.steps.run_shell_script:
      id: get-commit-hash
      script: git rev-parse --short HEAD
      stream_output: false
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: my-image
      image_tag: "{{ get-commit-hash.stdout }}"
      dockerfile: auto
```

## Functions

### `run_shell_script`

```python theme={null}
run_shell_script(script: str, directory: Optional[str] = None, env: Optional[Dict[str, str]] = None, stream_output: bool = True, expand_env_vars: bool = False, shell: bool = False) -> RunShellScriptResult
```

Runs one or more shell commands in a subprocess. Returns the standard output and standard error of the script.
**Args:**

* `script`: The script to run
* `directory`: The directory to run the script in. Defaults to the current working directory.
* `env`: A dictionary of environment variables to set for the script
* `stream_output`: Whether to stream the output of the script to stdout/stderr
* `expand_env_vars`: Whether to expand environment variables in the script before running it
* `shell`: Whether to run the command through the system shell. When True, shell operators like pipes (|), redirects (>), and logical operators (&&, ||) are supported. Only set this to True when you need shell features, as it has security implications similar to subprocess.run(shell=True).

**Returns:**

* A dictionary with the keys `stdout` and `stderr` containing the output of the script

**Examples:**

Retrieve the short Git commit hash of the current repository to use as a Docker image tag:

```yaml theme={null}
build:
  - prefect.deployments.steps.run_shell_script:
      id: get-commit-hash
      script: git rev-parse --short HEAD
      stream_output: false
  - prefect_docker.deployments.steps.build_docker_image:
      requires: prefect-docker
      image_name: my-image
      image_tag: "{{ get-commit-hash.stdout }}"
      dockerfile: auto
```

Run a multi-line shell script:

```yaml theme={null}
build:
  - prefect.deployments.steps.run_shell_script:
      script: |
        echo "Hello"
        echo "World"
```

Run a shell script with environment variables:

```yaml theme={null}
build:
  - prefect.deployments.steps.run_shell_script:
      script: echo "Hello $NAME"
      env:
        NAME: World
```

Run a shell script with environment variables expanded from the current environment:

```yaml theme={null}
pull:
  - prefect.deployments.steps.run_shell_script:
      script: |
        echo "User: $USER"
        echo "Home Directory: $HOME"
      stream_output: true
      expand_env_vars: true
```

Run a shell script in a specific directory:

```yaml theme={null}
build:
  - prefect.deployments.steps.run_shell_script:
      script: echo "Hello"
      directory: /path/to/directory
```

Run a script stored in a file:

```yaml theme={null}
build:
  - prefect.deployments.steps.run_shell_script:
      script: "bash path/to/script.sh"
```

Run a command that uses shell operators like pipes:

```yaml theme={null}
push:
  - prefect.deployments.steps.run_shell_script:
      script: echo "hello world" | tr '[:lower:]' '[:upper:]'
      shell: true
```

### `pip_install_requirements`

```python theme={null}
pip_install_requirements(directory: Optional[str] = None, requirements_file: str = 'requirements.txt', stream_output: bool = True) -> dict[str, Any]
```

Installs dependencies from a requirements.txt file.

**Args:**

* `requirements_file`: The requirements.txt to use for installation.
* `directory`: The directory the requirements.txt file is in. Defaults to the current working directory.
* `stream_output`: Whether to stream the output of the `pip install` command to the console

**Returns:**

* A dictionary with the keys `stdout` and `stderr` containing the output of the `pip install` command

**Raises:**

* `subprocess.CalledProcessError`: if the pip install command fails for any reason

## Classes

### `RunShellScriptResult`

The result of a `run_shell_script` step.

**Attributes:**

* `stdout`: The captured standard output of the script.
* `stderr`: The captured standard error of the script.

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-docker-__init__

# `prefect.docker`

*This module is empty or contains only private/internal implementations.*

# docker_image

Source: https://docs.prefect.io/v3/api-ref/python/prefect-docker-docker_image

# `prefect.docker.docker_image`

## Classes

### `DockerImage`

Configuration used to build and push a Docker image for a deployment.

**Attributes:**

* `name`: The name of the Docker image to build, including the registry and repository.
* `tag`: The tag to apply to the built image.
* `dockerfile`: The path to the Dockerfile to use for building the image. If not provided, a default Dockerfile will be generated.
* `stream_progress_to`: A stream to write build and push progress output to.
  Defaults to sys.stdout. Set to None to suppress output.
* `build_backend`: The backend to use for building images. `"docker-py"` (default) uses the docker-py library. `"buildx"` uses python-on-whales for BuildKit/buildx support, enabling features like build secrets, SSH forwarding, and multi-platform builds.
* `**build_kwargs`: Additional keyword arguments to pass to the Docker build request. When `build_backend="docker-py"`, these are forwarded to docker-py's `client.api.build()`. When `build_backend="buildx"`, these are forwarded to `python_on_whales.docker.buildx.build()` and may include `secrets`, `ssh`, `cache_from`, `cache_to`, `platforms`, and `push`.

**Methods:**

#### `build`

```python theme={null}
build(self) -> None
```

#### `push`

```python theme={null}
push(self) -> None
```

#### `reference`

```python theme={null}
reference(self) -> str
```

# engine

Source: https://docs.prefect.io/v3/api-ref/python/prefect-engine

# `prefect.engine`

## Functions

### `handle_engine_signals`

```python theme={null}
handle_engine_signals(flow_run_id: UUID | None = None)
```

Handle signals from the orchestrator to abort or pause the flow run or otherwise handle unexpected exceptions.

This context manager will handle exiting the process depending on the signal received.

**Args:**

* `flow_run_id`: The ID of the flow run to handle signals for.
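As a loose conceptual analogue of such a context manager (not Prefect's implementation; the signal classes and outcome list here are hypothetical), the sketch below intercepts pause/abort signals raised inside the block and records the outcome instead of exiting the process:

```python theme={null}
from contextlib import contextmanager

class PauseSignal(Exception):
    """Stand-in for an orchestrator 'pause' instruction (hypothetical)."""

class AbortSignal(Exception):
    """Stand-in for an orchestrator 'abort' instruction (hypothetical)."""

@contextmanager
def handle_signals_sketch(outcomes: list):
    # Record how the run ended instead of calling sys.exit(), so the
    # sketch stays self-contained; a real handler would exit the process.
    try:
        yield
    except PauseSignal:
        outcomes.append("paused")
    except AbortSignal:
        outcomes.append("aborted")
    else:
        outcomes.append("completed")

outcomes = []
with handle_signals_sketch(outcomes):
    raise PauseSignal()
print(outcomes)  # ['paused']
```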
# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-__init__

# `prefect.events`

*This module is empty or contains only private/internal implementations.*

# actions

Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-actions

# `prefect.events.actions`

## Classes

### `Action`

An Action that may be performed when an Automation is triggered

**Methods:**

#### `describe_for_cli`

```python theme={null}
describe_for_cli(self) -> str
```

A human-readable description of the action

#### `model_validate_list`

```python theme={null}
model_validate_list(cls, obj: Any) -> list[Self]
```

#### `reset_fields`

```python theme={null}
reset_fields(self: Self) -> Self
```

Reset the fields of the model that are in the `_reset_fields` set.

**Returns:**

* A new instance of the model with the reset fields.

### `DoNothing`

Do nothing when an Automation is triggered

**Methods:**

#### `describe_for_cli`

```python theme={null}
describe_for_cli(self) -> str
```

A human-readable description of the action

### `DeploymentAction`

Base class for Actions that operate on Deployments and need to infer them from events

**Methods:**

#### `describe_for_cli`

```python theme={null}
describe_for_cli(self) -> str
```

A human-readable description of the action

#### `selected_deployment_requires_id`

```python theme={null}
selected_deployment_requires_id(self)
```

### `RunDeployment`

Runs the given deployment with the given parameters

**Methods:**

#### `selected_deployment_requires_id`

```python theme={null}
selected_deployment_requires_id(self)
```

### `PauseDeployment`

Pauses the given Deployment

**Methods:**

#### `selected_deployment_requires_id`

```python theme={null}
selected_deployment_requires_id(self)
```

### `ResumeDeployment`

Resumes the given Deployment

**Methods:**

#### `selected_deployment_requires_id`

```python theme={null}
selected_deployment_requires_id(self)
```

### `ChangeFlowRunState`

Changes the state of a flow run associated with the trigger

**Methods:**
#### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `CancelFlowRun` Cancels a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `ResumeFlowRun` Resumes a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `SuspendFlowRun` Suspends a flow run associated with the trigger **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `CallWebhook` Call a webhook when an Automation is triggered. **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `SendNotification` Send a notification when an Automation is triggered **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `WorkPoolAction` Base class for Actions that operate on Work Pools and need to infer them from events **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `PauseWorkPool` Pauses a Work Pool ### `ResumeWorkPool` Resumes a Work Pool ### `WorkQueueAction` Base class for Actions that operate on Work Queues and need to infer them from events **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_work_queue_requires_id` ```python theme={null} selected_work_queue_requires_id(self) -> Self ``` ### `PauseWorkQueue` Pauses a Work Queue **Methods:** #### `selected_work_queue_requires_id` ```python theme={null} selected_work_queue_requires_id(self) -> Self ``` 
### `ResumeWorkQueue` Resumes a Work Queue **Methods:** #### `selected_work_queue_requires_id` ```python theme={null} selected_work_queue_requires_id(self) -> Self ``` ### `AutomationAction` Base class for Actions that operate on Automations and need to infer them from events **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_automation_requires_id` ```python theme={null} selected_automation_requires_id(self) -> Self ``` ### `PauseAutomation` Pauses an Automation **Methods:** #### `selected_automation_requires_id` ```python theme={null} selected_automation_requires_id(self) -> Self ``` ### `ResumeAutomation` Resumes an Automation **Methods:** #### `selected_automation_requires_id` ```python theme={null} selected_automation_requires_id(self) -> Self ``` ### `DeclareIncident` Declares an incident for the triggering event. Only available on Prefect Cloud **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action # clients Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-clients # `prefect.events.clients` ## Functions ### `http_to_ws` ```python theme={null} http_to_ws(url: str) -> str ``` ### `events_in_socket_from_api_url` ```python theme={null} events_in_socket_from_api_url(url: str) -> str ``` ### `events_out_socket_from_api_url` ```python theme={null} events_out_socket_from_api_url(url: str) -> str ``` ### `get_events_client` ```python theme={null} get_events_client(reconnection_attempts: int = 10, checkpoint_every: int = 700, checkpoint_interval: float = 30.0) -> 'EventsClient' ``` ### `get_events_subscriber` ```python theme={null} get_events_subscriber(filter: Optional['EventFilter'] = None, reconnection_attempts: int = 10) -> 'PrefectEventSubscriber' ``` ## Classes ### `EventsClient` The abstract interface for all Prefect Events clients **Methods:** #### `client_name`
```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event ### `NullEventsClient` A Prefect Events client implementation that does nothing **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event ### `AssertingEventsClient` A Prefect Events client that records all events sent to it for inspection during tests. **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event #### `pop_events` ```python theme={null} pop_events(self) -> List[Event] ``` #### `reset` ```python theme={null} reset(cls) -> None ``` Reset all captured instances and their events. For use between tests ### `PrefectEventsClient` A Prefect Events client that streams events to a Prefect server **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event ### `AssertingPassthroughEventsClient` A Prefect Events client that BOTH records all events sent to it for inspection during tests AND sends them to a Prefect server. **Methods:** #### `pop_events` ```python theme={null} pop_events(self) -> list[Event] ``` #### `reset` ```python theme={null} reset(cls) -> None ``` ### `PrefectCloudEventsClient` A Prefect Events client that streams events to a Prefect Cloud Workspace ### `PrefectEventSubscriber` Subscribes to a Prefect event stream, yielding events as they occur. 
Example:

```python theme={null}
from prefect.events.clients import PrefectEventSubscriber
from prefect.events.filters import EventFilter, EventNameFilter

filter = EventFilter(event=EventNameFilter(prefix=["prefect.flow-run."]))

async with PrefectEventSubscriber(filter=filter) as subscriber:
    async for event in subscriber:
        print(event.occurred, event.resource.id, event.event)
```

**Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` ### `PrefectCloudEventSubscriber` **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` ### `PrefectCloudAccountEventSubscriber` # filters Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-filters # `prefect.events.filters` ## Classes ### `AutomationFilterCreated` Filter by `Automation.created`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `AutomationFilterName` Filter by `Automation.name`. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `AutomationFilter` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventDataFilter` A base class for filtering event data.
**Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventOccurredFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventNameFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventResourceFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event?
#### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventRelatedFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventAnyResourceFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventIDFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventTextFilter` Filter by text search across event content. **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event?
#### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventOrder` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `EventFilter` **Methods:** #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? # related Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-related # `prefect.events.related` ## Functions ### `tags_as_related_resources` ```python theme={null} tags_as_related_resources(tags: Iterable[str]) -> List[RelatedResource] ``` ### `object_as_related_resource` ```python theme={null} object_as_related_resource(kind: str, role: str, object: Any) -> RelatedResource ``` ### `related_resources_from_run_context` ```python theme={null} related_resources_from_run_context(client: 'PrefectClient', exclude: Optional[Set[str]] = None) -> List[RelatedResource] ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-schemas-__init__ # `prefect.events.schemas` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-schemas-automations # `prefect.events.schemas.automations` ## Functions ### `trigger_discriminator` ```python theme={null} trigger_discriminator(value: Any) -> str ``` Discriminator for triggers that defaults to 'event' if no type is specified. 
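As a rough illustration of the documented default, a discriminator with this behavior might look like the following. This is a sketch of the documented semantics, not the library's implementation:

```python theme={null}
from typing import Any


def trigger_discriminator(value: Any) -> str:
    # Read the trigger's "type" from a dict key or a model attribute,
    # falling back to "event" when no type is specified.
    if isinstance(value, dict):
        return value.get("type", "event")
    return getattr(value, "type", "event")


assert trigger_discriminator({}) == "event"
assert trigger_discriminator({"type": "compound"}) == "compound"
```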
## Classes ### `Posture` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `Trigger` Base class describing a set of criteria that must be satisfied in order to trigger an automation. **Methods:** #### `actions` ```python theme={null} actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python theme={null} as_automation(self) -> 'AutomationCore' ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `owner_resource` ```python theme={null} owner_resource(self) -> Optional[str] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `set_deployment_id` ```python theme={null} set_deployment_id(self, deployment_id: UUID) -> None ``` ### `ResourceTrigger` Base class for triggers that may filter by the labels of resources. 
**Methods:** #### `actions` ```python theme={null} actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python theme={null} as_automation(self) -> 'AutomationCore' ``` #### `coerce_match` ```python theme={null} coerce_match(cls, v: Any) -> Any ``` #### `coerce_match_related` ```python theme={null} coerce_match_related(cls, v: Any) -> Any ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `owner_resource` ```python theme={null} owner_resource(self) -> Optional[str] ``` #### `set_deployment_id` ```python theme={null} set_deployment_id(self, deployment_id: UUID) -> None ``` ### `EventTrigger` A trigger that fires based on the presence or absence of events within a given period of time. **Methods:** #### `coerce_match` ```python theme={null} coerce_match(cls, v: Any) -> Any ``` #### `coerce_match_related` ```python theme={null} coerce_match_related(cls, v: Any) -> Any ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `enforce_minimum_within_for_proactive_triggers` ```python theme={null} enforce_minimum_within_for_proactive_triggers(cls, data: Dict[str, Any]) -> Dict[str, Any] ``` ### `MetricTriggerOperator` ### `PrefectMetric` ### `MetricTriggerQuery` Defines the query configuration and breaching conditions for a Metric automation's trigger **Methods:** #### `enforce_minimum_range` ```python theme={null} enforce_minimum_range(cls, value: timedelta) -> timedelta ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set.
**Returns:** * A new instance of the model with the reset fields. ### `MetricTrigger` A trigger that fires based on the results of a metric query. **Methods:** #### `coerce_match` ```python theme={null} coerce_match(cls, v: Any) -> Any ``` #### `coerce_match_related` ```python theme={null} coerce_match_related(cls, v: Any) -> Any ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI ### `CompositeTrigger` Requires some number of triggers to have fired within the given time period. **Methods:** #### `actions` ```python theme={null} actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python theme={null} as_automation(self) -> 'AutomationCore' ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `owner_resource` ```python theme={null} owner_resource(self) -> Optional[str] ``` #### `set_deployment_id` ```python theme={null} set_deployment_id(self, deployment_id: UUID) -> None ``` ### `CompoundTrigger` A composite trigger that requires some number of triggers to have fired within the given time period **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `validate_require` ```python theme={null} validate_require(self) -> Self ``` ### `SequenceTrigger` A composite trigger that requires some number of triggers to have fired within the given time period in a specific order **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI ### `AutomationCore` Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources **Methods:** #### 
`model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Automation` # deployment_triggers Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-schemas-deployment_triggers # `prefect.events.schemas.deployment_triggers` Schemas for defining triggers within a Prefect deployment YAML. This is a separate parallel hierarchy for representing triggers so that they can also include the information necessary to create an automation. These triggers should follow the validation rules of the main Trigger class hierarchy as closely as possible (because otherwise users will get validation errors creating triggers), but we can be more liberal with the defaults here to make it simpler to create them from YAML. ## Functions ### `deployment_trigger_discriminator` ```python theme={null} deployment_trigger_discriminator(value: Any) -> str ``` Custom discriminator for deployment triggers that defaults to 'event' if no type is specified. ## Classes ### `BaseDeploymentTrigger` Base class describing a set of criteria that must be satisfied in order to trigger an automation. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentEventTrigger` A trigger that fires based on the presence or absence of events within a given period of time. 
**Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `enforce_minimum_within_for_proactive_triggers` ```python theme={null} enforce_minimum_within_for_proactive_triggers(cls, data: Dict[str, Any]) -> Dict[str, Any] ``` ### `DeploymentMetricTrigger` A trigger that fires based on the results of a metric query. **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI ### `DeploymentCompoundTrigger` A composite trigger that requires some number of triggers to have fired within the given time period **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `validate_require` ```python theme={null} validate_require(self) -> Self ``` ### `DeploymentSequenceTrigger` A composite trigger that requires some number of triggers to have fired within the given time period in a specific order **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-schemas-events # `prefect.events.schemas.events` ## Functions ### `matches` ```python theme={null} matches(expected: str, value: Optional[str]) -> bool ``` Returns true if the given value matches the expected string, which may include a negation prefix ("!this-value") or a wildcard suffix ("any-value-starting-with\*") ## Classes ### `Resource` An observable business object of interest to the user **Methods:** #### `as_label_value_array` ```python theme={null} as_label_value_array(self) -> List[Dict[str, str]] ``` #### `enforce_maximum_labels` ```python theme={null}
enforce_maximum_labels(self) -> Self ``` #### `get` ```python theme={null} get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python theme={null} has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `id` ```python theme={null} id(self) -> str ``` #### `items` ```python theme={null} items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python theme={null} keys(self) -> Iterable[str] ``` #### `labels` ```python theme={null} labels(self) -> LabelDiver ``` #### `name` ```python theme={null} name(self) -> Optional[str] ``` #### `prefect_object_id` ```python theme={null} prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python theme={null} requires_resource_id(self) -> Self ``` ### `RelatedResource` A Resource with a specific role in an Event **Methods:** #### `enforce_maximum_labels` ```python theme={null} enforce_maximum_labels(self) -> Self ``` #### `id` ```python theme={null} id(self) -> str ``` #### `name` ```python theme={null} name(self) -> Optional[str] ``` #### `prefect_object_id` ```python theme={null} prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python theme={null} requires_resource_id(self) -> Self ``` #### `requires_resource_role` ```python theme={null} requires_resource_role(self) -> Self ``` #### `role` ```python theme={null} role(self) -> str ``` ### `Event` The client-side view of an event that has happened to a Resource **Methods:** #### `find_resource_label` ```python theme={null} find_resource_label(self, label: str) -> Optional[str] ``` Finds the value of the given label in this event's resource or one of its related resources. If the label starts with `related::`, search for the first matching label in a related resource with that role. 
#### `involved_resources` ```python theme={null} involved_resources(self) -> Sequence[Resource] ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `resource_in_role` ```python theme={null} resource_in_role(self) -> Mapping[str, RelatedResource] ``` Returns a mapping of roles to the first related resource in that role #### `resources_in_role` ```python theme={null} resources_in_role(self) -> Mapping[str, Sequence[RelatedResource]] ``` Returns a mapping of roles to related resources in that role #### `size_bytes` ```python theme={null} size_bytes(self) -> int ``` ### `ReceivedEvent` The server-side view of an event that has happened to a Resource after it has been received by the server **Methods:** #### `is_set` ```python theme={null} is_set(self) ``` #### `set` ```python theme={null} set(self) -> None ``` Set the flag, notifying all waiters. Unlike `asyncio.Event`, waiters may not be notified immediately when this is called; instead, notification will be placed on the owning loop of each waiter for thread safety. #### `wait` ```python theme={null} wait(self) -> Literal[True] ``` Block until the internal flag is true. If the internal flag is true on entry, return True immediately. Otherwise, block until another `set()` is called, then return True. 
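The role-based accessors above group an event's related resources by their `prefect.resource.role` label. A minimal sketch of that grouping, assuming plain dict resources rather than `RelatedResource` models:

```python theme={null}
from collections import defaultdict


def resources_in_role(related: list[dict[str, str]]) -> dict[str, list[dict[str, str]]]:
    # Group each related resource under its "prefect.resource.role" label.
    grouped = defaultdict(list)
    for resource in related:
        grouped[resource["prefect.resource.role"]].append(resource)
    return dict(grouped)


related = [
    {"prefect.resource.id": "prefect.flow.abc", "prefect.resource.role": "flow"},
    {"prefect.resource.id": "prefect.tag.prod", "prefect.resource.role": "tag"},
    {"prefect.resource.id": "prefect.tag.etl", "prefect.resource.role": "tag"},
]
by_role = resources_in_role(related)
assert [r["prefect.resource.id"] for r in by_role["tag"]] == [
    "prefect.tag.prod",
    "prefect.tag.etl",
]
```

`resource_in_role` is the same idea keeping only the first resource seen in each role.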
### `ResourceSpecification` **Methods:** #### `deepcopy` ```python theme={null} deepcopy(self) -> 'ResourceSpecification' ``` #### `get` ```python theme={null} get(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` #### `includes` ```python theme={null} includes(self, candidates: Iterable[Resource]) -> bool ``` #### `items` ```python theme={null} items(self) -> Iterable[Tuple[str, List[str]]] ``` #### `matches` ```python theme={null} matches(self, resource: Resource) -> bool ``` #### `matches_every_resource` ```python theme={null} matches_every_resource(self) -> bool ``` #### `matches_every_resource_of_kind` ```python theme={null} matches_every_resource_of_kind(self, prefix: str) -> bool ``` #### `pop` ```python theme={null} pop(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` # labelling Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-schemas-labelling # `prefect.events.schemas.labelling` ## Classes ### `LabelDiver` The LabelDiver supports templating use cases for any Labelled object, by presenting the labels as a graph of objects that may be accessed by attribute. 
For example: ```python theme={null} diver = LabelDiver({ 'hello.world': 'foo', 'hello.world.again': 'bar' }) assert str(diver.hello.world) == 'foo' assert str(diver.hello.world.again) == 'bar' ``` ### `Labelled` **Methods:** #### `as_label_value_array` ```python theme={null} as_label_value_array(self) -> List[Dict[str, str]] ``` #### `get` ```python theme={null} get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python theme={null} has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `items` ```python theme={null} items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python theme={null} keys(self) -> Iterable[str] ``` #### `labels` ```python theme={null} labels(self) -> LabelDiver ``` # subscribers Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-subscribers # `prefect.events.subscribers` Flow run subscriber that interleaves events and logs from a flow run ## Classes ### `FlowRunSubscriber` Subscribes to both events and logs for a specific flow run, yielding them in an interleaved stream. This subscriber combines the event stream and log stream for a flow run into a single async iterator. When a terminal event (Completed, Failed, or Crashed) is received, the event subscription stops but log subscription continues for a configurable timeout to catch any straggler logs. # utilities Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-utilities # `prefect.events.utilities` ## Functions ### `emit_event` ```python theme={null} emit_event(event: str, resource: dict[str, str], occurred: datetime.datetime | None = None, related: list[dict[str, str]] | list[RelatedResource] | None = None, payload: dict[str, Any] | None = None, id: UUID | None = None, follows: Event | None = None, **kwargs: dict[str, Any] | None) -> Event | None ``` Send an event to Prefect. **Args:** * `event`: The name of the event that happened. * `resource`: The primary Resource this event concerns. 
* `occurred`: When the event happened from the sender's perspective. Defaults to the current datetime. * `related`: A list of additional Resources involved in this event. * `payload`: An open-ended set of data describing what happened. * `id`: The sender-provided identifier for this event. Defaults to a random UUID. * `follows`: The event that preceded this one. If the preceding event happened more than 5 minutes prior to this event the follows relationship will not be set. **Returns:** * The event that was emitted if the worker is using a client that emits events; otherwise `None`. # worker Source: https://docs.prefect.io/v3/api-ref/python/prefect-events-worker # `prefect.events.worker` ## Functions ### `should_emit_events` ```python theme={null} should_emit_events() -> bool ``` ### `emit_events_to_cloud` ```python theme={null} emit_events_to_cloud() -> bool ``` ### `should_emit_events_to_running_server` ```python theme={null} should_emit_events_to_running_server() -> bool ``` ### `should_emit_events_to_ephemeral_server` ```python theme={null} should_emit_events_to_ephemeral_server() -> bool ``` ## Classes ### `ProcessPoolForwardingEventsClient` An events client that forwards events to a parent process queue. **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event ### `EventsWorker` **Methods:** #### `attach_related_resources_from_context` ```python theme={null} attach_related_resources_from_context(self, event: Event) -> None ``` #### `instance` ```python theme={null} instance(cls: Type[Self], client_type: Optional[Type[EventsClient]] = None) -> Self ``` #### `set_client_override` ```python theme={null} set_client_override(cls, client_type: Optional[Type[EventsClient]], **client_kwargs: Any) -> None ``` # exceptions Source: https://docs.prefect.io/v3/api-ref/python/prefect-exceptions # `prefect.exceptions` Prefect-specific exceptions.
## Functions ### `exception_traceback` ```python theme={null} exception_traceback(exc: Exception) -> str ``` Convert an exception to a printable string with a traceback ## Classes ### `PrefectException` Base exception type for Prefect errors. ### `CrashedRun` Raised when the result from a crashed run is retrieved. This occurs when a string is attached to the state instead of an exception or if the state's data is null. ### `FailedRun` Raised when the result from a failed run is retrieved and an exception is not attached. This occurs when a string is attached to the state instead of an exception or if the state's data is null. ### `CancelledRun` Raised when the result from a cancelled run is retrieved and an exception is not attached. This occurs when a string is attached to the state instead of an exception or if the state's data is null. ### `PausedRun` Raised when the result from a paused run is retrieved. ### `UnfinishedRun` Raised when the result from a run that is not finished is retrieved. For example, if a run is in a SCHEDULED, PENDING, CANCELLING, or RUNNING state. ### `MissingFlowError` Raised when a given flow name is not found in the expected script. ### `UnspecifiedFlowError` Raised when multiple flows are found in the expected script and no name is given. ### `MissingResult` Raised when a result is missing from a state; often when result persistence is disabled and the state is retrieved from the API. ### `ScriptError` Raised when a script errors during evaluation while attempting to load data ### `ParameterTypeError` Raised when a parameter does not pass Pydantic type validation. **Methods:** #### `from_validation_error` ```python theme={null} from_validation_error(cls, exc: ValidationError) -> Self ``` ### `ParameterBindError` Raised when args and kwargs cannot be converted to parameters. 
**Methods:** #### `from_bind_failure` ```python theme={null} from_bind_failure(cls, fn: Callable[..., Any], exc: TypeError, call_args: tuple[Any, ...], call_kwargs: dict[str, Any]) -> Self ``` ### `SignatureMismatchError` Raised when parameters passed to a function do not match its signature. **Methods:** #### `from_bad_params` ```python theme={null} from_bad_params(cls, expected_params: list[str], provided_params: list[str]) -> Self ``` ### `ObjectNotFound` Raised when the client receives a 404 (not found) from the API. ### `ObjectAlreadyExists` Raised when the client receives a 409 (conflict) from the API. ### `ObjectLimitReached` Raised when the client receives a 403 (forbidden) from the API due to reaching an object limit (e.g. maximum number of deployments). ### `ObjectUnsupported` Raised when the client receives a 403 (forbidden) from the API due to an unsupported object (i.e. requires a specific Prefect Cloud tier). ### `UpstreamTaskError` Raised when a task relies on the result of another task but that task is not 'COMPLETE' ### `MissingContextError` Raised when a method is called that requires a task or flow run context to be active but one cannot be found. ### `MissingProfileError` Raised when a profile name does not exist. ### `ReservedArgumentError` Raised when a function used with Prefect has an argument with a name that is reserved for a Prefect feature ### `InvalidNameError` Raised when a name contains characters that are not permitted. ### `PrefectSignal` Base type for signal-like exceptions that should never be caught by users. ### `Abort` Raised when the API sends an 'ABORT' instruction during state proposal. Indicates that the run should exit immediately. ### `Pause` Raised when a flow run is PAUSED and needs to exit for resubmission. ### `ExternalSignal` Base type for external signal-like exceptions that should never be caught by users. ### `TerminationSignal` Raised when a flow run receives a termination signal. 
### `PrefectHTTPStatusError` Raised when the client receives a `Response` that contains an HTTPStatusError. Used to include API error details in the error messages that the client provides users. **Methods:** #### `from_httpx_error` ```python theme={null} from_httpx_error(cls: type[Self], httpx_error: HTTPStatusError) -> Self ``` Generate a `PrefectHTTPStatusError` from an `httpx.HTTPStatusError`. ### `MappingLengthMismatch` Raised when attempting to call Task.map with arguments of different lengths. ### `MappingMissingIterable` Raised when attempting to call Task.map with all static arguments. ### `BlockMissingCapabilities` Raised when a block does not have required capabilities for a given operation. ### `ProtectedBlockError` Raised when an operation is prevented due to block protection. ### `InvalidRepositoryURLError` Raised when an incorrect URL is provided to a GitHub filesystem block. ### `InfrastructureError` A base class for exceptions related to infrastructure blocks. ### `InfrastructureNotFound` Raised when infrastructure is missing, likely because it has exited or been deleted. ### `InfrastructureNotAvailable` Raised when infrastructure is not accessible from the current machine. For example, if a process was spawned on another machine, it cannot be managed. ### `NotPausedError` Raised when attempting to unpause a run that isn't paused. ### `FlowPauseTimeout` Raised when a flow pause times out. ### `FlowRunWaitTimeout` Raised when a flow run takes longer than a given timeout. ### `PrefectImportError` An error raised when a Prefect object cannot be imported due to a move or removal. ### `SerializationError` Raised when an object cannot be serialized. ### `ConfigurationError` Raised when a configuration is invalid. ### `EventTooLarge` Raised when an event exceeds the configured maximum size. ### `ProfileSettingsValidationError` Raised when profile settings are invalid.
### `HashError` Raised when hashing objects fails. # filesystems Source: https://docs.prefect.io/v3/api-ref/python/prefect-filesystems # `prefect.filesystems` ## Classes ### `ReadableFileSystem` **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected.
**Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by a given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised.
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided.
**Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. 
If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. 
A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by a given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context.
The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
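As a rough illustration of the accepted reference shapes, the sketch below normalizes a reference into dictionary form. `normalize_block_ref` is a hypothetical helper written for this example, not part of Prefect's API, and Prefect's internal handling may differ.

```python
from typing import Any, Union
from uuid import UUID

def normalize_block_ref(ref: Union[str, UUID, dict[str, Any]]) -> dict[str, Any]:
    """Hypothetical helper: coerce a block document reference to dictionary form."""
    if isinstance(ref, (str, UUID)):
        # A bare string or UUID is treated as a block document ID.
        return {"block_document_id": str(ref)}
    if isinstance(ref, dict) and (
        "block_document_id" in ref or "block_document_slug" in ref
    ):
        # Already one of the supported dictionary reference formats.
        return ref
    raise ValueError(f"Invalid block document reference: {ref!r}")
```

Either dictionary key identifies the block document; a slug takes the `block-type/document-name` form shown in the `load` examples above.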
#### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. 
#### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `WritableFileSystem` **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. 
A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by a given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.
The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes the block available for configuration with the current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
**Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. 
#### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. 
If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client:
'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by a given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
#### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. 
#### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` ### `ReadableDeploymentStorage` **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. #### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. 
A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 await loaded_block.asave("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document.
The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified reference. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
**Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. 
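`get_block_capabilities` recursively collects the capabilities declared anywhere in a block's class hierarchy into one frozenset. A minimal sketch of that union over the MRO, using a `_block_schema_capabilities` class attribute as an assumed convention rather than Prefect's actual internals:

```python
from typing import FrozenSet


class Base:
    _block_schema_capabilities: list[str] = []

    @classmethod
    def get_capabilities(cls) -> FrozenSet[str]:
        # Union the capability lists declared by every class in the MRO,
        # so capabilities from all parent classes are collected.
        return frozenset(
            cap
            for klass in cls.__mro__
            for cap in getattr(klass, "_block_schema_capabilities", None) or []
        )


class Readable(Base):
    _block_schema_capabilities = ["read-path"]


class Writable(Base):
    _block_schema_capabilities = ["write-path"]


class FileSystem(Readable, Writable):
    pass
```

A class inheriting from both `Readable` and `Writable` reports both capabilities even though normal attribute lookup would only find the first parent's list.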
#### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. #### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. 
If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`.
If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified reference. #### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent.
This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. #### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `WritableDeploymentStorage` **Methods:** #### `adelete` ```python theme={null} adelete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously deletes the block document with the specified name. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. 
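Several sync methods above are documented as dispatching to their async counterpart (`save` to `asave`, `delete` to `adelete`, and so on) when called from an async context. One common way to implement that detection, shown here as a sketch rather than Prefect's actual mechanism, is to probe for a running event loop:

```python
import asyncio


def in_async_context() -> bool:
    """True when called from inside a running event loop (i.e., async code)."""
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False


async def asave(name: str) -> str:
    # Stand-in for the async variant of a sync-compatible method.
    return f"saved:{name}"


def save(name: str):
    # Sync-compatible entry point: hand back the coroutine when a loop is
    # already running (the caller must await it), otherwise drive it to
    # completion with asyncio.run.
    if in_async_context():
        return asave(name)
    return asyncio.run(asave(name))
```

This is why the docs note that arguments like `client` are "ignored when called from a synchronous context": the sync path constructs its own event loop and resources.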
#### `aload` ```python theme={null} aload(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `aload`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name.
**Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Custom.aload("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = await Block.aload("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = await Custom.aload("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 await loaded_block.asave("my-custom-message", overwrite=True) ``` #### `aload_from_ref` ```python theme={null} aload_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Asynchronously retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `aload_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `aload_from_ref` must be called with `validate` set to False to prevent a validation error.
In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified reference. #### `annotation_refers_to_block_class` ```python theme={null} annotation_refers_to_block_class(annotation: Any) -> bool ``` #### `aregister_type_and_schema` ```python theme={null} aregister_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Asynchronously makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. #### `asave` ```python theme={null} asave(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Asynchronously saves the values of a block as a block document. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document.
If not provided, the default client will be injected. **Returns:** * The ID of the saved block document. #### `block_initialization` ```python theme={null} block_initialization(self) -> None ``` #### `delete` ```python theme={null} delete(cls, name: str, client: Optional['PrefectClient'] = None) -> None ``` Deletes the block document with the specified name. This function will dispatch to `adelete` when called from an async context. **Args:** * `name`: The name of the block document to delete. * `client`: The client to use to delete the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. #### `get_block_capabilities` ```python theme={null} get_block_capabilities(cls) -> FrozenSet[str] ``` Returns the block capabilities for this Block. Recursively collects all block capabilities of all parent classes into a single frozenset. #### `get_block_class_from_key` ```python theme={null} get_block_class_from_key(cls: type[Self], key: str) -> type[Self] ``` Retrieve the block class implementation given a key. #### `get_block_class_from_schema` ```python theme={null} get_block_class_from_schema(cls: type[Self], schema: BlockSchema) -> type[Self] ``` Retrieve the block class implementation given a schema. #### `get_block_placeholder` ```python theme={null} get_block_placeholder(self) -> str ``` Returns the block placeholder for the current block which can be used for templating. **Returns:** * The block placeholder for the current block in the format `prefect.blocks.{block_type_name}.{block_document_name}` **Raises:** * `BlockNotSavedError`: Raised if the block has not been saved. If a block has not been saved, the return value will be `None`. 
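The placeholder format documented for `get_block_placeholder` can be sketched in plain Python; the block type and document names below are made up for illustration:

```python
def block_placeholder(block_type_name: str, block_document_name: str) -> str:
    # Format documented above:
    # prefect.blocks.{block_type_name}.{block_document_name}
    return f"prefect.blocks.{block_type_name}.{block_document_name}"


# e.g. a saved "secret" block named "my-api-key" templates as:
placeholder = block_placeholder("secret", "my-api-key")
```

The resulting string is what you would reference when templating, for example in a deployment configuration.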
#### `get_block_schema_version` ```python theme={null} get_block_schema_version(cls) -> str ``` #### `get_block_type_name` ```python theme={null} get_block_type_name(cls) -> str ``` #### `get_block_type_slug` ```python theme={null} get_block_type_slug(cls) -> str ``` #### `get_code_example` ```python theme={null} get_code_example(cls) -> Optional[str] ``` Returns the code example for the given block. Attempts to parse code example from the class docstring if an override is not provided. #### `get_description` ```python theme={null} get_description(cls) -> Optional[str] ``` Returns the description for the current block. Attempts to parse description from class docstring if an override is not defined. #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `is_block_class` ```python theme={null} is_block_class(block: Any) -> TypeGuard[type['Block']] ``` #### `load` ```python theme={null} load(cls, name: str, validate: bool = True, client: Optional['PrefectClient'] = None) -> 'Self' ``` Retrieves data from the block document with the given name for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. If a block document for a given block type is saved with a different schema than the current class calling `load`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `name`: The name or slug of the block document. 
A block document slug is a string with the format `<block_type_slug>/<block_document_name>` * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `name` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. **Raises:** * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified name. **Examples:** Load from a Block subclass with a block document name: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Custom.load("my-custom-message") ``` Load from Block with a block document slug: ```python theme={null} class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") loaded_block = Block.load("custom/my-custom-message") ``` Migrate a block document to a new schema: ```python theme={null} # original class class Custom(Block): message: str Custom(message="Hello!").save("my-custom-message") # Updated class with new required field class Custom(Block): message: str number_of_ducks: int loaded_block = Custom.load("my-custom-message", validate=False) # Prints UserWarning about schema mismatch loaded_block.number_of_ducks = 42 loaded_block.save("my-custom-message", overwrite=True) ``` #### `load_from_ref` ```python theme={null} load_from_ref(cls, ref: Union[str, UUID, dict[str, Any]], validate: bool = True, client: 'PrefectClient | None' = None) -> Self ``` Retrieves data from the block document by given reference for the block type that corresponds with the current class and returns an instantiated version of the current class with the data stored in the block document. This function will dispatch to `aload_from_ref` when called from an async context.
The provided reference can be a block document ID or reference data in dictionary format. Supported dictionary reference formats are: * `{"block_document_id": <block_document_id>}` * `{"block_document_slug": <block_document_slug>}` If a block document for a given block type is saved with a different schema than the current class calling `load_from_ref`, a warning will be raised. If the current class schema is a subset of the block document schema, the block can be loaded as normal using the default `validate = True`. If the current class schema is a superset of the block document schema, `load_from_ref` must be called with `validate` set to False to prevent a validation error. In this case, the block attributes will default to `None` and must be set manually and saved to a new block document before the block can be used as expected. **Args:** * `ref`: The reference to the block document. This can be a block document ID, or one of the supported dictionary reference formats. * `validate`: If False, the block document will be loaded without Pydantic validating the block schema. This is useful if the block schema has changed client-side since the block document referred to by `ref` was saved. * `client`: The client to use to load the block document. If not provided, the default client will be injected. This is ignored when called from a synchronous context. **Raises:** * `ValueError`: If an invalid reference format is provided. * `ValueError`: If the requested block document is not found. **Returns:** * An instance of the current class hydrated with the data stored in the block document with the specified reference.
#### `model_dump` ```python theme={null} model_dump(self) -> dict[str, Any] ``` #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: dict[str, Any] | Any) -> Self ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `register_type_and_schema` ```python theme={null} register_type_and_schema(cls, client: Optional['PrefectClient'] = None) -> None ``` Makes block available for configuration with current Prefect API. Recursively registers all nested blocks. Registration is idempotent. This function will dispatch to `aregister_type_and_schema` when called from an async context. **Args:** * `client`: Optional client to use for registering type and schema with the Prefect API. A new client will be created and used if one is not provided. This is ignored when called from a synchronous context. #### `save` ```python theme={null} save(self, name: Optional[str] = None, overwrite: bool = False, client: Optional['PrefectClient'] = None) -> UUID ``` Saves the values of a block as a block document. This function will dispatch to `asave` when called from an async context. **Args:** * `name`: User specified name to give saved block document which can later be used to load the block document. * `overwrite`: Boolean value specifying if values should be overwritten if a block document with the specified name already exists. * `client`: The client to use to save the block document. If not provided, the default client will be injected. 
This is ignored when called from a synchronous context. **Returns:** * The ID of the saved block document. #### `ser_model` ```python theme={null} ser_model(self, handler: SerializerFunctionWrapHandler, info: SerializationInfo) -> Any ``` #### `validate_block_type_slug` ```python theme={null} validate_block_type_slug(cls, values: Any) -> Any ``` Validates that the `block_type_slug` in the input values matches the expected block type slug for the class. This helps pydantic to correctly discriminate between different Block subclasses when validating Union types of Blocks. ### `LocalFileSystem` Store data as a file on a local file system. **Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the block's basepath to the current working directory. #### `aput_directory` ```python theme={null} aput_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the current working directory to the block's basepath. An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore. #### `aread_path` ```python theme={null} aread_path(self, path: str) -> bytes ``` #### `awrite_path` ```python theme={null} awrite_path(self, path: str, content: bytes) -> str ``` #### `cast_pathlib` ```python theme={null} cast_pathlib(cls, value: str | Path | None) -> str | None ``` #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. 
Defaults to copying the entire contents of the block's basepath to the current working directory. #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` Copies a directory from one place to another on the local filesystem. Defaults to copying the entire contents of the current working directory to the block's basepath. An `ignore_file` path may be provided that can include gitignore style expressions for filepaths to ignore. #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> str ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` ### `RemoteFileSystem` Store data as a file on a remote file system. Supports any remote file system supported by `fsspec`. The file system is specified using a protocol. For example, "s3://my-bucket/my-folder/" will use S3. **Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory. 
#### `aput_directory` ```python theme={null} aput_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None, overwrite: bool = True) -> int ``` Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath. #### `aread_path` ```python theme={null} aread_path(self, path: str) -> bytes ``` #### `awrite_path` ```python theme={null} awrite_path(self, path: str, content: bytes) -> str ``` #### `check_basepath` ```python theme={null} check_basepath(cls, value: str) -> str ``` #### `filesystem` ```python theme={null} filesystem(self) -> fsspec.AbstractFileSystem ``` #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory. #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None, overwrite: bool = True) -> int ``` Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath. 
#### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> str ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` ### `SMB` Store data as a file on an SMB share. **Methods:** #### `aget_directory` ```python theme={null} aget_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> bytes ``` Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory. #### `aput_directory` ```python theme={null} aput_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath. #### `aread_path` ```python theme={null} aread_path(self, path: str) -> bytes ``` #### `awrite_path` ```python theme={null} awrite_path(self, path: str, content: bytes) -> str ``` #### `basepath` ```python theme={null} basepath(self) -> str ``` #### `filesystem` ```python theme={null} filesystem(self) -> RemoteFileSystem ``` #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> bytes ``` Downloads a directory from a given remote path to a local directory. Defaults to downloading the entire contents of the block's basepath to the current working directory. 
#### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> int ``` Uploads a directory from a given local path to a remote directory. Defaults to uploading the entire contents of the current working directory to the block's basepath. #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> bytes ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> str ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` ### `NullFileSystem` A file system that does not store any data. 
**Methods:** #### `get_directory` ```python theme={null} get_directory(self, from_path: Optional[str] = None, local_path: Optional[str] = None) -> None ``` #### `put_directory` ```python theme={null} put_directory(self, local_path: Optional[str] = None, to_path: Optional[str] = None, ignore_file: Optional[str] = None) -> None ``` #### `read_path` ```python theme={null} read_path(self, path: str) -> None ``` #### `write_path` ```python theme={null} write_path(self, path: str, content: bytes) -> None ``` # flow_engine Source: https://docs.prefect.io/v3/api-ref/python/prefect-flow_engine # `prefect.flow_engine` ## Functions ### `load_flow_run` ```python theme={null} load_flow_run(flow_run_id: UUID) -> FlowRun ``` ### `load_flow` ```python theme={null} load_flow(flow_run: FlowRun) -> Flow[..., Any] ``` ### `load_flow_and_flow_run` ```python theme={null} load_flow_and_flow_run(flow_run_id: UUID) -> tuple[FlowRun, Flow[..., Any]] ``` ### `send_heartbeats_sync` ```python theme={null} send_heartbeats_sync(engine: 'FlowRunEngine[Any, Any]') -> Generator[None, None, None] ``` ### `send_heartbeats_async` ```python theme={null} send_heartbeats_async(engine: 'AsyncFlowRunEngine[Any, Any]') -> AsyncGenerator[None, None] ``` ### `run_flow_sync` ```python theme={null} run_flow_sync(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[Any]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_flow_async` ```python theme={null} run_flow_async(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[Any]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_generator_flow_sync` ```python theme={null} run_generator_flow_sync(flow: Flow[P, R], 
flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[Any]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> Generator[R, None, None] ``` ### `run_generator_flow_async` ```python theme={null} run_generator_flow_async(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[R]]] = None, return_type: Literal['state', 'result'] = 'result', context: Optional[dict[str, Any]] = None) -> AsyncGenerator[R, None] ``` ### `run_flow` ```python theme={null} run_flow(flow: Flow[P, R], flow_run: Optional[FlowRun] = None, parameters: Optional[Dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[R]]] = None, return_type: Literal['state', 'result'] = 'result', error_logger: Optional[logging.Logger] = None, context: Optional[dict[str, Any]] = None) -> R | State | None | Coroutine[Any, Any, R | State | None] | Generator[R, None, None] | AsyncGenerator[R, None] ``` ### `run_flow_in_subprocess` ```python theme={null} run_flow_in_subprocess(flow: 'Flow[..., Any]', flow_run: 'FlowRun | None' = None, parameters: dict[str, Any] | None = None, wait_for: Iterable[PrefectFuture[Any]] | None = None, context: dict[str, Any] | None = None, env: dict[str, str] | None = None) -> multiprocessing.context.SpawnProcess ``` Run a flow in a subprocess. Note the result of the flow will only be accessible if the flow is configured to persist its result. **Args:** * `flow`: The flow to run. * `flow_run`: The flow run object containing run metadata. * `parameters`: The parameters to use when invoking the flow. * `wait_for`: The futures to wait for before starting the flow. * `context`: A serialized context to hydrate before running the flow. If not provided, the current context will be used. 
A serialized context should be provided if this function is called in a separate memory space from the parent run (e.g. in a subprocess or on another machine). * `env`: Additional environment variables to set in the subprocess. **Returns:** * A multiprocessing.context.SpawnProcess representing the process that is running the flow. ## Classes ### `FlowRunTimeoutError` Raised when a flow run exceeds its defined timeout. ### `BaseFlowRunEngine` **Methods:** #### `cancel_all_tasks` ```python theme={null} cancel_all_tasks(self) -> None ``` #### `heartbeat_seconds` ```python theme={null} heartbeat_seconds(self) -> Optional[int] ``` Get the heartbeat interval from settings. #### `is_pending` ```python theme={null} is_pending(self) -> bool ``` #### `is_running` ```python theme={null} is_running(self) -> bool ``` #### `state` ```python theme={null} state(self) -> State ``` ### `FlowRunEngine` **Methods:** #### `begin_run` ```python theme={null} begin_run(self) -> State ``` #### `call_flow_fn` ```python theme={null} call_flow_fn(self) -> Union[R, Coroutine[Any, Any, R]] ``` Convenience method to call the flow function. Returns a coroutine if the flow is async. 
#### `call_hooks` ```python theme={null} call_hooks(self, state: Optional[State] = None) -> None ``` #### `client` ```python theme={null} client(self) -> SyncPrefectClient ``` #### `create_flow_run` ```python theme={null} create_flow_run(self, client: SyncPrefectClient) -> FlowRun ``` #### `handle_crash` ```python theme={null} handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python theme={null} handle_exception(self, exc: Exception, msg: Optional[str] = None, result_store: Optional[ResultStore] = None) -> State ``` #### `handle_success` ```python theme={null} handle_success(self, result: R) -> R ``` #### `handle_timeout` ```python theme={null} handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python theme={null} initialize_run(self) ``` Enters a client context and creates a flow run if needed. #### `load_subflow_run` ```python theme={null} load_subflow_run(self, parent_task_run: TaskRun, client: SyncPrefectClient, context: FlowRunContext) -> Union[FlowRun, None] ``` This method attempts to load an existing flow run for a subflow task run, if appropriate. If the parent task run is in a final but not COMPLETED state, and not being rerun, then we attempt to load an existing flow run instead of creating a new one. This will prevent the engine from running the subflow again. If no existing flow run is found, or if the subflow should be rerun, then no flow run is returned. #### `result` ```python theme={null} result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python theme={null} run_context(self) ``` #### `set_state` ```python theme={null} set_state(self, state: State, force: bool = False) -> State ``` #### `setup_run_context` ```python theme={null} setup_run_context(self, client: Optional[SyncPrefectClient] = None) ``` #### `start` ```python theme={null} start(self) -> Generator[None, None, None] ``` ### `AsyncFlowRunEngine` Async version of the flow run engine. 
NOTE: This engine has not been fully asyncified yet, which may prevent async flows from running fully asynchronously. **Methods:** #### `begin_run` ```python theme={null} begin_run(self) -> State ``` #### `call_flow_fn` ```python theme={null} call_flow_fn(self) -> Coroutine[Any, Any, R] ``` Convenience method to call the flow function. Returns a coroutine if the flow is async. #### `call_hooks` ```python theme={null} call_hooks(self, state: Optional[State] = None) -> None ``` #### `client` ```python theme={null} client(self) -> PrefectClient ``` #### `create_flow_run` ```python theme={null} create_flow_run(self, client: PrefectClient) -> FlowRun ``` #### `handle_crash` ```python theme={null} handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python theme={null} handle_exception(self, exc: Exception, msg: Optional[str] = None, result_store: Optional[ResultStore] = None) -> State ``` #### `handle_success` ```python theme={null} handle_success(self, result: R) -> R ``` #### `handle_timeout` ```python theme={null} handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python theme={null} initialize_run(self) ``` Enters a client context and creates a flow run if needed. #### `load_subflow_run` ```python theme={null} load_subflow_run(self, parent_task_run: TaskRun, client: PrefectClient, context: FlowRunContext) -> Union[FlowRun, None] ``` This method attempts to load an existing flow run for a subflow task run, if appropriate. If the parent task run is in a final but not COMPLETED state, and not being rerun, then we attempt to load an existing flow run instead of creating a new one. This will prevent the engine from running the subflow again. If no existing flow run is found, or if the subflow should be rerun, then no flow run is returned. 
#### `result` ```python theme={null} result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python theme={null} run_context(self) ``` #### `set_state` ```python theme={null} set_state(self, state: State, force: bool = False) -> State ``` #### `setup_run_context` ```python theme={null} setup_run_context(self, client: Optional[PrefectClient] = None) ``` #### `start` ```python theme={null} start(self) -> AsyncGenerator[None, None] ``` # flow_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-flow_runs # `prefect.flow_runs` ## Functions ### `wait_for_flow_run` ```python theme={null} wait_for_flow_run(flow_run_id: UUID, timeout: int | None = 10800, poll_interval: int | None = None, client: 'PrefectClient | None' = None, log_states: bool = False) -> FlowRun ``` Waits for the Prefect flow run to finish and returns the `FlowRun`. **Args:** * `flow_run_id`: The flow run ID for the flow run to wait for. * `timeout`: The wait timeout in seconds. Defaults to 10800 (3 hours). * `poll_interval`: Deprecated; polling is no longer used to wait for flow runs. * `client`: Optional Prefect client. If not provided, one will be injected. * `log_states`: If True, log state changes. Defaults to False. **Returns:** * The finished flow run. **Raises:** * `prefect.exceptions.FlowWaitTimeout`: If the flow run exceeds the timeout. 
**Examples:** Create a flow run for a deployment and wait for it to finish: ```python theme={null} import asyncio from prefect.client.orchestration import get_client from prefect.flow_runs import wait_for_flow_run async def main(): async with get_client() as client: flow_run = await client.create_flow_run_from_deployment(deployment_id="my-deployment-id") flow_run = await wait_for_flow_run(flow_run_id=flow_run.id) print(flow_run.state) if __name__ == "__main__": asyncio.run(main()) ``` Trigger multiple flow runs and wait for them to finish: ```python theme={null} import asyncio from prefect.client.orchestration import get_client from prefect.flow_runs import wait_for_flow_run async def main(num_runs: int): async with get_client() as client: flow_runs = [ await client.create_flow_run_from_deployment(deployment_id="my-deployment-id") for _ in range(num_runs) ] coros = [wait_for_flow_run(flow_run_id=flow_run.id) for flow_run in flow_runs] finished_flow_runs = await asyncio.gather(*coros) print([flow_run.state for flow_run in finished_flow_runs]) if __name__ == "__main__": asyncio.run(main(num_runs=10)) ``` ### `apause_flow_run` ```python theme={null} apause_flow_run(wait_for_input: Type[T] | None = None, timeout: int = 3600, poll_interval: int = 10, key: str | None = None) -> T | None ``` Pauses the current flow run by blocking execution until resumed. This is the async version of `pause_flow_run`. When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time. **Args:** * `timeout`: the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. 
* `poll_interval`: The number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds. * `key`: An optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior. * `wait_for_input`: a subclass of `RunInput` or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function. Example: ```python theme={null} @task async def task_one(): for i in range(3): await asyncio.sleep(1) @flow async def my_flow(): terminal_state = await task_one.submit(return_state=True) if terminal_state.type == StateType.COMPLETED: print("Task one succeeded! Pausing flow run..") await apause_flow_run(timeout=2) else: print("Task one failed. Skipping pause flow run..") ``` ### `pause_flow_run` ```python theme={null} pause_flow_run(wait_for_input: Type[T] | None = None, timeout: int = 3600, poll_interval: int = 10, key: str | None = None) -> T | None ``` Pauses the current flow run by blocking execution until resumed. This function will dispatch to `apause_flow_run` when called from an async context. When called within a flow run, execution will block and no downstream tasks will run until the flow is resumed. Task runs that have already started will continue running. A timeout parameter can be passed that will fail the flow run if it has not been resumed within the specified time. **Args:** * `timeout`: the number of seconds to wait for the flow to be resumed before failing. Defaults to 1 hour (3600 seconds). 
If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. * `poll_interval`: The number of seconds between checking whether the flow has been resumed. Defaults to 10 seconds. * `key`: An optional key to prevent calling pauses more than once. This defaults to the number of pauses observed by the flow so far, and prevents pauses that use the "reschedule" option from running the same pause twice. A custom key can be supplied for custom pausing behavior. * `wait_for_input`: a subclass of `RunInput` or any type supported by Pydantic. If provided when the flow pauses, the flow will wait for the input to be provided before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function. Example: ```python theme={null} @task def task_one(): for i in range(3): sleep(1) @flow def my_flow(): terminal_state = task_one.submit(return_state=True) if terminal_state.type == StateType.COMPLETED: print("Task one succeeded! Pausing flow run..") pause_flow_run(timeout=2) else: print("Task one failed. Skipping pause flow run..") ``` ### `asuspend_flow_run` ```python theme={null} asuspend_flow_run(wait_for_input: Type[T] | None = None, flow_run_id: UUID | None = None, timeout: int | None = None, key: str | None = None) -> T | None ``` Suspends a flow run by stopping code execution until resumed. This is the async version of `suspend_flow_run`. When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the `persist_result` option. **Args:** * `flow_run_id`: a flow run id. 
If supplied, this function will attempt to suspend the specified flow run. If not supplied, it will attempt to suspend the current flow run. * `timeout`: the number of seconds to wait for the flow to be resumed before failing. Defaults to no timeout. If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. * `key`: An optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior. * `wait_for_input`: a subclass of `RunInput` or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function. ### `suspend_flow_run` ```python theme={null} suspend_flow_run(wait_for_input: Type[T] | None = None, flow_run_id: UUID | None = None, timeout: int | None = None, key: str | None = None) -> T | None ``` Suspends a flow run by stopping code execution until resumed. This function will dispatch to `asuspend_flow_run` when called from an async context. When suspended, the flow run will continue execution until the NEXT task is orchestrated, at which point the flow will exit. Any tasks that have already started will run until completion. When resumed, the flow run will be rescheduled to finish execution. In order to suspend a flow run in this way, the flow needs to have an associated deployment and results need to be configured with the `persist_result` option. **Args:** * `flow_run_id`: a flow run id. If supplied, this function will attempt to suspend the specified flow run. If not supplied, it will attempt to suspend the current flow run. * `timeout`: the number of seconds to wait for the flow to be resumed before failing. 
Defaults to no timeout. If the pause timeout exceeds any configured flow-level timeout, the flow might fail even after resuming. * `key`: An optional key to prevent calling suspend more than once. This defaults to a random string and prevents suspends from running the same suspend twice. A custom key can be supplied for custom suspending behavior. * `wait_for_input`: a subclass of `RunInput` or any type supported by Pydantic. If provided when the flow suspends, the flow will remain suspended until receiving the input before resuming. If the flow is resumed without providing the input, the flow will fail. If the flow is resumed with the input, the flow will resume and the input will be loaded and returned from this function. ### `aresume_flow_run` ```python theme={null} aresume_flow_run(flow_run_id: UUID, run_input: dict[str, Any] | None = None) -> None ``` Resumes a paused flow asynchronously. **Args:** * `flow_run_id`: the `flow_run_id` to resume * `run_input`: a dictionary of inputs to provide to the flow run. ### `resume_flow_run` ```python theme={null} resume_flow_run(flow_run_id: UUID, run_input: dict[str, Any] | None = None) -> None ``` Resumes a paused flow. **Args:** * `flow_run_id`: the `flow_run_id` to resume * `run_input`: a dictionary of inputs to provide to the flow run. # flows Source: https://docs.prefect.io/v3/api-ref/python/prefect-flows # `prefect.flows` Module containing the base workflow class and decorator - for most use cases, using the `@flow` decorator is preferred. 
## Functions ### `bind_flow_to_infrastructure` ```python theme={null} bind_flow_to_infrastructure(flow: Flow[P, R], work_pool: str, worker_cls: type['BaseWorker[Any, Any, Any]'], job_variables: dict[str, Any] | None = None, launcher: BundleLauncher | None = None, include_files: Sequence[str] | None = None) -> InfrastructureBoundFlow[P, R] ``` ### `select_flow` ```python theme={null} select_flow(flows: Iterable[Flow[P, R]], flow_name: Optional[str] = None, from_message: Optional[str] = None) -> Flow[P, R] ``` Select the only flow in an iterable or a flow specified by name. **Returns:** * A single flow object **Raises:** * `MissingFlowError`: If no flows exist in the iterable * `MissingFlowError`: If a flow name is provided and that flow does not exist * `UnspecifiedFlowError`: If multiple flows exist but no flow name was provided ### `load_flow_from_entrypoint` ```python theme={null} load_flow_from_entrypoint(entrypoint: str, use_placeholder_flow: bool = True) -> Flow[P, Any] ``` Extract a flow object from a script at an entrypoint by running all of the code in the file. **Args:** * `entrypoint`: a string in the format `<path_to_script>:<flow_func_name>`, a string in the format `<path_to_script>:<class_name>.<flow_method_name>`, or a module path to a flow function * `use_placeholder_flow`: if True, use a placeholder Flow object if the actual flow object cannot be loaded from the entrypoint (e.g. dependencies are missing) **Returns:** * The flow object from the script **Raises:** * `ScriptError`: If an exception is encountered while running the script * `MissingFlowError`: If the flow function specified in the entrypoint does not exist ### `load_function_and_convert_to_flow` ```python theme={null} load_function_and_convert_to_flow(entrypoint: str) -> Flow[P, Any] ``` Loads a function from an entrypoint and converts it to a flow if it is not already a flow. ### `serve` ```python theme={null} serve(*args: 'RunnerDeployment', **kwargs: Any) -> None ``` Serve the provided list of deployments. **Args:** * `*args`: A list of deployments to serve. 
* `pause_on_shutdown`: A boolean for whether or not to automatically pause deployment schedules on shutdown. * `print_starting_message`: Whether or not to print message to the console on startup. * `limit`: The maximum number of runs that can be executed concurrently. * `**kwargs`: Additional keyword arguments to pass to the runner. **Examples:** Prepare two deployments and serve them: ```python theme={null} import datetime from prefect import flow, serve @flow def my_flow(name): print(f"hello {name}") @flow def my_other_flow(name): print(f"goodbye {name}") if __name__ == "__main__": # Run once a day hello_deploy = my_flow.to_deployment( "hello", tags=["dev"], interval=datetime.timedelta(days=1) ) # Run every Sunday at 4:00 AM bye_deploy = my_other_flow.to_deployment( "goodbye", tags=["dev"], cron="0 4 * * sun" ) serve(hello_deploy, bye_deploy) ``` ### `aserve` ```python theme={null} aserve(*args: 'RunnerDeployment', **kwargs: Any) -> None ``` Asynchronously serve the provided list of deployments. Use `serve` instead if calling from a synchronous context. **Args:** * `*args`: A list of deployments to serve. * `pause_on_shutdown`: A boolean for whether or not to automatically pause deployment schedules on shutdown. * `print_starting_message`: Whether or not to print message to the console on startup. * `limit`: The maximum number of runs that can be executed concurrently. * `**kwargs`: Additional keyword arguments to pass to the runner. 
**Examples:** Prepare a deployment and an asynchronous initialization function and serve them: ```python theme={null} import asyncio import datetime from prefect import flow, aserve, get_client async def init(): await set_concurrency_limit() async def set_concurrency_limit(): async with get_client() as client: await client.create_concurrency_limit(tag='dev', concurrency_limit=3) @flow async def my_flow(name): print(f"hello {name}") async def main(): # Initialization function await init() # Run once a day hello_deploy = await my_flow.to_deployment( "hello", tags=["dev"], interval=datetime.timedelta(days=1) ) await aserve(hello_deploy) if __name__ == "__main__": asyncio.run(main()) ``` ### `load_flow_from_flow_run` ```python theme={null} load_flow_from_flow_run(client: 'PrefectClient', flow_run: 'FlowRun', ignore_storage: bool = False, storage_base_path: Optional[str] = None, use_placeholder_flow: bool = True) -> Flow[..., Any] ``` Load a flow from the location/script provided in a deployment's storage document. If `ignore_storage=True` is provided, no pull from remote storage occurs. This flag is largely for testing, and assumes the flow is already available locally. ### `load_placeholder_flow` ```python theme={null} load_placeholder_flow(entrypoint: str, raises: Exception) -> Flow[P, Any] ``` Load a placeholder flow that is initialized with the same arguments as the flow specified in the entrypoint. If called, the flow will raise `raises`. This is useful when a flow can't be loaded due to missing dependencies or other issues but the base metadata defining the flow is still needed. **Args:** * `entrypoint`: a string in the format `<path_to_script>:<flow_func_name>` or a module path to a flow function * `raises`: an exception to raise when the flow is called ### `safe_load_flow_from_entrypoint` ```python theme={null} safe_load_flow_from_entrypoint(entrypoint: str) -> Optional[Flow[P, Any]] ``` Safely load a Prefect flow from an entrypoint string. Returns None if loading fails. 
**Args:**

* `entrypoint`: A string identifying the flow to load. Can be in one of the following formats:
  * `<path_to_script>:<flow_func_name>`
  * `<path_to_script>:<class_name>.<flow_method_name>`
  * `<module_path>.<flow_func_name>`

**Returns:**

* Optional\[Flow]: The loaded Prefect flow object, or None if loading fails due to errors (e.g. unresolved dependencies, syntax errors, or missing objects).

### `load_flow_arguments_from_entrypoint`

```python theme={null}
load_flow_arguments_from_entrypoint(entrypoint: str, arguments: Optional[Union[list[str], set[str]]] = None) -> dict[str, Any]
```

Extract flow arguments from an entrypoint string.

Loads the source code of the entrypoint and extracts the flow arguments from the `flow` decorator.

**Args:**

* `entrypoint`: a string in the format `<path_to_script>:<flow_func_name>` or a module path to a flow function

### `is_entrypoint_async`

```python theme={null}
is_entrypoint_async(entrypoint: str) -> bool
```

Determine if the function specified in the entrypoint is asynchronous.

**Args:**

* `entrypoint`: A string in the format `<path_to_script>:<func_name>` or a module path to a function.

**Returns:**

* True if the function is asynchronous, False otherwise.

## Classes

### `FlowStateHook`

A callable that is invoked when a flow enters a given state.

### `Flow`

A Prefect workflow definition.

Wraps a function with an entrypoint to the Prefect engine. To preserve the input and output types, we use the generic type variables `P` and `R` for "Parameters" and "Returns" respectively.

**Args:**

* `fn`: The function defining the workflow.
* `name`: An optional name for the flow; if not provided, the name will be inferred from the given function.
* `version`: An optional version string for the flow; if not provided, we will attempt to create a version string as a hash of the file containing the wrapped function; if the file cannot be located, the version will be null.
* `flow_run_name`: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
* `task_runner`: An optional task runner to use for task execution within the flow; if not provided, a `ThreadPoolTaskRunner` will be used.
* `description`: An optional string description for the flow; if not provided, the description will be pulled from the docstring for the decorated function.
* `timeout_seconds`: An optional number of seconds indicating a maximum runtime for the flow. If the flow exceeds this runtime, it will be marked as failed. Flow execution may continue until the next task is called.
* `validate_parameters`: By default, parameters passed to flows are validated by Pydantic. This will check that input values conform to the annotated types on the function. Where possible, values will be coerced into the correct type; for example, if a parameter is defined as `x: int` and "5" is passed, it will be resolved to `5`. If set to `False`, no validation will be performed on flow parameters.
* `retries`: An optional number of times to retry on flow run failure.
* `retry_delay_seconds`: An optional number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero.
* `persist_result`: An optional toggle indicating whether the result of this flow should be persisted to result storage. Defaults to `None`, which indicates that Prefect should choose whether the result should be persisted depending on the features being used.
* `result_storage`: An optional block to use to persist the result of this flow. This can be either a saved block instance or a string reference (e.g., "local-file-system/my-storage"). Block instances must have `.save()` called first since decorators execute at import time. String references are resolved at runtime and recommended for testing scenarios. This value will be used as the default for any tasks in this flow. If not provided, the local file system will be used unless called as a subflow, at which point the default will be loaded from the parent flow.
* `result_serializer`: An optional serializer to use to serialize the result of this flow for persistence. This value will be used as the default for any tasks in this flow. If not provided, the value of `PREFECT_RESULTS_DEFAULT_SERIALIZER` will be used unless called as a subflow, at which point the default will be loaded from the parent flow. * `on_failure`: An optional list of callables to run when the flow enters a failed state. * `on_completion`: An optional list of callables to run when the flow enters a completed state. * `on_cancellation`: An optional list of callables to run when the flow enters a cancelling state. * `on_crashed`: An optional list of callables to run when the flow enters a crashed state. * `on_running`: An optional list of callables to run when the flow enters a running state. **Methods:** #### `adeploy` ```python theme={null} adeploy(self, name: str, work_pool_name: Optional[str] = None, image: Optional[Union[str, 'DockerImage']] = None, build: bool = True, push: bool = True, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, interval: Optional[Union[int, float, datetime.timedelta]] = None, cron: Optional[str] = None, rrule: Optional[str] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional[list[Schedule]] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, parameters: Optional[dict[str, Any]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, print_next_steps: bool = True, ignore_warnings: bool = False, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> UUID ``` Deploys a flow to run on dynamic infrastructure via a work pool. 
This is the async version of `deploy`.

By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.

If you want to use an existing image, you can pass `build=False` to skip building and pushing an image.

**Args:**

* `name`: The name to give the created deployment.
* `work_pool_name`: The name of the work pool to use for this deployment. Defaults to the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.
* `image`: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments.
* `build`: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
* `push`: Whether or not to push the built image to a registry.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `interval`: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
* `cron`: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
* `rrule`: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
* `triggers`: A list of triggers that will kick off runs of this deployment.
* `paused`: Whether or not to set this deployment as paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`.
* `concurrency_limit`: The maximum number of runs that can be executed concurrently.
* `parameters`: A dictionary of default parameter values to pass to runs of this deployment.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
* `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
* `print_next_steps`: Whether or not to print a message with next steps after deploying the deployments.
* `ignore_warnings`: Whether or not to ignore warnings about the work pool type.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.

**Returns:**

* The ID of the created/updated deployment.
**Examples:**

Deploy a local flow to a work pool:

```python theme={null}
import asyncio
from prefect import flow


@flow
def my_flow(name):
    print(f"hello {name}")


if __name__ == "__main__":
    asyncio.run(my_flow.adeploy(
        "example-deployment",
        work_pool_name="my-work-pool",
        image="my-repository/my-image:dev",
    ))
```

#### `afrom_source`

```python theme={null}
afrom_source(cls, source: Union[str, Path, 'RunnerStorage', ReadableDeploymentStorage], entrypoint: str) -> 'Flow[..., Any]'
```

Loads a flow from a remote source asynchronously.

**Args:**

* `source`: Either a URL to a git repository or a storage object.
* `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py:flow_func_name`, or a module path to a flow function in the format `module.path.flow_func_name`.

**Returns:**

* A new `Flow` instance.

**Examples:**

Load a flow from a public git repository:

```python theme={null}
from prefect import flow
from prefect.runner.storage import GitRepository
from prefect.blocks.system import Secret

my_flow = flow.from_source(
    source="https://github.com/org/repo.git",
    entrypoint="flows.py:my_flow",
)

my_flow()
```

Load a flow from a private git repository using an access token stored in a `Secret` block:

```python theme={null}
from prefect import flow
from prefect.runner.storage import GitRepository
from prefect.blocks.system import Secret

my_flow = flow.from_source(
    source=GitRepository(
        url="https://github.com/org/repo.git",
        credentials={"access_token": Secret.load("github-access-token")}
    ),
    entrypoint="flows.py:my_flow",
)

my_flow()
```

Load a flow from a local directory:

```python theme={null}
# from_local_source.py

from pathlib import Path
from prefect import flow


@flow(log_prints=True)
def my_flow(name: str = "world"):
    print(f"Hello {name}! I'm a flow from a Python script!")


if __name__ == "__main__":
    my_flow.from_source(
        source=str(Path(__file__).parent),
        entrypoint="from_local_source.py:my_flow",
    ).deploy(
        name="my-deployment",
        parameters=dict(name="Marvin"),
        work_pool_name="local",
    )
```

#### `ato_deployment`

```python theme={null}
ato_deployment(self, name: str, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment'
```

Asynchronously creates a runner deployment object for this flow.

**Args:**

* `name`: The name to give the created deployment.
* `interval`: An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds.
* `cron`: A cron schedule of when to execute runs of this deployment.
* `rrule`: An rrule schedule of when to execute runs of this deployment.
* `paused`: Whether or not to set this deployment as paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as `timezone`.
* `concurrency_limit`: The maximum number of runs of this deployment that can run at the same time.
* `parameters`: A dictionary of default parameter values to pass to runs of this deployment.
* `triggers`: A list of triggers that will kick off runs of this deployment.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
**Examples:**

Prepare two deployments and serve them:

```python theme={null}
from prefect import flow, serve


@flow
def my_flow(name):
    print(f"hello {name}")


@flow
def my_other_flow(name):
    print(f"goodbye {name}")


if __name__ == "__main__":
    hello_deploy = my_flow.to_deployment("hello", tags=["dev"])
    bye_deploy = my_other_flow.to_deployment("goodbye", tags=["dev"])
    serve(hello_deploy, bye_deploy)
```

#### `avisualize`

```python theme={null}
avisualize(self, *args: 'P.args', **kwargs: 'P.kwargs') -> None
```

Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.

**Raises:**

* `ImportError`: If `graphviz` isn't installed.
* `GraphvizExecutableNotFoundError`: If the `dot` executable isn't found.
* `FlowVisualizationError`: If the flow can't be visualized for any other reason.

#### `deploy`

```python theme={null}
deploy(self, name: str, work_pool_name: Optional[str] = None, image: Optional[Union[str, 'DockerImage']] = None, build: bool = True, push: bool = True, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, interval: Optional[Union[int, float, datetime.timedelta]] = None, cron: Optional[str] = None, rrule: Optional[str] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional[list[Schedule]] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, parameters: Optional[dict[str, Any]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, print_next_steps: bool = True, ignore_warnings: bool = False, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> UUID
```

Deploys a flow to run on dynamic
infrastructure via a work pool.

By default, calling this method will build a Docker image for the flow, push it to a registry, and create a deployment via the Prefect API that will run the flow on the given schedule.

If you want to use an existing image, you can pass `build=False` to skip building and pushing an image.

**Args:**

* `name`: The name to give the created deployment.
* `work_pool_name`: The name of the work pool to use for this deployment. Defaults to the value of `PREFECT_DEFAULT_WORK_POOL_NAME`.
* `image`: The name of the Docker image to build, including the registry and repository. Pass a DockerImage instance to customize the Dockerfile used and build arguments.
* `build`: Whether or not to build a new image for the flow. If False, the provided image will be used as-is and pulled at runtime.
* `push`: Whether or not to push the built image to a registry.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `interval`: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules.
* `cron`: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules.
* `rrule`: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
* `triggers`: A list of triggers that will kick off runs of this deployment.
* `paused`: Whether or not to set this deployment as paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`.
* `concurrency_limit`: The maximum number of runs that can be executed concurrently.
* `parameters`: A dictionary of default parameter values to pass to runs of this deployment.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
* `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
* `print_next_steps`: Whether or not to print a message with next steps after deploying the deployments.
* `ignore_warnings`: Whether or not to ignore warnings about the work pool type.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.

**Returns:**

* The ID of the created/updated deployment.
**Examples:**

Deploy a local flow to a work pool:

```python theme={null}
from prefect import flow


@flow
def my_flow(name):
    print(f"hello {name}")


if __name__ == "__main__":
    my_flow.deploy(
        "example-deployment",
        work_pool_name="my-work-pool",
        image="my-repository/my-image:dev",
    )
```

Deploy a remotely stored flow to a work pool:

```python theme={null}
from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/org/repo.git",
        entrypoint="flows.py:my_flow",
    ).deploy(
        "example-deployment",
        work_pool_name="my-work-pool",
        image="my-repository/my-image:dev",
    )
```

#### `from_source`

```python theme={null}
from_source(cls, source: Union[str, Path, 'RunnerStorage', ReadableDeploymentStorage], entrypoint: str) -> 'Flow[..., Any]'
```

Loads a flow from a remote source.

**Args:**

* `source`: Either a URL to a git repository or a storage object.
* `entrypoint`: The path to a file containing a flow and the name of the flow function in the format `./path/to/file.py:flow_func_name`, or a module path to a flow function in the format `module.path.flow_func_name`.

**Returns:**

* A new `Flow` instance.
**Examples:** Load a flow from a public git repository: ```python theme={null} from prefect import flow from prefect.runner.storage import GitRepository from prefect.blocks.system import Secret my_flow = flow.from_source( source="https://github.com/org/repo.git", entrypoint="flows.py:my_flow", ) my_flow() ``` Load a flow from a private git repository using an access token stored in a `Secret` block: ```python theme={null} from prefect import flow from prefect.runner.storage import GitRepository from prefect.blocks.system import Secret my_flow = flow.from_source( source=GitRepository( url="https://github.com/org/repo.git", credentials={"access_token": Secret.load("github-access-token")} ), entrypoint="flows.py:my_flow", ) my_flow() ``` Load a flow from a local directory: ```python theme={null} # from_local_source.py from pathlib import Path from prefect import flow @flow(log_prints=True) def my_flow(name: str = "world"): print(f"Hello {name}! I'm a flow from a Python script!") if __name__ == "__main__": my_flow.from_source( source=str(Path(__file__).parent), entrypoint="from_local_source.py:my_flow", ).deploy( name="my-deployment", parameters=dict(name="Marvin"), work_pool_name="local", ) ``` #### `isclassmethod` ```python theme={null} isclassmethod(self) -> bool ``` #### `ismethod` ```python theme={null} ismethod(self) -> bool ``` #### `isstaticmethod` ```python theme={null} isstaticmethod(self) -> bool ``` #### `on_cancellation` ```python theme={null} on_cancellation(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_completion` ```python theme={null} on_completion(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_crashed` ```python theme={null} on_crashed(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_failure` ```python theme={null} on_failure(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` #### `on_running` ```python theme={null} on_running(self, fn: FlowStateHook[P, R]) -> FlowStateHook[P, R] ``` 
#### `serialize_parameters` ```python theme={null} serialize_parameters(self, parameters: dict[str, Any | PrefectFuture[Any] | State]) -> dict[str, Any] ``` Convert parameters to a serializable form. Uses FastAPI's `jsonable_encoder` to convert to JSON compatible objects without converting everything directly to a string. This maintains basic types like integers during API roundtrips. #### `serve` ```python theme={null} serve(self, name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, global_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, parameters: Optional[dict[str, Any]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, pause_on_shutdown: bool = True, print_starting_message: bool = True, limit: Optional[int] = None, webserver: bool = False, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH) -> None ``` Creates a deployment for this flow and starts a runner to monitor for scheduled work. **Args:** * `name`: The name to give the created deployment. Defaults to the name of the flow. * `interval`: An interval on which to execute the deployment. Accepts a number or a timedelta object to create a single schedule. If a number is given, it will be interpreted as seconds. Also accepts an iterable of numbers or timedelta to create multiple schedules. * `cron`: A cron schedule string of when to execute runs of this deployment. Also accepts an iterable of cron schedule strings to create multiple schedules. 
* `rrule`: An rrule schedule string of when to execute runs of this deployment. Also accepts an iterable of rrule schedule strings to create multiple schedules.
* `triggers`: A list of triggers that will kick off runs of this deployment.
* `paused`: Whether or not to set this deployment as paused.
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options like `timezone`.
* `global_limit`: The maximum number of concurrent runs allowed across all served flow instances associated with the same deployment.
* `parameters`: A dictionary of default parameter values to pass to runs of this deployment.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
* `pause_on_shutdown`: If True, the provided schedules will be paused when the serve function is stopped. If False, the schedules will continue running.
* `print_starting_message`: Whether or not to print the starting message when the flow is served.
* `limit`: The maximum number of runs that can be executed concurrently by the created runner; only applies to this served flow. To apply a limit across multiple served flows, use `global_limit`.
* `webserver`: Whether or not to start a monitoring webserver for this flow.
* `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
**Examples:** Serve a flow: ```python theme={null} from prefect import flow @flow def my_flow(name): print(f"hello {name}") if __name__ == "__main__": my_flow.serve("example-deployment") ``` Serve a flow and run it every hour: ```python theme={null} from prefect import flow @flow def my_flow(name): print(f"hello {name}") if __name__ == "__main__": my_flow.serve("example-deployment", interval=3600) ``` #### `to_deployment` ```python theme={null} to_deployment(self, name: str, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[list[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[list[str]] = None, version: Optional[str] = None, version_type: Optional[VersionType] = None, enforce_parameter_schema: bool = True, work_pool_name: Optional[str] = None, work_queue_name: Optional[str] = None, job_variables: Optional[dict[str, Any]] = None, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH, _sla: Optional[Union[SlaTypes, list[SlaTypes]]] = None) -> 'RunnerDeployment' ``` Creates a runner deployment object for this flow. **Args:** * `name`: The name to give the created deployment. * `interval`: An interval on which to execute the new deployment. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this deployment. * `rrule`: An rrule schedule of when to execute runs of this deployment. * `paused`: Whether or not to set this deployment as paused. 
* `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`.
* `schedules`: A list of schedule objects defining when to execute runs of this deployment. Used to define multiple schedules or additional scheduling options such as `timezone`.
* `concurrency_limit`: The maximum number of runs of this deployment that can run at the same time.
* `parameters`: A dictionary of default parameter values to pass to runs of this deployment.
* `triggers`: A list of triggers that will kick off runs of this deployment.
* `description`: A description for the created deployment. Defaults to the flow's description if not provided.
* `tags`: A list of tags to associate with the created deployment for organizational purposes.
* `version`: A version for the created deployment. Defaults to the flow's version.
* `version_type`: The type of version to use for the created deployment. The version type will be inferred if not provided.
* `enforce_parameter_schema`: Whether or not the Prefect API should enforce the parameter schema for the created deployment.
* `work_pool_name`: The name of the work pool to use for this deployment.
* `work_queue_name`: The name of the work queue to use for this deployment's scheduled runs. If not provided, the default work queue for the work pool will be used.
* `job_variables`: Settings used to override the values specified in the default base job template of the chosen work pool. Refer to the base job template of the chosen work pool for available settings.
* `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment.
* `_sla`: (Experimental) SLA configuration for the deployment. May be removed or modified at any time. Currently only supported on Prefect Cloud.
**Examples:**

Prepare two deployments and serve them:

```python theme={null}
from prefect import flow, serve


@flow
def my_flow(name):
    print(f"hello {name}")


@flow
def my_other_flow(name):
    print(f"goodbye {name}")


if __name__ == "__main__":
    hello_deploy = my_flow.to_deployment("hello", tags=["dev"])
    bye_deploy = my_other_flow.to_deployment("goodbye", tags=["dev"])
    serve(hello_deploy, bye_deploy)
```

#### `validate_parameters`

```python theme={null}
validate_parameters(self, parameters: dict[str, Any]) -> dict[str, Any]
```

Validate parameters for compatibility with the flow by attempting to cast the inputs to the associated types specified by the function's type annotations.

**Returns:**

* A new dict of parameters that have been cast to the appropriate types

**Raises:**

* `ParameterTypeError`: if the provided parameters are not valid

#### `visualize`

```python theme={null}
visualize(self, *args: 'P.args', **kwargs: 'P.kwargs') -> None
```

Generates a graphviz object representing the current flow. In IPython notebooks, it's rendered inline, otherwise in a new window as a PNG.

**Raises:**

* `ImportError`: If `graphviz` isn't installed.
* `GraphvizExecutableNotFoundError`: If the `dot` executable isn't found.
* `FlowVisualizationError`: If the flow can't be visualized for any other reason.

#### `with_options`

```python theme={null}
with_options(self) -> 'Flow[P, R]'
```

Create a new flow from the current object, updating provided options.

**Args:**

* `name`: A new name for the flow.
* `version`: A new version for the flow.
* `description`: A new description for the flow.
* `flow_run_name`: An optional name to distinguish runs of this flow; this name can be provided as a string template with the flow's parameters as variables, or a function that returns a string.
* `task_runner`: A new task runner for the flow.
* `timeout_seconds`: A new number of seconds after which the flow will be marked as failed if still running.
* `validate_parameters`: A new value indicating if flow calls should validate given parameters. * `retries`: A new number of times to retry on flow run failure. * `retry_delay_seconds`: A new number of seconds to wait before retrying the flow after failure. This is only applicable if `retries` is nonzero. * `persist_result`: A new option for enabling or disabling result persistence. * `result_storage`: A new storage type to use for results. * `result_serializer`: A new serializer to use for results. * `cache_result_in_memory`: A new value indicating if the flow's result should be cached in memory. * `on_failure`: A new list of callables to run when the flow enters a failed state. * `on_completion`: A new list of callables to run when the flow enters a completed state. * `on_cancellation`: A new list of callables to run when the flow enters a cancelling state. * `on_crashed`: A new list of callables to run when the flow enters a crashed state. * `on_running`: A new list of callables to run when the flow enters a running state. **Returns:** * A new `Flow` instance. **Examples:** Create a new flow from an existing flow and update the name: ```python theme={null} from prefect import flow @flow(name="My flow") def my_flow(): return 1 new_flow = my_flow.with_options(name="My new flow") ``` Create a new flow from an existing flow, update the task runner, and call it without an intermediate variable: ```python theme={null} from prefect import flow from prefect.task_runners import ThreadPoolTaskRunner @flow def my_flow(x, y): return x + y result = my_flow.with_options(task_runner=ThreadPoolTaskRunner)(1, 3) assert result == 4 ``` ### `FlowDecorator` ### `InfrastructureBoundFlow` A flow that is bound to running on a specific infrastructure. **Attributes:** * `work_pool`: The name of the work pool to run the flow on. The base job configuration of the work pool will determine the configuration of the infrastructure the flow will run on.
* `job_variables`: Infrastructure configuration that will override the base job configuration of the work pool. * `launcher`: Optional upload and execution launcher overrides. * `worker_cls`: The class of the worker to use to spin up infrastructure and submit the flow to it. **Methods:** #### `retry` ```python theme={null} retry(self, flow_run: 'FlowRun') -> R | State[R] ``` Retry an existing flow run on remote infrastructure. This method allows retrying a flow run that was previously executed, reusing the same flow run ID and incrementing the run\_count. **Args:** * `flow_run`: The existing flow run to retry * `return_state`: If True, return the final state instead of the result **Returns:** * The flow result or final state #### `submit` ```python theme={null} submit(self, *args: P.args, **kwargs: P.kwargs) -> PrefectFlowRunFuture[R] ``` Submit the flow to run on remote infrastructure. This method will spin up a local worker to submit the flow to remote infrastructure. To submit the flow to remote infrastructure without spinning up a local worker, use `submit_to_work_pool` instead. **Args:** * `*args`: Positional arguments to pass to the flow. * `**kwargs`: Keyword arguments to pass to the flow. **Returns:** * A `PrefectFlowRunFuture` that can be used to retrieve the result of the flow run. **Examples:** Submit a flow to run on Kubernetes: ```python theme={null} from prefect import flow from prefect_kubernetes.experimental import kubernetes @kubernetes(work_pool="my-kubernetes-work-pool") @flow def my_flow(x: int, y: int): return x + y future = my_flow.submit(x=1, y=2) result = future.result() print(result) ``` #### `submit_to_work_pool` ```python theme={null} submit_to_work_pool(self, *args: P.args, **kwargs: P.kwargs) -> PrefectFlowRunFuture[R] ``` Submits the flow to run on remote infrastructure. This method will create a flow run for an existing worker to submit to remote infrastructure. If you don't have a worker available, use `submit` instead. 
**Args:** * `*args`: Positional arguments to pass to the flow. * `**kwargs`: Keyword arguments to pass to the flow. **Returns:** * A `PrefectFlowRunFuture` that can be used to retrieve the result of the flow run. **Examples:** Dispatch a flow to run on Kubernetes: ```python theme={null} from prefect import flow from prefect_kubernetes.experimental import kubernetes @kubernetes(work_pool="my-kubernetes-work-pool") @flow def my_flow(x: int, y: int): return x + y future = my_flow.submit_to_work_pool(x=1, y=2) result = future.result() print(result) ``` #### `with_options` ```python theme={null} with_options(self) -> 'InfrastructureBoundFlow[P, R]' ``` # futures Source: https://docs.prefect.io/v3/api-ref/python/prefect-futures # `prefect.futures` ## Functions ### `as_completed` ```python theme={null} as_completed(futures: list[PrefectFuture[R]], timeout: float | None = None) -> Generator[PrefectFuture[R], None] ``` ### `wait` ```python theme={null} wait(futures: list[PrefectFuture[R]], timeout: float | None = None) -> DoneAndNotDoneFutures[R] ``` Wait for the futures in the given sequence to complete. **Args:** * `futures`: The sequence of Futures to wait upon. * `timeout`: The maximum number of seconds to wait. If None, then there is no limit on the wait time. **Returns:** * A named 2-tuple of sets. The first set, named 'done', contains the futures that completed (finished or were cancelled) before the wait completed. The second set, named 'not\_done', contains uncompleted futures. Duplicate futures given to *futures* are removed and will be returned only once.
**Examples:** ```python theme={null} from time import sleep from prefect import flow, task from prefect.futures import wait @task def sleep_task(seconds): sleep(seconds) return 42 @flow def my_flow(): futures = sleep_task.map(range(10)) done, not_done = wait(futures, timeout=5) print(f"Done: {len(done)}") print(f"Not Done: {len(not_done)}") ``` ### `resolve_futures_to_states` ```python theme={null} resolve_futures_to_states(expr: PrefectFuture[R] | Any) -> PrefectFuture[R] | Any ``` Given a Python built-in collection, recursively find `PrefectFutures` and build a new collection with the same structure with futures resolved to their final states. Resolving futures to their final states may wait for execution to complete. Unsupported object types will be returned without modification. ### `resolve_futures_to_results` ```python theme={null} resolve_futures_to_results(expr: PrefectFuture[R] | Any) -> Any ``` Given a Python built-in collection, recursively find `PrefectFutures` and build a new collection with the same structure with futures resolved to their final results. Resolving futures to their final result may wait for execution to complete. Unsupported object types will be returned without modification. ## Classes ### `PrefectFuture` Abstract base class for Prefect futures. A Prefect future is a handle to the asynchronous execution of a run. It provides methods to wait for the run to complete and to retrieve the result of the run. **Methods:** #### `add_done_callback` ```python theme={null} add_done_callback(self, fn: Callable[['PrefectFuture[R]'], None]) -> None ``` Add a callback to be run when the future completes or is cancelled. **Args:** * `fn`: A callable that will be called with this future as its only argument when the future completes or is cancelled.
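Since `PrefectConcurrentFuture` wraps a `concurrent.futures.Future`, the done-callback and `wait` semantics mirror the standard library. As a rough, Prefect-free sketch of the same pattern (this is stdlib code for illustration only, not Prefect's API):

```python
from concurrent.futures import Future, ThreadPoolExecutor, wait


def on_done(future: Future) -> None:
    # Called exactly once, with the future as its only argument,
    # after the future finishes or is cancelled.
    print(f"finished with result: {future.result()}")


with ThreadPoolExecutor() as pool:
    future = pool.submit(lambda: 21 * 2)
    future.add_done_callback(on_done)

    # wait() returns a named 2-tuple of (done, not_done) sets.
    done, not_done = wait([future], timeout=5)
```

Prefect's `wait` and `PrefectFuture.add_done_callback` follow this shape, with Prefect futures in place of `concurrent.futures.Future` objects.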
#### `result` ```python theme={null} result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `state` ```python theme={null} state(self) -> State ``` The current state of the task run associated with this future #### `task_run_id` ```python theme={null} task_run_id(self) -> uuid.UUID ``` The ID of the task run associated with this future #### `wait` ```python theme={null} wait(self, timeout: float | None = None) -> None ``` ### `PrefectTaskRunFuture` A Prefect future that represents the eventual execution of a task run. **Methods:** #### `state` ```python theme={null} state(self) -> State ``` The current state of the task run associated with this future #### `task_run_id` ```python theme={null} task_run_id(self) -> uuid.UUID ``` The ID of the task run associated with this future ### `PrefectWrappedFuture` A Prefect future that wraps another future object. **Methods:** #### `add_done_callback` ```python theme={null} add_done_callback(self, fn: Callable[[PrefectFuture[R]], None]) -> None ``` Add a callback to be executed when the future completes. #### `wrapped_future` ```python theme={null} wrapped_future(self) -> F ``` The underlying future object wrapped by this Prefect future ### `PrefectConcurrentFuture` A Prefect future that wraps a concurrent.futures.Future. This future is used when the task run is submitted to a ThreadPoolExecutor. **Methods:** #### `result` ```python theme={null} result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `wait` ```python theme={null} wait(self, timeout: float | None = None) -> None ``` ### `PrefectDistributedFuture` Represents the result of a computation happening anywhere. This class is typically used to interact with the result of a task run scheduled to run in a Prefect task worker but can be used to interact with any task run scheduled in Prefect's API. 
**Methods:** #### `add_done_callback` ```python theme={null} add_done_callback(self, fn: Callable[[PrefectFuture[R]], None]) -> None ``` #### `result` ```python theme={null} result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `result_async` ```python theme={null} result_async(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `wait` ```python theme={null} wait(self, timeout: float | None = None) -> None ``` #### `wait_async` ```python theme={null} wait_async(self, timeout: float | None = None) -> None ``` ### `PrefectFlowRunFuture` A Prefect future that represents the eventual execution of a flow run. **Methods:** #### `add_done_callback` ```python theme={null} add_done_callback(self, fn: Callable[[PrefectFuture[R]], None]) -> None ``` #### `aresult` ```python theme={null} aresult(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `flow_run_id` ```python theme={null} flow_run_id(self) -> uuid.UUID ``` The ID of the flow run associated with this future #### `result` ```python theme={null} result(self, timeout: float | None = None, raise_on_failure: bool = True) -> R ``` #### `state` ```python theme={null} state(self) -> State ``` The current state of the flow run associated with this future #### `wait` ```python theme={null} wait(self, timeout: float | None = None) -> None ``` #### `wait_async` ```python theme={null} wait_async(self, timeout: float | None = None) -> None ``` ### `PrefectFutureList` A list of Prefect futures. This class provides methods to wait for all futures in the list to complete and to retrieve the results of all task runs. **Methods:** #### `result` ```python theme={null} result(self: Self, timeout: float | None = None, raise_on_failure: bool = True) -> list[R] ``` Get the results of all task runs associated with the futures in the list. 
Uses `as_completed` internally so that failures are raised as soon as they occur rather than waiting for earlier, still-running futures to finish first. **Args:** * `timeout`: The maximum number of seconds to wait for all futures to complete. * `raise_on_failure`: If `True`, an exception will be raised if any task run fails. **Returns:** * A list of results of the task runs, in the same order as the futures in the list. **Raises:** * `TimeoutError`: If the timeout is reached before all futures complete. #### `wait` ```python theme={null} wait(self, timeout: float | None = None) -> None ``` Wait for all futures in the list to complete. **Args:** * `timeout`: The maximum number of seconds to wait for all futures to complete. This method will not raise if the timeout is reached. ### `DoneAndNotDoneFutures` A named 2-tuple of sets. Multiple inheritance with `NamedTuple` is supported on Python 3.11+; `typing_extensions.NamedTuple` is used to support earlier versions. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-__init__ # `prefect.infrastructure` 2024-06-27: This surfaces an actionable error message for moved or removed objects in the Prefect 3.0 upgrade. # base Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-base # `prefect.infrastructure.base` 2024-06-27: This surfaces an actionable error message for moved or removed objects in the Prefect 3.0 upgrade. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-__init__ # `prefect.infrastructure.provisioners` ## Functions ### `get_infrastructure_provisioner_for_work_pool_type` ```python theme={null} get_infrastructure_provisioner_for_work_pool_type(work_pool_type: str) -> Type[Provisioner] ``` Retrieve an instance of the infrastructure provisioner for the given work pool type.
**Args:** * `work_pool_type`: the work pool type **Returns:** * an instance of the infrastructure provisioner for the given work pool type **Raises:** * `ValueError`: if the work pool type is not supported ## Classes ### `Provisioner` **Methods:** #### `console` ```python theme={null} console(self) -> rich.console.Console ``` #### `console` ```python theme={null} console(self, value: rich.console.Console) -> None ``` #### `provision` ```python theme={null} provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` # cloud_run Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-cloud_run # `prefect.infrastructure.provisioners.cloud_run` ## Classes ### `CloudRunPushProvisioner` **Methods:** #### `console` ```python theme={null} console(self) -> Console ``` #### `console` ```python theme={null} console(self, value: Console) -> None ``` #### `provision` ```python theme={null} provision(self, work_pool_name: str, base_job_template: dict, client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` # coiled Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-coiled # `prefect.infrastructure.provisioners.coiled` ## Classes ### `CoiledPushProvisioner` An infrastructure provisioner for Coiled push work pools. **Methods:** #### `console` ```python theme={null} console(self) -> Console ``` #### `console` ```python theme={null} console(self, value: Console) -> None ``` #### `provision` ```python theme={null} provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` Provisions resources necessary for a Coiled push work pool.
**Args:** * `work_pool_name`: The name of the work pool to provision resources for * `base_job_template`: The base job template to update **Returns:** * A copy of the provided base job template with the provisioned resources # container_instance Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-container_instance # `prefect.infrastructure.provisioners.container_instance` This module defines the ContainerInstancePushProvisioner class, which is responsible for provisioning infrastructure using Azure Container Instances for Prefect work pools. The ContainerInstancePushProvisioner class provides methods for provisioning infrastructure and interacting with Azure Container Instances. Classes: AzureCLI: A class to handle Azure CLI commands. ContainerInstancePushProvisioner: A class for provisioning infrastructure using Azure Container Instances. ## Classes ### `AzureCLI` A class for executing Azure CLI commands and handling their output. **Args:** * `console`: A Rich console object for displaying messages. **Methods:** #### `run_command` ```python theme={null} run_command(self, command: str, success_message: Optional[str] = None, failure_message: Optional[str] = None, ignore_if_exists: bool = False, return_json: bool = False) -> str | dict[str, Any] | None ``` Runs an Azure CLI command and processes the output. **Args:** * `command`: The Azure CLI command to execute. * `success_message`: Message to print on success. * `failure_message`: Message to print on failure. * `ignore_if_exists`: Whether to ignore errors if a resource already exists. * `return_json`: Whether to return the output as JSON. **Returns:** * The command output as a string (or a dict when `return_json` is True), or None if an error occurs. **Raises:** * `subprocess.CalledProcessError`: If the command execution fails.
* `json.JSONDecodeError`: If output cannot be decoded as JSON when return\_json is True. ### `ContainerInstancePushProvisioner` A class responsible for provisioning Azure resources and setting up a push work pool. **Attributes:** * `_console`: A Rich console object for displaying messages and progress. * `_subscription_id`: Azure subscription ID. * `_subscription_name`: Azure subscription name. * `_resource_group`: Azure resource group name. * `_location`: Azure resource location. * `azure_cli`: An instance of AzureCLI for running Azure commands. **Methods:** #### `console` ```python theme={null} console(self) -> Console ``` #### `console` ```python theme={null} console(self, value: Console) -> None ``` #### `provision` ```python theme={null} provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` Orchestrates the provisioning of Azure resources and setup for the push work pool. **Args:** * `work_pool_name`: The name of the work pool. * `base_job_template`: The base template for job creation. * `client`: An instance of PrefectClient. If None, it will be injected. **Returns:** * Dict\[str, Any]: The updated job template with necessary references and configurations. **Raises:** * `RuntimeError`: If client injection fails or the Azure CLI command execution fails. #### `set_location` ```python theme={null} set_location(self) -> None ``` Set the Azure resource deployment location to the default or 'eastus' on failure. **Raises:** * `RuntimeError`: If unable to execute the Azure CLI command. # ecs Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-ecs # `prefect.infrastructure.provisioners.ecs` ## Functions ### `console_context` ```python theme={null} console_context(value: Console) -> Generator[None, None, None] ``` ## Classes ### `IamPolicyResource` Represents an IAM policy resource for managing ECS tasks. 
**Args:** * `policy_name`: The name of the IAM policy. Defaults to "prefect-ecs-policy". **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, policy_document: dict[str, Any], advance: Callable[[], None]) -> str ``` Provisions an IAM policy. **Args:** * `advance`: A callback function to indicate progress. **Returns:** * The ARN (Amazon Resource Name) of the created IAM policy. #### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `IamUserResource` Represents an IAM user resource for managing ECS tasks. **Args:** * `user_name`: The desired name of the IAM user. **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. 
#### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, advance: Callable[[], None]) -> None ``` Provisions an IAM user. **Args:** * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `CredentialsBlockResource` **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, base_job_template: Dict[str, Any], advance: Callable[[], None], client: Optional['PrefectClient'] = None) ``` Provisions an AWS credentials block. Will generate new credentials if the block does not already exist. Updates the `aws_credentials` variable in the job template to reference the block. **Args:** * `base_job_template`: The base job template. * `advance`: A callback function to indicate progress. * `client`: A Prefect client to use for interacting with the Prefect API. #### `requires_provisioning` ```python theme={null} requires_provisioning(self, client: Optional['PrefectClient'] = None) -> bool ``` ### `AuthenticationResource` **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. 
**Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions the authentication resources. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. #### `resources` ```python theme={null} resources(self) -> list['ExecutionRoleResource | IamUserResource | IamPolicyResource | CredentialsBlockResource'] ``` ### `ClusterResource` **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions an ECS cluster. Will update the `cluster` variable in the job template to reference the cluster. 
**Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `VpcResource` **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions a VPC. Chooses a CIDR block to avoid conflicting with any existing VPCs. Will update the `vpc_id` variable in the job template to reference the VPC. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `ContainerRepositoryResource` **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. 
**Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str | Syntax] ``` #### `provision` ```python theme={null} provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> None ``` Provisions an ECR repository. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. #### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `ExecutionRoleResource` **Methods:** #### `get_planned_actions` ```python theme={null} get_planned_actions(self) -> List[str] ``` Returns a description of the planned actions for provisioning this resource. **Returns:** * Optional\[str]: A description of the planned actions for provisioning the resource, or None if provisioning is not required. #### `get_task_count` ```python theme={null} get_task_count(self) -> int ``` Returns the number of tasks that will be executed to provision this resource. **Returns:** * The number of tasks to be provisioned. #### `next_steps` ```python theme={null} next_steps(self) -> list[str] ``` #### `provision` ```python theme={null} provision(self, base_job_template: dict[str, Any], advance: Callable[[], None]) -> str ``` Provisions an IAM role. **Args:** * `base_job_template`: The base job template of the work pool to provision infrastructure for. * `advance`: A callback function to indicate progress. 
#### `requires_provisioning` ```python theme={null} requires_provisioning(self) -> bool ``` Check if this resource requires provisioning. **Returns:** * True if provisioning is required, False otherwise. ### `ElasticContainerServicePushProvisioner` An infrastructure provisioner for ECS push work pools. **Methods:** #### `console` ```python theme={null} console(self) -> Console ``` #### `console` ```python theme={null} console(self, value: Console) -> None ``` #### `is_boto3_installed` ```python theme={null} is_boto3_installed() -> bool ``` Check if boto3 is installed. #### `provision` ```python theme={null} provision(self, work_pool_name: str, base_job_template: dict[str, Any]) -> dict[str, Any] ``` Provisions the infrastructure for an ECS push work pool. **Args:** * `work_pool_name`: The name of the work pool to provision infrastructure for. * `base_job_template`: The base job template of the work pool to provision infrastructure for. **Returns:** * An updated copy of the base job template. # modal Source: https://docs.prefect.io/v3/api-ref/python/prefect-infrastructure-provisioners-modal # `prefect.infrastructure.provisioners.modal` ## Classes ### `ModalPushProvisioner` An infrastructure provisioner for Modal push work pools. **Methods:** #### `console` ```python theme={null} console(self) -> Console ``` #### `console` ```python theme={null} console(self, value: Console) -> None ``` #### `provision` ```python theme={null} provision(self, work_pool_name: str, base_job_template: Dict[str, Any], client: Optional['PrefectClient'] = None) -> Dict[str, Any] ``` Provisions resources necessary for a Modal push work pool.
**Args:** * `work_pool_name`: The name of the work pool to provision resources for * `base_job_template`: The base job template to update **Returns:** * A copy of the provided base job template with the provisioned resources # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-input-__init__ # `prefect.input` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs.prefect.io/v3/api-ref/python/prefect-input-actions # `prefect.input.actions` ## Functions ### `ensure_flow_run_id` ```python theme={null} ensure_flow_run_id(flow_run_id: Optional[UUID] = None) -> UUID ``` ### `acreate_flow_run_input_from_model` ```python theme={null} acreate_flow_run_input_from_model(key: str, model_instance: pydantic.BaseModel, flow_run_id: Optional[UUID] = None, sender: Optional[str] = None) -> None ``` Create a new flow run input from a Pydantic model asynchronously. **Args:** * `key`: the flow run input key * `model_instance`: a Pydantic model instance to store * `flow_run_id`: the flow run ID (defaults to current context) * `sender`: optional sender identifier ### `create_flow_run_input_from_model` ```python theme={null} create_flow_run_input_from_model(key: str, model_instance: pydantic.BaseModel, flow_run_id: Optional[UUID] = None, sender: Optional[str] = None) -> None ``` Create a new flow run input from a Pydantic model. **Args:** * `key`: the flow run input key * `model_instance`: a Pydantic model instance to store * `flow_run_id`: the flow run ID (defaults to current context) * `sender`: optional sender identifier ### `acreate_flow_run_input` ```python theme={null} acreate_flow_run_input(client: 'PrefectClient', key: str, value: Any, flow_run_id: Optional[UUID] = None, sender: Optional[str] = None) -> None ``` Create a new flow run input asynchronously. The given `value` will be serialized to JSON and stored as a flow run input value. 
**Args:** * `key`: the flow run input key * `value`: the flow run input value * `flow_run_id`: the flow run ID (defaults to current context) * `sender`: optional sender identifier ### `create_flow_run_input` ```python theme={null} create_flow_run_input(key: str, value: Any, flow_run_id: Optional[UUID] = None, sender: Optional[str] = None) -> None ``` Create a new flow run input. The given `value` will be serialized to JSON and stored as a flow run input value. **Args:** * `key`: the flow run input key * `value`: the flow run input value * `flow_run_id`: the flow run ID (defaults to current context) * `sender`: optional sender identifier ### `afilter_flow_run_input` ```python theme={null} afilter_flow_run_input(client: 'PrefectClient', key_prefix: str, limit: int = 1, exclude_keys: Optional[Set[str]] = None, flow_run_id: Optional[UUID] = None) -> 'list[FlowRunInput]' ``` Filter flow run inputs by key prefix asynchronously. **Args:** * `key_prefix`: prefix to filter keys by * `limit`: maximum number of results to return * `exclude_keys`: keys to exclude from results * `flow_run_id`: the flow run ID (defaults to current context) **Returns:** * List of matching FlowRunInput objects ### `filter_flow_run_input` ```python theme={null} filter_flow_run_input(key_prefix: str, limit: int = 1, exclude_keys: Optional[Set[str]] = None, flow_run_id: Optional[UUID] = None) -> 'list[FlowRunInput]' ``` Filter flow run inputs by key prefix. **Args:** * `key_prefix`: prefix to filter keys by * `limit`: maximum number of results to return * `exclude_keys`: keys to exclude from results * `flow_run_id`: the flow run ID (defaults to current context) **Returns:** * List of matching FlowRunInput objects ### `aread_flow_run_input` ```python theme={null} aread_flow_run_input(client: 'PrefectClient', key: str, flow_run_id: Optional[UUID] = None) -> Any ``` Read a flow run input asynchronously. 
**Args:**

* `key`: the flow run input key
* `flow_run_id`: the flow run ID (defaults to current context)

**Returns:**

* The deserialized input value, or None if not found

### `read_flow_run_input`

```python theme={null}
read_flow_run_input(key: str, flow_run_id: Optional[UUID] = None) -> Any
```

Read a flow run input.

**Args:**

* `key`: the flow run input key
* `flow_run_id`: the flow run ID (defaults to current context)

**Returns:**

* The deserialized input value, or None if not found

### `adelete_flow_run_input`

```python theme={null}
adelete_flow_run_input(client: 'PrefectClient', key: str, flow_run_id: Optional[UUID] = None) -> None
```

Delete a flow run input asynchronously.

**Args:**

* `key`: the flow run input key
* `flow_run_id`: the flow run ID (defaults to current context)

### `delete_flow_run_input`

```python theme={null}
delete_flow_run_input(key: str, flow_run_id: Optional[UUID] = None) -> None
```

Delete a flow run input.

**Args:**

* `key`: the flow run input key
* `flow_run_id`: the flow run ID (defaults to current context)

# run_input

Source: https://docs.prefect.io/v3/api-ref/python/prefect-input-run_input

# `prefect.input.run_input`

This module contains functions that allow sending type-checked `RunInput` data to flows at runtime. Flows can send back responses, establishing two-way channels with senders. These functions are particularly useful for workflows that need real-time interaction and efficient data handling, such as distributed or microservices-oriented systems that require ongoing data input and output or must react to input quickly.

The following is an example of two flows. One sends a random number to the other and waits for a response. The other receives the number, squares it, and sends the result back.
The sender flow then prints the result.

Sender flow:

```python theme={null}
import random
from uuid import UUID

from prefect import flow
from prefect.logging import get_run_logger
from prefect.input import RunInput


class NumberData(RunInput):
    number: int


@flow
async def sender_flow(receiver_flow_run_id: UUID):
    logger = get_run_logger()

    the_number = random.randint(1, 100)

    await NumberData(number=the_number).send_to(receiver_flow_run_id)

    receiver = NumberData.receive(flow_run_id=receiver_flow_run_id)
    squared = await receiver.next()

    logger.info(f"{the_number} squared is {squared.number}")
```

Receiver flow:

```python theme={null}
from prefect import flow
from prefect.input import RunInput


class NumberData(RunInput):
    number: int


@flow
async def receiver_flow():
    async for data in NumberData.receive():
        squared = data.number ** 2
        await data.respond(NumberData(number=squared))
```

## Functions

### `keyset_from_paused_state`

```python theme={null}
keyset_from_paused_state(state: 'State') -> Keyset
```

Get the keyset for the given Paused state.

**Args:**

* `state`: the state to get the keyset for

### `keyset_from_base_key`

```python theme={null}
keyset_from_base_key(base_key: str) -> Keyset
```

Get the keyset for the given base key.

**Args:**

* `base_key`: the base key to get the keyset for

**Returns:**

* `Dict[str, str]`: the keyset

### `run_input_subclass_from_type`

```python theme={null}
run_input_subclass_from_type(_type: Union[Type[R], Type[T], pydantic.BaseModel]) -> Union[Type[AutomaticRunInput[T]], Type[R]]
```

Create a new `RunInput` subclass from the given type.

### `asend_input`

```python theme={null}
asend_input(run_input: Any, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send input to a flow run asynchronously.
### `send_input`

```python theme={null}
send_input(run_input: Any, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send input to a flow run.

### `receive_input`

```python theme={null}
receive_input(input_type: Union[Type[R], Type[T], pydantic.BaseModel], timeout: Optional[float] = 3600, poll_interval: float = 10, raise_timeout_error: bool = False, exclude_keys: Optional[Set[str]] = None, key_prefix: Optional[str] = None, flow_run_id: Optional[UUID] = None, with_metadata: bool = False) -> Union[GetAutomaticInputHandler[T], GetInputHandler[R]]
```

## Classes

### `RunInputMetadata`

### `BaseRunInput`

**Methods:**

#### `aload`

```python theme={null}
aload(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self
```

Load the run input response from the given key asynchronously.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `arespond`

```python theme={null}
arespond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Respond to the sender of this input asynchronously.

#### `asave`

```python theme={null}
asave(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> None
```

Save the run input response to the given key asynchronously.

**Args:**

* `keyset`: the keyset to save the input for
* `flow_run_id`: the flow run ID to save the input for

#### `asend_to`

```python theme={null}
asend_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send this input to a flow run asynchronously.

#### `keyset_from_type`

```python theme={null}
keyset_from_type(cls) -> Keyset
```

#### `load`

```python theme={null}
load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self
```

Load the run input response from the given key.
**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `load_from_flow_run_input`

```python theme={null}
load_from_flow_run_input(cls, flow_run_input: 'FlowRunInput') -> Self
```

Load the run input from a FlowRunInput object.

**Args:**

* `flow_run_input`: the flow run input to load the input for

#### `metadata`

```python theme={null}
metadata(self) -> RunInputMetadata
```

#### `respond`

```python theme={null}
respond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Respond to the sender of this input.

#### `save`

```python theme={null}
save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> None
```

Save the run input response to the given key.

**Args:**

* `keyset`: the keyset to save the input for
* `flow_run_id`: the flow run ID to save the input for

#### `send_to`

```python theme={null}
send_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send this input to a flow run.

#### `with_initial_data`

```python theme={null}
with_initial_data(cls: Type[R], description: Optional[str] = None, **kwargs: Any) -> Type[R]
```

Create a new `RunInput` subclass with the given initial data as field defaults.

**Args:**

* `description`: a description to show when resuming a flow run that requires input
* `kwargs`: the initial data to populate the subclass

### `RunInput`

**Methods:**

#### `aload`

```python theme={null}
aload(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self
```

Load the run input response from the given key asynchronously.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `arespond`

```python theme={null}
arespond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Respond to the sender of this input asynchronously.
#### `asave`

```python theme={null}
asave(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> None
```

Save the run input response to the given key asynchronously.

**Args:**

* `keyset`: the keyset to save the input for
* `flow_run_id`: the flow run ID to save the input for

#### `asend_to`

```python theme={null}
asend_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send this input to a flow run asynchronously.

#### `keyset_from_type`

```python theme={null}
keyset_from_type(cls) -> Keyset
```

#### `load`

```python theme={null}
load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self
```

Load the run input response from the given key.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `load_from_flow_run_input`

```python theme={null}
load_from_flow_run_input(cls, flow_run_input: 'FlowRunInput') -> Self
```

Load the run input from a FlowRunInput object.

**Args:**

* `flow_run_input`: the flow run input to load the input for

#### `metadata`

```python theme={null}
metadata(self) -> RunInputMetadata
```

#### `receive`

```python theme={null}
receive(cls, timeout: Optional[float] = 3600, poll_interval: float = 10, raise_timeout_error: bool = False, exclude_keys: Optional[Set[str]] = None, key_prefix: Optional[str] = None, flow_run_id: Optional[UUID] = None) -> GetInputHandler[Self]
```

#### `respond`

```python theme={null}
respond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Respond to the sender of this input.

#### `save`

```python theme={null}
save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> None
```

Save the run input response to the given key.
**Args:**

* `keyset`: the keyset to save the input for
* `flow_run_id`: the flow run ID to save the input for

#### `send_to`

```python theme={null}
send_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send this input to a flow run.

#### `subclass_from_base_model_type`

```python theme={null}
subclass_from_base_model_type(cls, model_cls: Type[pydantic.BaseModel]) -> Type['RunInput']
```

Create a new `RunInput` subclass from the given `pydantic.BaseModel` subclass.

**Args:**

* `model_cls`: the class from which to create the new `RunInput` subclass

#### `with_initial_data`

```python theme={null}
with_initial_data(cls: Type[R], description: Optional[str] = None, **kwargs: Any) -> Type[R]
```

Create a new `RunInput` subclass with the given initial data as field defaults.

**Args:**

* `description`: a description to show when resuming a flow run that requires input
* `kwargs`: the initial data to populate the subclass

### `AutomaticRunInput`

**Methods:**

#### `aload`

```python theme={null}
aload(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T
```

Load the run input response from the given key asynchronously.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `aload`

```python theme={null}
aload(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self
```

Load the run input response from the given key asynchronously.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `arespond`

```python theme={null}
arespond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Respond to the sender of this input asynchronously.

#### `asave`

```python theme={null}
asave(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> None
```

Save the run input response to the given key asynchronously.
**Args:**

* `keyset`: the keyset to save the input for
* `flow_run_id`: the flow run ID to save the input for

#### `asend_to`

```python theme={null}
asend_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send this input to a flow run asynchronously.

#### `keyset_from_type`

```python theme={null}
keyset_from_type(cls) -> Keyset
```

#### `load`

```python theme={null}
load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> T
```

Load the run input response from the given key.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `load`

```python theme={null}
load(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> Self
```

Load the run input response from the given key.

**Args:**

* `keyset`: the keyset to load the input for
* `flow_run_id`: the flow run ID to load the input for

#### `load_from_flow_run_input`

```python theme={null}
load_from_flow_run_input(cls, flow_run_input: 'FlowRunInput') -> Self
```

Load the run input from a FlowRunInput object.

**Args:**

* `flow_run_input`: the flow run input to load the input for

#### `metadata`

```python theme={null}
metadata(self) -> RunInputMetadata
```

#### `receive`

```python theme={null}
receive(cls, timeout: Optional[float] = 3600, poll_interval: float = 10, raise_timeout_error: bool = False, exclude_keys: Optional[Set[str]] = None, key_prefix: Optional[str] = None, flow_run_id: Optional[UUID] = None, with_metadata: bool = False) -> GetAutomaticInputHandler[T]
```

#### `respond`

```python theme={null}
respond(self, run_input: 'RunInput', sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Respond to the sender of this input.

#### `save`

```python theme={null}
save(cls, keyset: Keyset, flow_run_id: Optional[UUID] = None) -> None
```

Save the run input response to the given key.
**Args:**

* `keyset`: the keyset to save the input for
* `flow_run_id`: the flow run ID to save the input for

#### `send_to`

```python theme={null}
send_to(self, flow_run_id: UUID, sender: Optional[str] = None, key_prefix: Optional[str] = None) -> None
```

Send this input to a flow run.

#### `subclass_from_type`

```python theme={null}
subclass_from_type(cls, _type: Type[T]) -> Type['AutomaticRunInput[T]']
```

Create a new `AutomaticRunInput` subclass from the given type.

This method uses the type's name as a key prefix to identify related flow run inputs. This helps in ensuring that values saved under a type (like `List[int]`) are retrievable under the generic type name (like "list").

#### `with_initial_data`

```python theme={null}
with_initial_data(cls: Type[R], description: Optional[str] = None, **kwargs: Any) -> Type[R]
```

Create a new `RunInput` subclass with the given initial data as field defaults.

**Args:**

* `description`: a description to show when resuming a flow run that requires input
* `kwargs`: the initial data to populate the subclass

### `GetInputHandler`

**Methods:**

#### `anext`

```python theme={null}
anext(self) -> R
```

Get the next input asynchronously.

#### `filter_for_inputs`

```python theme={null}
filter_for_inputs(self) -> list['FlowRunInput']
```

Filter for inputs asynchronously.

#### `filter_for_inputs_sync`

```python theme={null}
filter_for_inputs_sync(self) -> list['FlowRunInput']
```

Filter for inputs synchronously.

#### `next`

```python theme={null}
next(self) -> R
```

Get the next input.

#### `to_instance`

```python theme={null}
to_instance(self, flow_run_input: 'FlowRunInput') -> R
```

### `GetAutomaticInputHandler`

**Methods:**

#### `anext`

```python theme={null}
anext(self) -> Union[T, AutomaticRunInput[T]]
```

Get the next input asynchronously.

#### `filter_for_inputs`

```python theme={null}
filter_for_inputs(self) -> list['FlowRunInput']
```

Filter for inputs asynchronously.
#### `filter_for_inputs_sync` ```python theme={null} filter_for_inputs_sync(self) -> list['FlowRunInput'] ``` Filter for inputs synchronously. #### `next` ```python theme={null} next(self) -> Union[T, AutomaticRunInput[T]] ``` Get the next input. #### `to_instance` ```python theme={null} to_instance(self, flow_run_input: 'FlowRunInput') -> Union[T, AutomaticRunInput[T]] ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-locking-__init__ # `prefect.locking` *This module is empty or contains only private/internal implementations.* # filesystem Source: https://docs.prefect.io/v3/api-ref/python/prefect-locking-filesystem # `prefect.locking.filesystem` ## Classes ### `FileSystemLockManager` A lock manager that implements locking using local files. **Attributes:** * `lock_files_directory`: the directory where lock files are stored **Methods:** #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. 
#### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` #### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `await_for_lock` ```python theme={null} await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` #### `await_for_lock` ```python theme={null} await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. #### `is_lock_holder` ```python theme={null} is_lock_holder(self, key: str, holder: str) -> bool ``` #### `is_lock_holder` ```python theme={null} is_lock_holder(self, key: str, holder: str) -> bool ``` Check if the current holder is the lock holder for the transaction record. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. 
**Returns:**

* True if the current holder is the lock holder; False otherwise.

#### `is_locked`

```python theme={null}
is_locked(self, key: str, use_cache: bool = False) -> bool
```

#### `is_locked`

```python theme={null}
is_locked(self, key: str) -> bool
```

Simple check to see if the corresponding record is currently locked.

**Args:**

* `key`: Unique identifier for the transaction record.

**Returns:**

* True if the record is locked; False otherwise.

#### `release_lock`

```python theme={null}
release_lock(self, key: str, holder: str) -> None
```

#### `release_lock`

```python theme={null}
release_lock(self, key: str, holder: str) -> None
```

Releases the lock on the corresponding transaction record.

**Args:**

* `key`: Unique identifier for the transaction record.
* `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock.

#### `wait_for_lock`

```python theme={null}
wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool
```

#### `wait_for_lock`

```python theme={null}
wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool
```

Wait for the corresponding transaction record to become free.

**Args:**

* `key`: Unique identifier for the transaction record.
* `timeout`: Maximum time to wait. None means to wait indefinitely.

**Returns:**

* True if the lock becomes free within the timeout; False otherwise.

# memory

Source: https://docs.prefect.io/v3/api-ref/python/prefect-locking-memory

# `prefect.locking.memory`

## Classes

### `MemoryLockManager`

A lock manager that stores lock information in memory.

Note: because this lock manager stores data in memory, it is not suitable for use in a distributed environment or across different processes.
**Methods:** #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str, acquire_timeout: float | None = None, hold_timeout: float | None = None) -> bool ``` #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str, acquire_timeout: float | None = None, hold_timeout: float | None = None) -> bool ``` #### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. 
**Returns:**

* True if the lock was successfully acquired; False otherwise.

#### `await_for_lock`

```python theme={null}
await_for_lock(self, key: str, timeout: float | None = None) -> bool
```

#### `await_for_lock`

```python theme={null}
await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool
```

Wait for the corresponding transaction record to become free.

**Args:**

* `key`: Unique identifier for the transaction record.
* `timeout`: Maximum time to wait. None means to wait indefinitely.

**Returns:**

* True if the lock becomes free within the timeout; False otherwise.

#### `is_lock_holder`

```python theme={null}
is_lock_holder(self, key: str, holder: str) -> bool
```

#### `is_lock_holder`

```python theme={null}
is_lock_holder(self, key: str, holder: str) -> bool
```

Check if the current holder is the lock holder for the transaction record.

**Args:**

* `key`: Unique identifier for the transaction record.
* `holder`: Unique identifier for the holder of the lock.

**Returns:**

* True if the current holder is the lock holder; False otherwise.

#### `is_locked`

```python theme={null}
is_locked(self, key: str) -> bool
```

#### `is_locked`

```python theme={null}
is_locked(self, key: str) -> bool
```

Simple check to see if the corresponding record is currently locked.

**Args:**

* `key`: Unique identifier for the transaction record.

**Returns:**

* True if the record is locked; False otherwise.

#### `release_lock`

```python theme={null}
release_lock(self, key: str, holder: str) -> None
```

#### `release_lock`

```python theme={null}
release_lock(self, key: str, holder: str) -> None
```

Releases the lock on the corresponding transaction record.

**Args:**

* `key`: Unique identifier for the transaction record.
* `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock.
#### `wait_for_lock` ```python theme={null} wait_for_lock(self, key: str, timeout: float | None = None) -> bool ``` #### `wait_for_lock` ```python theme={null} wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool ``` Wait for the corresponding transaction record to become free. **Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. # protocol Source: https://docs.prefect.io/v3/api-ref/python/prefect-locking-protocol # `prefect.locking.protocol` ## Classes ### `LockManager` **Methods:** #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. * `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default. * `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str, acquire_timeout: Optional[float] = None, hold_timeout: Optional[float] = None) -> bool ``` Acquire a lock for a transaction record with the given key. Will block other actors from updating this transaction record until the lock is released. **Args:** * `key`: Unique identifier for the transaction record. * `holder`: Unique identifier for the holder of the lock. 
* `acquire_timeout`: Max number of seconds to wait for the record to become available if it is locked while attempting to acquire a lock. Pass 0 to attempt to acquire a lock without waiting. Blocks indefinitely by default.
* `hold_timeout`: Max number of seconds to hold the lock for. Holds the lock indefinitely by default.

**Returns:**

* True if the lock was successfully acquired; False otherwise.

#### `await_for_lock`

```python theme={null}
await_for_lock(self, key: str, timeout: Optional[float] = None) -> bool
```

Wait for the corresponding transaction record to become free.

**Args:**

* `key`: Unique identifier for the transaction record.
* `timeout`: Maximum time to wait. None means to wait indefinitely.

**Returns:**

* True if the lock becomes free within the timeout; False otherwise.

#### `is_lock_holder`

```python theme={null}
is_lock_holder(self, key: str, holder: str) -> bool
```

Check if the current holder is the lock holder for the transaction record.

**Args:**

* `key`: Unique identifier for the transaction record.
* `holder`: Unique identifier for the holder of the lock.

**Returns:**

* True if the current holder is the lock holder; False otherwise.

#### `is_locked`

```python theme={null}
is_locked(self, key: str) -> bool
```

Simple check to see if the corresponding record is currently locked.

**Args:**

* `key`: Unique identifier for the transaction record.

**Returns:**

* True if the record is locked; False otherwise.

#### `release_lock`

```python theme={null}
release_lock(self, key: str, holder: str) -> None
```

Releases the lock on the corresponding transaction record.

**Args:**

* `key`: Unique identifier for the transaction record.
* `holder`: Unique identifier for the holder of the lock. Must match the holder provided when acquiring the lock.

#### `wait_for_lock`

```python theme={null}
wait_for_lock(self, key: str, timeout: Optional[float] = None) -> bool
```

Wait for the corresponding transaction record to become free.
**Args:** * `key`: Unique identifier for the transaction record. * `timeout`: Maximum time to wait. None means to wait indefinitely. **Returns:** * True if the lock becomes free within the timeout; False otherwise. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-__init__ # `prefect.logging` *This module is empty or contains only private/internal implementations.* # clients Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-clients # `prefect.logging.clients` ## Functions ### `http_to_ws` ```python theme={null} http_to_ws(url: str) -> str ``` ### `logs_out_socket_from_api_url` ```python theme={null} logs_out_socket_from_api_url(url: str) -> str ``` ### `get_logs_subscriber` ```python theme={null} get_logs_subscriber(filter: Optional['LogFilter'] = None, reconnection_attempts: int = 10) -> 'PrefectLogsSubscriber' ``` Get a logs subscriber based on the current Prefect configuration. Similar to get\_events\_subscriber, this automatically detects whether you're using Prefect Cloud or OSS and returns the appropriate subscriber. ## Classes ### `PrefectLogsSubscriber` Subscribes to a Prefect logs stream, yielding logs as they occur. 
Example:

```python theme={null}
from prefect.logging.clients import PrefectLogsSubscriber
from prefect.client.schemas.filters import LogFilter, LogFilterLevel
import logging

filter = LogFilter(level=LogFilterLevel(ge_=logging.INFO))

async with PrefectLogsSubscriber(filter=filter) as subscriber:
    async for log in subscriber:
        print(log.timestamp, log.level, log.message)
```

**Methods:**

#### `client_name`

```python theme={null}
client_name(self) -> str
```

### `PrefectCloudLogsSubscriber`

Logs subscriber for Prefect Cloud

**Methods:**

#### `client_name`

```python theme={null}
client_name(self) -> str
```

# configuration

Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-configuration

# `prefect.logging.configuration`

## Functions

### `load_logging_config`

```python theme={null}
load_logging_config(path: Path) -> dict[str, Any]
```

Loads logging configuration from a path, allowing override from the environment.

### `ensure_logging_setup`

```python theme={null}
ensure_logging_setup() -> None
```

Ensure Prefect logging is configured in this process, calling `setup_logging` only if it has not already been called.

Use this in remote execution environments (e.g. Dask/Ray workers) where the normal SDK entry point (`import prefect`) may not have triggered logging configuration.

### `setup_logging`

```python theme={null}
setup_logging(incremental: bool | None = None) -> dict[str, Any]
```

Sets up logging. Returns the config used.

# filters

Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-filters

# `prefect.logging.filters`

## Functions

### `redact_substr`

```python theme={null}
redact_substr(obj: Any, substr: str) -> Any
```

Redact a string from a potentially nested object.

**Args:**

* `obj`: The object to redact the string from
* `substr`: The string to redact.

**Returns:**

* The object with the string redacted.

## Classes

### `ObfuscateApiKeyFilter`

A logging filter that obfuscates any string that matches the `obfuscate_string` function.
**Methods:** #### `filter` ```python theme={null} filter(self, record: logging.LogRecord) -> bool ``` # formatters Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-formatters # `prefect.logging.formatters` ## Functions ### `format_exception_info` ```python theme={null} format_exception_info(exc_info: ExceptionInfoType) -> dict[str, Any] ``` ## Classes ### `JsonFormatter` Formats log records as a JSON string. The format may be specified as "pretty" to format the JSON with indents and newlines. **Methods:** #### `format` ```python theme={null} format(self, record: logging.LogRecord) -> str ``` ### `PrefectFormatter` **Methods:** #### `formatMessage` ```python theme={null} formatMessage(self, record: logging.LogRecord) -> str ``` # handlers Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-handlers # `prefect.logging.handlers` ## Functions ### `set_api_log_sink` ```python theme={null} set_api_log_sink(sink: Callable[[Dict[str, Any]], None] | None) -> None ``` ### `emit_api_log` ```python theme={null} emit_api_log(log: Dict[str, Any]) -> None ``` ## Classes ### `APILogWorker` **Methods:** #### `instance` ```python theme={null} instance(cls: Type[Self], *args: Any) -> Self ``` #### `max_batch_size` ```python theme={null} max_batch_size(self) -> int ``` #### `min_interval` ```python theme={null} min_interval(self) -> float | None ``` ### `APILogHandler` A logging handler that sends logs to the Prefect API. Sends log records to the `APILogWorker` which manages sending batches of logs in the background. **Methods:** #### `aflush` ```python theme={null} aflush(cls) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. #### `emit` ```python theme={null} emit(self, record: logging.LogRecord) -> None ``` Send a log to the `APILogWorker` #### `flush` ```python theme={null} flush(self) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. 
Use `aflush` from async contexts instead. #### `handleError` ```python theme={null} handleError(self, record: logging.LogRecord) -> None ``` #### `prepare` ```python theme={null} prepare(self, record: logging.LogRecord) -> Dict[str, Any] ``` Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize. This infers the linked flow or task run from the log record or the current run context. If a flow run id cannot be found, the log will be dropped. Logs exceeding the maximum size will be dropped. ### `WorkerAPILogHandler` **Methods:** #### `aflush` ```python theme={null} aflush(cls) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. #### `emit` ```python theme={null} emit(self, record: logging.LogRecord) -> None ``` Send a log to the `APILogWorker`. #### `flush` ```python theme={null} flush(self) -> None ``` Tell the `APILogWorker` to send any currently enqueued logs and block until completion. Use `aflush` from async contexts instead. #### `handleError` ```python theme={null} handleError(self, record: logging.LogRecord) -> None ``` #### `prepare` ```python theme={null} prepare(self, record: logging.LogRecord) -> Dict[str, Any] ``` Convert a `logging.LogRecord` to the API `LogCreate` schema and serialize. This adds the worker id to the log. Logs exceeding the maximum size will be dropped.
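The `prepare`/`emit`/`flush` lifecycle described above — `prepare` serializes the record, `emit` enqueues it, and batches are shipped in the background — can be sketched with a plain `logging.Handler`. This is a simplified, synchronous stand-in for the `APILogHandler`/`APILogWorker` pair, not Prefect's implementation, and the schema dict is illustrative:

```python theme={null}
import logging


class BatchingAPIHandler(logging.Handler):
    """Toy sketch: serialize records, queue them, ship them in batches."""

    def __init__(self, send_batch, max_batch_size: int = 3):
        super().__init__()
        self._send_batch = send_batch  # stand-in for an API client call
        self._max_batch_size = max_batch_size
        self._queue = []

    def prepare(self, record: logging.LogRecord) -> dict:
        # The real handler converts to the API's `LogCreate` schema here.
        return {"level": record.levelno, "message": record.getMessage()}

    def emit(self, record: logging.LogRecord) -> None:
        self._queue.append(self.prepare(record))
        if len(self._queue) >= self._max_batch_size:
            self.flush()

    def flush(self) -> None:
        if self._queue:
            self._send_batch(self._queue)
            self._queue = []
```

Batching amortizes the per-request cost of shipping logs; the real worker also flushes on a time interval, which this sketch omits.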
### `PrefectConsoleHandler` **Methods:** #### `emit` ```python theme={null} emit(self, record: logging.LogRecord) -> None ``` # highlighters Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-highlighters # `prefect.logging.highlighters` ## Classes ### `LevelHighlighter` Apply style to log levels. ### `UrlHighlighter` Apply style to urls. ### `NameHighlighter` Apply style to names. ### `StateHighlighter` Apply style to states. ### `PrefectConsoleHighlighter` Applies style from multiple highlighters. # loggers Source: https://docs.prefect.io/v3/api-ref/python/prefect-logging-loggers # `prefect.logging.loggers` ## Functions ### `get_logger` ```python theme={null} get_logger(name: str | None = None) -> logging.Logger ``` Get a `prefect` logger. These loggers are intended for internal use within the `prefect` package. See `get_run_logger` for retrieving loggers for use within task or flow runs. By default, only run-related loggers are connected to the `APILogHandler`. ### `get_run_logger` ```python theme={null} get_run_logger(context: Optional['RunContext'] = None, **kwargs: Any) -> Union[logging.Logger, LoggingAdapter] ``` Get a Prefect logger for the current task run or flow run. The logger will be named either `prefect.task_runs` or `prefect.flow_runs`. Contextual data about the run will be attached to the log records. These loggers are connected to the `APILogHandler` by default to send log records to the API. **Args:** * `context`: A specific context may be provided as an override. By default, the context is inferred from global state and this should not be needed. 
* `**kwargs`: Additional keyword arguments will be attached to the log records in addition to the run metadata **Raises:** * `MissingContextError`: If no context can be found ### `flow_run_logger` ```python theme={null} flow_run_logger(flow_run: 'FlowRun | None' = None, flow: 'Flow[Any, Any] | None' = None, flow_run_id: UUID | None = None, **kwargs: str) -> PrefectLogAdapter ``` Create a flow run logger with the run's metadata attached. Additional keyword arguments can be provided to attach custom data to the log records. Accepts a `FlowRun` object or a bare `flow_run_id` UUID. At least one must be provided. When only `flow_run_id` is given, `flow_run_name` and `flow_name` default to `""`. If both are provided, `flow_run` takes precedence. If the flow run context is available, see `get_run_logger` instead. **Raises:** * `ValueError`: If neither `flow_run` nor `flow_run_id` is provided. ### `task_run_logger` ```python theme={null} task_run_logger(task_run: 'TaskRun', task: Optional['Task[Any, Any]'] = None, flow_run: Optional['FlowRun'] = None, flow: Optional['Flow[Any, Any]'] = None, **kwargs: Any) -> LoggingAdapter ``` Create a task run logger with the run's metadata attached. Additional keyword arguments can be provided to attach custom data to the log records. If the task run context is available, see `get_run_logger` instead. If only the flow run context is available, it will be used for default values of `flow_run` and `flow`. ### `get_worker_logger` ```python theme={null} get_worker_logger(worker: 'BaseWorker[Any, Any, Any]', name: Optional[str] = None) -> logging.Logger | LoggingAdapter ``` Create a worker logger with the worker's metadata attached. If the worker has a `backend_id`, it will be attached to the log records; if it does not, a basic logger will be returned.
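The run and worker loggers above attach metadata by wrapping a logger in a `LoggingAdapter`. A minimal stdlib-only sketch of that pattern — merging per-call `extra` with the adapter's own, the behavior `PrefectLogAdapter` provides — with illustrative field names:

```python theme={null}
import logging


class RunLogAdapter(logging.LoggerAdapter):
    """Sketch: attach run metadata to every record, merging (rather than
    overwriting) any `extra` passed on an individual log call."""

    def process(self, msg, kwargs):
        # Per-call extras win over the adapter-level metadata on conflict.
        kwargs["extra"] = {**(self.extra or {}), **(kwargs.get("extra") or {})}
        return msg, kwargs
```

The stdlib `LoggingAdapter` default would replace per-call `extra` entirely with the adapter's, which is exactly the quirk the override above works around.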
### `disable_logger` ```python theme={null} disable_logger(name: str) ``` Gets a logger by name and disables it within the context manager. Upon exiting the context manager, the logger is returned to its original state. ### `disable_run_logger` ```python theme={null} disable_run_logger() ``` Gets both `prefect.flow_run` and `prefect.task_run` and disables them within the context manager. Upon exiting the context manager, both loggers are returned to their original state. ### `print_as_log` ```python theme={null} print_as_log(*args: Any, **kwargs: Any) -> None ``` A patch for `print` to send printed messages to the Prefect run logger. If no run is active, `print` will behave as if it were not patched. If `print` sends data to a file other than `sys.stdout` or `sys.stderr`, it will not be forwarded to the Prefect logger either. ### `patch_print` ```python theme={null} patch_print() ``` Patches the Python builtin `print` function to use `print_as_log`. ## Classes ### `PrefectLogAdapter` Adapter that ensures extra kwargs are passed through correctly; without this the `extra` fields set on the adapter would overshadow any provided on a log-by-log basis. See [https://bugs.python.org/issue32732](https://bugs.python.org/issue32732) — the Python team has declared that this is not a bug in the LoggingAdapter and subclassing is the intended workaround.
**Methods:** #### `getChild` ```python theme={null} getChild(self, suffix: str, extra: dict[str, Any] | None = None) -> 'PrefectLogAdapter' ``` #### `process` ```python theme={null} process(self, msg: str, kwargs: MutableMapping[str, Any]) -> tuple[str, MutableMapping[str, Any]] ``` ### `LogEavesdropper` A context manager that collects logs for the duration of the context. Example:

```python theme={null}
import logging
from prefect.logging import LogEavesdropper

with LogEavesdropper("my_logger") as eavesdropper:
    logging.getLogger("my_logger").info("Hello, world!")
    logging.getLogger("my_logger.child_module").info("Another one!")

print(eavesdropper.text())
# Outputs: "Hello, world! Another one!"
```

**Methods:** #### `emit` ```python theme={null} emit(self, record: LogRecord) -> None ``` The `logging.Handler` implementation, not intended to be called directly. #### `text` ```python theme={null} text(self) -> str ``` Return the collected logs as a single newline-delimited string. # main Source: https://docs.prefect.io/v3/api-ref/python/prefect-main # `prefect.main` *This module is empty or contains only private/internal implementations.* # plugins Source: https://docs.prefect.io/v3/api-ref/python/prefect-plugins # `prefect.plugins` Utilities for loading plugins that extend Prefect's functionality. Plugins are detected by entry point definitions in package setup files. Currently supported entrypoints: * prefect.collections: Identifies this package as a Prefect collection that should be imported when Prefect is imported. ## Functions ### `safe_load_entrypoints` ```python theme={null} safe_load_entrypoints(entrypoints: EntryPoints) -> dict[str, Union[Exception, Any]] ``` Load entry points for a group capturing any exceptions that occur. ### `load_prefect_collections` ```python theme={null} load_prefect_collections() -> dict[str, Union[ModuleType, Exception]] ``` Load all Prefect collections that define an entrypoint in the group `prefect.collections`.
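`safe_load_entrypoints` captures per-entry-point failures so one broken collection cannot prevent the others from loading. A simplified sketch of that pattern against the `importlib.metadata` entry-point API (the real signature and return details may differ):

```python theme={null}
# Sketch only: mirrors the "catch, record, continue" behavior described above.
from typing import Any, Iterable, Union


def safe_load_entrypoints(entrypoints: Iterable[Any]) -> dict[str, Union[Exception, Any]]:
    """Load each entry point, mapping its name to the loaded object or to
    the exception it raised. `ep.name` / `ep.load()` follow the
    `importlib.metadata.EntryPoint` interface."""
    results: dict[str, Union[Exception, Any]] = {}
    for ep in entrypoints:
        try:
            results[ep.name] = ep.load()
        except Exception as exc:
            # A broken plugin is recorded, not raised, so the rest still load.
            results[ep.name] = exc
    return results
```

In practice the iterable would come from something like `importlib.metadata.entry_points(group="prefect.collections")`, and callers can inspect the returned dict for `Exception` values to report load failures.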
# results Source: https://docs.prefect.io/v3/api-ref/python/prefect-results # `prefect.results` ## Functions ### `DEFAULT_STORAGE_KEY_FN` ```python theme={null} DEFAULT_STORAGE_KEY_FN() -> str ``` ### `aget_default_result_storage` ```python theme={null} aget_default_result_storage() -> WritableFileSystem ``` Generate a default file system for result storage. ### `get_default_result_storage` ```python theme={null} get_default_result_storage() -> WritableFileSystem ``` Generate a default file system for result storage. ### `aresolve_result_storage` ```python theme={null} aresolve_result_storage(result_storage: ResultStorage | UUID | Path) -> WritableFileSystem ``` Resolve one of the valid `ResultStorage` input types into a saved block document id and an instance of the block. ### `resolve_result_storage` ```python theme={null} resolve_result_storage(result_storage: ResultStorage | UUID | Path) -> WritableFileSystem ``` Resolve one of the valid `ResultStorage` input types into a saved block document id and an instance of the block. ### `resolve_serializer` ```python theme={null} resolve_serializer(serializer: ResultSerializer) -> Serializer ``` Resolve one of the valid `ResultSerializer` input types into a serializer instance. ### `get_or_create_default_task_scheduling_storage` ```python theme={null} get_or_create_default_task_scheduling_storage() -> ResultStorage ``` Generate a default file system for background task parameter/result storage. ### `get_default_result_serializer` ```python theme={null} get_default_result_serializer() -> Serializer ``` Generate a default serializer for results. ### `get_default_persist_setting` ```python theme={null} get_default_persist_setting() -> bool ``` Return the default option for result persistence. ### `get_default_persist_setting_for_tasks` ```python theme={null} get_default_persist_setting_for_tasks() -> bool ``` Return the default option for result persistence for tasks.
### `should_persist_result` ```python theme={null} should_persist_result() -> bool ``` Return the default option for result persistence determined by the current run context. If there is no current run context, the value of `results.persist_by_default` on the current settings will be returned. ### `default_cache` ```python theme={null} default_cache() -> LRUCache[str, 'ResultRecord[Any]'] ``` ### `result_storage_discriminator` ```python theme={null} result_storage_discriminator(x: Any) -> str ``` ### `get_result_store` ```python theme={null} get_result_store() -> ResultStore ``` Get the current result store. ## Classes ### `ResultStore` Manages the storage and retrieval of results. **Attributes:** * `result_storage`: The storage for result records. If not provided, the default result storage will be used. * `metadata_storage`: The storage for result record metadata. If not provided, the metadata will be stored alongside the results. * `lock_manager`: The lock manager to use for locking result records. If not provided, the store cannot be used in transactions with the SERIALIZABLE isolation level. * `cache_result_in_memory`: Whether to cache results in memory. * `serializer`: The serializer to use for results. * `storage_key_fn`: The function to generate storage keys. **Methods:** #### `aacquire_lock` ```python theme={null} aacquire_lock(self, key: str, holder: str | None = None, timeout: float | None = None) -> bool ``` Acquire a lock for a result record. **Args:** * `key`: The key to acquire the lock for. * `holder`: The holder of the lock. If not provided, a default holder based on the current host, process, and thread will be used. * `timeout`: The timeout for the lock. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `acquire_lock` ```python theme={null} acquire_lock(self, key: str, holder: str | None = None, timeout: float | None = None) -> bool ``` Acquire a lock for a result record. 
**Args:** * `key`: The key to acquire the lock for. * `holder`: The holder of the lock. If not provided, a default holder based on the current host, process, and thread will be used. * `timeout`: The timeout for the lock. **Returns:** * True if the lock was successfully acquired; False otherwise. #### `aexists` ```python theme={null} aexists(self, key: str) -> bool ``` Check if a result record exists in storage. **Args:** * `key`: The key to check for the existence of a result record. **Returns:** * True if the result record exists, False otherwise. #### `apersist_result_record` ```python theme={null} apersist_result_record(self, result_record: 'ResultRecord[Any]', holder: str | None = None) -> None ``` Persist a result record to storage. **Args:** * `result_record`: The result record to persist. #### `aread` ```python theme={null} aread(self, key: str, holder: str | None = None) -> 'ResultRecord[Any]' ``` Read a result record from storage. **Args:** * `key`: The key to read the result record from. * `holder`: The holder of the lock if a lock was set on the record. **Returns:** * A result record. #### `aupdate_for_flow` ```python theme={null} aupdate_for_flow(self, flow: 'Flow[..., Any]') -> Self ``` Create a new result store for a flow with updated settings. **Args:** * `flow`: The flow to update the result store for. **Returns:** * An updated result store. #### `aupdate_for_task` ```python theme={null} aupdate_for_task(self: Self, task: 'Task[P, R]') -> Self ``` Create a new result store for a task. **Args:** * `task`: The task to update the result store for. **Returns:** * An updated result store. #### `await_for_lock` ```python theme={null} await_for_lock(self, key: str, timeout: float | None = None) -> bool ``` Wait for the corresponding transaction record to become free. #### `awrite` ```python theme={null} awrite(self, obj: Any, key: str | None = None, expiration: DateTime | None = None, holder: str | None = None) -> None ``` Write a result to storage. 
**Args:** * `key`: The key to write the result record to. * `obj`: The object to write to storage. * `expiration`: The expiration time for the result record. * `holder`: The holder of the lock if a lock was set on the record. #### `create_result_record` ```python theme={null} create_result_record(self, obj: Any, key: str | None = None, expiration: DateTime | None = None) -> 'ResultRecord[Any]' ``` Create a result record. **Args:** * `key`: The key to create the result record for. * `obj`: The object to create the result record for. * `expiration`: The expiration time for the result record. #### `exists` ```python theme={null} exists(self, key: str) -> bool ``` Check if a result record exists in storage. **Args:** * `key`: The key to check for the existence of a result record. **Returns:** * True if the result record exists, False otherwise. #### `generate_default_holder` ```python theme={null} generate_default_holder() -> str ``` Generate a default holder string using hostname, PID, and thread ID. **Returns:** * A unique identifier string. #### `is_lock_holder` ```python theme={null} is_lock_holder(self, key: str, holder: str | None = None) -> bool ``` Check if the current holder is the lock holder for the result record. **Args:** * `key`: The key to check the lock for. * `holder`: The holder of the lock. If not provided, a default holder based on the current host, process, and thread will be used. **Returns:** * True if the current holder is the lock holder; False otherwise. #### `is_locked` ```python theme={null} is_locked(self, key: str) -> bool ``` Check if a result record is locked. #### `persist_result_record` ```python theme={null} persist_result_record(self, result_record: 'ResultRecord[Any]', holder: str | None = None) -> None ``` Persist a result record to storage. **Args:** * `result_record`: The result record to persist. 
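`generate_default_holder` above identifies a lock holder by host, process, and thread. A plausible sketch of that scheme (the exact string format is an assumption):

```python theme={null}
import os
import socket
import threading


def generate_default_holder() -> str:
    """Unique-enough identity for lock ownership within a deployment:
    hostname, process id, and thread id. Format is illustrative."""
    return f"{socket.gethostname()}:{os.getpid()}:{threading.get_ident()}"
```

Because the value is derived from ambient identity rather than stored state, the same thread always regenerates the same holder, which is what lets `release_lock` default to "the caller who acquired it".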
#### `read` ```python theme={null} read(self, key: str, holder: str | None = None) -> 'ResultRecord[Any]' ``` Read a result record from storage. **Args:** * `key`: The key to read the result record from. * `holder`: The holder of the lock if a lock was set on the record. **Returns:** * A result record. #### `release_lock` ```python theme={null} release_lock(self, key: str, holder: str | None = None) -> None ``` Release a lock for a result record. **Args:** * `key`: The key to release the lock for. * `holder`: The holder of the lock. Must match the holder that acquired the lock. If not provided, a default holder based on the current host, process, and thread will be used. #### `result_storage_block_id` ```python theme={null} result_storage_block_id(self) -> UUID | None ``` #### `supports_isolation_level` ```python theme={null} supports_isolation_level(self, level: 'IsolationLevel') -> bool ``` Check if the result store supports a given isolation level. **Args:** * `level`: The isolation level to check. **Returns:** * True if the isolation level is supported, False otherwise. #### `update_for_flow` ```python theme={null} update_for_flow(self, flow: 'Flow[..., Any]') -> Self ``` Create a new result store for a flow with updated settings. **Args:** * `flow`: The flow to update the result store for. **Returns:** * An updated result store. #### `update_for_task` ```python theme={null} update_for_task(self: Self, task: 'Task[P, R]') -> Self ``` Create a new result store for a task. **Args:** * `task`: The task to update the result store for. **Returns:** * An updated result store. #### `wait_for_lock` ```python theme={null} wait_for_lock(self, key: str, timeout: float | None = None) -> bool ``` Wait for the corresponding transaction record to become free. #### `write` ```python theme={null} write(self, obj: Any, key: str | None = None, expiration: DateTime | None = None, holder: str | None = None) -> None ``` Write a result to storage. 
Handles the creation of a `ResultRecord` and its serialization to storage. **Args:** * `key`: The key to write the result record to. * `obj`: The object to write to storage. * `expiration`: The expiration time for the result record. * `holder`: The holder of the lock if a lock was set on the record. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-runner-__init__ # `prefect.runner` *This module is empty or contains only private/internal implementations.* # runner Source: https://docs.prefect.io/v3/api-ref/python/prefect-runner-runner # `prefect.runner.runner` Runners are responsible for managing the execution of all deployments. When creating a deployment using either `flow.serve` or the `serve` utility, they also will poll for scheduled runs. Example:

```python theme={null}
import time
from prefect import flow, serve


@flow
def slow_flow(sleep: int = 60):
    "Sleepy flow - sleeps the provided amount of time (in seconds)."
    time.sleep(sleep)


@flow
def fast_flow():
    "Fastest flow this side of the Mississippi."
    return


if __name__ == "__main__":
    slow_deploy = slow_flow.to_deployment(name="sleeper", interval=45)
    fast_deploy = fast_flow.to_deployment(name="fast")

    # serve generates a Runner instance
    serve(slow_deploy, fast_deploy)
```

## Classes ### `ProcessMapEntry` ### `Runner` **Methods:** #### `aadd_deployment` ```python theme={null} aadd_deployment(self, deployment: 'RunnerDeployment') -> UUID ``` Registers the deployment with the Prefect API and will monitor for work once the runner is started. Async version. **Args:** * `deployment`: A deployment for the runner to register.
#### `aadd_flow` ```python theme={null} aadd_flow(self, flow: Flow[Any, Any], name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH) -> UUID ``` Provides a flow to the runner to be run based on the provided configuration. Async version. Will create a deployment for the provided flow and register the deployment with the runner. **Args:** * `flow`: A flow for the runner to run. * `name`: The name to give the created deployment. Will default to the name of the runner. * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not to set the created deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this flow. Used to define multiple schedules or additional scheduling options like `timezone`. * `concurrency_limit`: The maximum number of concurrent runs of this flow to allow. 
* `triggers`: A list of triggers that should kick off a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. #### `add_deployment` ```python theme={null} add_deployment(self, deployment: 'RunnerDeployment') -> UUID ``` Registers the deployment with the Prefect API and will monitor for work once the runner is started. **Args:** * `deployment`: A deployment for the runner to register. #### `add_flow` ```python theme={null} add_flow(self, flow: Flow[Any, Any], name: Optional[str] = None, interval: Optional[Union[Iterable[Union[int, float, datetime.timedelta]], int, float, datetime.timedelta]] = None, cron: Optional[Union[Iterable[str], str]] = None, rrule: Optional[Union[Iterable[str], str]] = None, paused: Optional[bool] = None, schedule: Optional[Schedule] = None, schedules: Optional['FlexibleScheduleList'] = None, concurrency_limit: Optional[Union[int, ConcurrencyLimitConfig, None]] = None, parameters: Optional[dict[str, Any]] = None, triggers: Optional[List[Union[DeploymentTriggerTypes, TriggerTypes]]] = None, description: Optional[str] = None, tags: Optional[List[str]] = None, version: Optional[str] = None, enforce_parameter_schema: bool = True, entrypoint_type: EntrypointType = EntrypointType.FILE_PATH) -> UUID ``` Provides a flow to the runner to be run based on the provided configuration. Will create a deployment for the provided flow and register the deployment with the runner. **Args:** * `flow`: A flow for the runner to run.
* `name`: The name to give the created deployment. Will default to the name of the runner. * `interval`: An interval on which to execute the current flow. Accepts either a number or a timedelta object. If a number is given, it will be interpreted as seconds. * `cron`: A cron schedule of when to execute runs of this flow. * `rrule`: An rrule schedule of when to execute runs of this flow. * `paused`: Whether or not to set the created deployment as paused. * `schedule`: A schedule object defining when to execute runs of this deployment. Used to provide additional scheduling options like `timezone` or `parameters`. * `schedules`: A list of schedule objects defining when to execute runs of this flow. Used to define multiple schedules or additional scheduling options like `timezone`. * `concurrency_limit`: The maximum number of concurrent runs of this flow to allow. * `triggers`: A list of triggers that should kick off a run of this flow. * `parameters`: A dictionary of default parameter values to pass to runs of this flow. * `description`: A description for the created deployment. Defaults to the flow's description if not provided. * `tags`: A list of tags to associate with the created deployment for organizational purposes. * `version`: A version for the created deployment. Defaults to the flow's version. * `entrypoint_type`: Type of entrypoint to use for the deployment. When using a module path entrypoint, ensure that the module will be importable in the execution environment. #### `astop` ```python theme={null} astop(self) -> None ``` Stops the runner's polling cycle. Async version. #### `cancel_all` ```python theme={null} cancel_all(self) -> None ``` #### `execute_bundle` ```python theme={null} execute_bundle(self, bundle: SerializedBundle, cwd: Path | str | None = None, env: dict[str, str | None] | None = None) -> None ``` Executes a bundle in a subprocess. Deprecated: Use `execute_bundle()` from `prefect._experimental.bundles.execute` instead.
#### `execute_flow_run` ```python theme={null} execute_flow_run(self, flow_run_id: UUID, entrypoint: str | None = None, command: str | None = None, cwd: Path | str | None = None, env: dict[str, str | None] | None = None, task_status: anyio.abc.TaskStatus[int] = anyio.TASK_STATUS_IGNORED, stream_output: bool = True) -> anyio.abc.Process | multiprocessing.context.SpawnProcess | None ``` Executes a single flow run with the given ID. Deprecated: Use `FlowRunExecutorContext` with `EngineCommandStarter` instead. Execution will wait to monitor for cancellation requests. Exits once the flow run process has exited. **Returns:** * The flow run process. #### `execute_in_background` ```python theme={null} execute_in_background(self, func: Callable[..., Any], *args: Any, **kwargs: Any) -> 'concurrent.futures.Future[Any]' ``` Executes a function in the background. #### `handle_sigterm` ```python theme={null} handle_sigterm(self, *args: Any, **kwargs: Any) -> None ``` Gracefully shuts down the runner when a SIGTERM is received. #### `has_slots_available` ```python theme={null} has_slots_available(self) -> bool ``` Determine if the flow run limit has been reached. **Returns:** * `bool`: True if the limit has not been reached, False otherwise. #### `last_polled` ```python theme={null} last_polled(self) -> datetime.datetime | None ``` #### `last_polled` ```python theme={null} last_polled(self, value: datetime.datetime | None) -> None ``` #### `reschedule_current_flow_runs` ```python theme={null} reschedule_current_flow_runs(self) -> None ``` Reschedules all flow runs that are currently running. Deprecated: SIGTERM rescheduling is now handled inline by the CLI execute path. This should only be called when the runner is shutting down because it kills all child processes and short-circuits the crash detection logic. #### `start` ```python theme={null} start(self, run_once: bool = False, webserver: Optional[bool] = None) -> None ``` Starts a runner.
The runner will begin monitoring for and executing any scheduled work for all added flows. **Args:** * `run_once`: If True, the runner will run through one query loop and then exit. * `webserver`: a boolean for whether to start a webserver for this runner. If provided, overrides the default on the runner. **Examples:** Initialize a Runner, add two flows, and serve them by starting the Runner:

```python theme={null}
import asyncio
from prefect import flow, Runner


@flow
def hello_flow(name):
    print(f"hello {name}")


@flow
def goodbye_flow(name):
    print(f"goodbye {name}")


if __name__ == "__main__":
    runner = Runner(name="my-runner")

    # Will be runnable via the API
    runner.add_flow(hello_flow)

    # Run on a cron schedule
    runner.add_flow(goodbye_flow, schedule={"cron": "0 * * * *"})

    asyncio.run(runner.start())
```

#### `stop` ```python theme={null} stop(self) ``` Stops the runner's polling cycle. # server Source: https://docs.prefect.io/v3/api-ref/python/prefect-runner-server # `prefect.runner.server` ## Functions ### `perform_health_check` ```python theme={null} perform_health_check(runner: 'Runner', delay_threshold: int | None = None) -> Callable[..., JSONResponse] ``` ### `run_count` ```python theme={null} run_count(runner: 'Runner') -> Callable[..., int] ``` ### `shutdown` ```python theme={null} shutdown(runner: 'Runner') -> Callable[..., JSONResponse] ``` ### `build_server` ```python theme={null} build_server(runner: 'Runner') -> FastAPI ``` Build a FastAPI server for a runner. **Args:** * `runner`: the runner this server interacts with and monitors ### `start_webserver` ```python theme={null} start_webserver(runner: 'Runner', log_level: str | None = None) -> None ``` Run a FastAPI server for a runner.
**Args:** * `runner`: the runner this server interacts with and monitors * `log_level`: the log level to use for the server ## Classes ### `RunnerGenericFlowRunRequest` # storage Source: https://docs.prefect.io/v3/api-ref/python/prefect-runner-storage # `prefect.runner.storage` ## Functions ### `create_storage_from_source` ```python theme={null} create_storage_from_source(source: str, pull_interval: Optional[int] = 60) -> RunnerStorage ``` Creates a storage object from a URL. **Args:** * `source`: The URL to create a storage object from. Supports git and `fsspec` URLs. * `pull_interval`: The interval at which to pull contents from remote storage to local storage. **Returns:** * A runner storage compatible object ## Classes ### `RunnerStorage` A storage interface for a runner to use to retrieve remotely stored flow code. **Methods:** #### `destination` ```python theme={null} destination(self) -> Path ``` The local file path to pull contents from remote storage to. #### `pull_code` ```python theme={null} pull_code(self) -> None ``` Pulls contents from remote storage to the local filesystem. #### `pull_interval` ```python theme={null} pull_interval(self) -> Optional[int] ``` The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync. #### `set_base_path` ```python theme={null} set_base_path(self, path: Path) -> None ``` Sets the base path to use when pulling contents from remote storage to local storage. #### `to_pull_step` ```python theme={null} to_pull_step(self) -> dict[str, Any] | list[dict[str, Any]] ``` Returns a dictionary representation of the storage object that can be used as a deployment pull step. ### `GitCredentials` ### `GitRepository` Pulls the contents of a git repository to the local filesystem. **Args:** * `url`: The URL of the git repository to pull from * `credentials`: A dictionary of credentials to use when pulling from the repository.
If a username is provided, an access token must also be provided. * `name`: The name of the repository. If not provided, the name will be inferred from the repository URL. * `branch`: The branch to pull from. Defaults to "main". * `pull_interval`: The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync. * `directories`: The directories to pull from the Git repository (uses git sparse-checkout) **Examples:** Pull the contents of a private git repository to the local filesystem:

```python theme={null}
from prefect.runner.storage import GitRepository

storage = GitRepository(
    url="https://github.com/org/repo.git",
    credentials={"username": "oauth2", "access_token": "my-access-token"},
)

await storage.pull_code()
```

**Methods:** #### `destination` ```python theme={null} destination(self) -> Path ``` #### `is_current_commit` ```python theme={null} is_current_commit(self) -> bool ``` Check if the current commit is the same as the commit SHA #### `is_shallow_clone` ```python theme={null} is_shallow_clone(self) -> bool ``` Check if the repository is a shallow clone #### `is_sparsely_checked_out` ```python theme={null} is_sparsely_checked_out(self) -> bool ``` Check if existing repo is sparsely checked out #### `pull_code` ```python theme={null} pull_code(self) -> None ``` Pulls the contents of the configured repository to the local filesystem. #### `pull_interval` ```python theme={null} pull_interval(self) -> Optional[int] ``` #### `set_base_path` ```python theme={null} set_base_path(self, path: Path) -> None ``` #### `to_pull_step` ```python theme={null} to_pull_step(self) -> dict[str, Any] ``` ### `RemoteStorage` Pulls the contents of a remote storage location to the local filesystem. **Args:** * `url`: The URL of the remote storage location to pull from. Supports `fsspec` URLs. Some protocols may require an additional `fsspec` dependency to be installed.
Refer to the [`fsspec` docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#other-known-implementations) for more details. * `pull_interval`: The interval in seconds at which to pull contents from remote storage to local storage. If None, remote storage will perform a one-time sync. * `**settings`: Any additional settings to pass the `fsspec` filesystem class. **Examples:** Pull the contents of a remote storage location to the local filesystem: ```python theme={null} from prefect.runner.storage import RemoteStorage storage = RemoteStorage(url="s3://my-bucket/my-folder") await storage.pull_code() ``` Pull the contents of a remote storage location to the local filesystem with additional settings: ```python theme={null} from prefect.runner.storage import RemoteStorage from prefect.blocks.system import Secret storage = RemoteStorage( url="s3://my-bucket/my-folder", # Use Secret blocks to keep credentials out of your code key=Secret.load("my-aws-access-key"), secret=Secret.load("my-aws-secret-key"), ) await storage.pull_code() ``` **Methods:** #### `destination` ```python theme={null} destination(self) -> Path ``` The local file path to pull contents from remote storage to. #### `pull_code` ```python theme={null} pull_code(self) -> None ``` Pulls contents from remote storage to the local filesystem. #### `pull_interval` ```python theme={null} pull_interval(self) -> Optional[int] ``` The interval at which contents from remote storage should be pulled to local storage. If None, remote storage will perform a one-time sync. #### `set_base_path` ```python theme={null} set_base_path(self, path: Path) -> None ``` #### `to_pull_step` ```python theme={null} to_pull_step(self) -> dict[str, Any] ``` Returns a dictionary representation of the storage object that can be used as a deployment pull step. ### `BlockStorageAdapter` A storage adapter for a storage block object to allow it to be used as a runner storage object. 
**Methods:**

#### `destination`

```python theme={null}
destination(self) -> Path
```

#### `pull_code`

```python theme={null}
pull_code(self) -> None
```

#### `pull_interval`

```python theme={null}
pull_interval(self) -> Optional[int]
```

#### `set_base_path`

```python theme={null}
set_base_path(self, path: Path) -> None
```

#### `to_pull_step`

```python theme={null}
to_pull_step(self) -> dict[str, Any]
```

### `LocalStorage`

Sets the working directory in the local filesystem.

**Args:**

* `path`: Local file path to set the working directory for the flow

**Examples:**

Set the working directory for the local path to the flow:

```python theme={null}
from prefect.runner.storage import LocalStorage

storage = LocalStorage(
    path="/path/to/local/flow_directory",
)
```

**Methods:**

#### `destination`

```python theme={null}
destination(self) -> Path
```

#### `pull_code`

```python theme={null}
pull_code(self) -> None
```

#### `pull_interval`

```python theme={null}
pull_interval(self) -> Optional[int]
```

#### `set_base_path`

```python theme={null}
set_base_path(self, path: Path) -> None
```

#### `to_pull_step`

```python theme={null}
to_pull_step(self) -> dict[str, Any]
```

Returns a dictionary representation of the storage object that can be used as a deployment pull step.

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-runtime-__init__

# `prefect.runtime`

Module for easily accessing dynamic attributes for a given run, especially those generated from deployments.

Example usage:

```python theme={null}
from prefect.runtime import deployment

print(f"This script is running from deployment {deployment.id} with parameters {deployment.parameters}")
```

# deployment

Source: https://docs.prefect.io/v3/api-ref/python/prefect-runtime-deployment

# `prefect.runtime.deployment`

Access attributes of the current deployment run dynamically. Note that if a deployment is not currently being run, all attributes will return empty values.
You can mock the runtime attributes for testing purposes by setting environment variables prefixed with `PREFECT__RUNTIME__DEPLOYMENT`. Example usage: ```python theme={null} from prefect.runtime import deployment def get_task_runner(): task_runner_config = deployment.parameters.get("runner_config", "default config here") return DummyTaskRunner(task_runner_specs=task_runner_config) ``` Available attributes: * `id`: the deployment's unique ID * `name`: the deployment's name * `version`: the deployment's version * `flow_run_id`: the current flow run ID for this deployment * `parameters`: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values set on the deployment object or those directly provided via API for this run ## Functions ### `get_id` ```python theme={null} get_id() -> Optional[str] ``` ### `get_parameters` ```python theme={null} get_parameters() -> dict[str, Any] ``` ### `get_name` ```python theme={null} get_name() -> Optional[str] ``` ### `get_version` ```python theme={null} get_version() -> Optional[str] ``` ### `get_flow_run_id` ```python theme={null} get_flow_run_id() -> Optional[str] ``` # flow_run Source: https://docs.prefect.io/v3/api-ref/python/prefect-runtime-flow_run # `prefect.runtime.flow_run` Access attributes of the current flow run dynamically. Note that if a flow run cannot be discovered, all attributes will return empty values. You can mock the runtime attributes for testing purposes by setting environment variables prefixed with `PREFECT__RUNTIME__FLOW_RUN`. 
Available attributes: * `id`: the flow run's unique ID * `tags`: the flow run's set of tags * `scheduled_start_time`: the flow run's expected scheduled start time; defaults to now if not present * `name`: the name of the flow run * `flow_name`: the name of the flow * `flow_version`: the version of the flow * `parameters`: the parameters that were passed to this run; note that these do not necessarily include default values set on the flow function, only the parameter values explicitly passed for the run * `parent_flow_run_id`: the ID of the flow run that triggered this run, if any * `parent_deployment_id`: the ID of the deployment that triggered this run, if any * `run_count`: the number of times this flow run has been run ## Functions ### `get_id` ```python theme={null} get_id() -> Optional[str] ``` ### `get_tags` ```python theme={null} get_tags() -> List[str] ``` ### `get_run_count` ```python theme={null} get_run_count() -> int ``` ### `get_name` ```python theme={null} get_name() -> Optional[str] ``` ### `get_flow_name` ```python theme={null} get_flow_name() -> Optional[str] ``` ### `get_flow_version` ```python theme={null} get_flow_version() -> Optional[str] ``` ### `get_scheduled_start_time` ```python theme={null} get_scheduled_start_time() -> DateTime ``` ### `get_parameters` ```python theme={null} get_parameters() -> Dict[str, Any] ``` ### `get_parent_flow_run_id` ```python theme={null} get_parent_flow_run_id() -> Optional[str] ``` ### `get_parent_deployment_id` ```python theme={null} get_parent_deployment_id() -> Optional[str] ``` ### `get_root_flow_run_id` ```python theme={null} get_root_flow_run_id() -> str ``` ### `get_flow_run_api_url` ```python theme={null} get_flow_run_api_url() -> Optional[str] ``` ### `get_flow_run_ui_url` ```python theme={null} get_flow_run_ui_url() -> Optional[str] ``` ### `get_job_variables` ```python theme={null} get_job_variables() -> Optional[Dict[str, Any]] ``` # task_run Source: 
https://docs.prefect.io/v3/api-ref/python/prefect-runtime-task_run

# `prefect.runtime.task_run`

Access attributes of the current task run dynamically. Note that if a task run cannot be discovered, all attributes will return empty values.

You can mock the runtime attributes for testing purposes by setting environment variables prefixed with `PREFECT__RUNTIME__TASK_RUN`.

Available attributes:

* `id`: the task run's unique ID
* `name`: the name of the task run
* `tags`: the task run's set of tags
* `parameters`: the parameters the task was called with
* `run_count`: the number of times this task run has been run
* `task_name`: the name of the task

## Functions

### `get_id`

```python theme={null}
get_id() -> str | None
```

### `get_tags`

```python theme={null}
get_tags() -> list[str]
```

### `get_run_count`

```python theme={null}
get_run_count() -> int
```

### `get_name`

```python theme={null}
get_name() -> str | None
```

### `get_task_name`

```python theme={null}
get_task_name() -> str | None
```

### `get_parameters`

```python theme={null}
get_parameters() -> dict[str, Any]
```

### `get_task_run_api_url`

```python theme={null}
get_task_run_api_url() -> str | None
```

### `get_task_run_ui_url`

```python theme={null}
get_task_run_ui_url() -> str | None
```

# schedules

Source: https://docs.prefect.io/v3/api-ref/python/prefect-schedules

# `prefect.schedules`

This module contains functionality for creating schedules for deployments.

## Functions

### `Cron`

```python theme={null}
Cron(cron: str, timezone: str | None = None, day_or: bool = True, active: bool = True, parameters: dict[str, Any] | None = None, slug: str | None = None) -> Schedule
```

Creates a cron schedule.

**Args:**

* `cron`: A valid cron string (e.g. "0 0 \* \* \*").
* `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York).
* `day_or`: Control how `day` and `day_of_week` entries are handled. Defaults to True, matching cron which connects those values using OR.
If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes on each second Friday of a month by setting the day of month and the weekday.
* `active`: Whether or not the schedule is active.
* `parameters`: A dictionary containing parameter overrides for the schedule.
* `slug`: A unique identifier for the schedule.

**Returns:**

* A cron schedule.

**Examples:**

Create a cron schedule that runs every day at 12:00 AM UTC:

```python theme={null}
from prefect.schedules import Cron

Cron("0 0 * * *")
```

Create a cron schedule that runs every Monday at 8:00 AM in the America/New\_York timezone:

```python theme={null}
from prefect.schedules import Cron

Cron("0 8 * * 1", timezone="America/New_York")
```

### `Interval`

```python theme={null}
Interval(interval: datetime.timedelta | int, anchor_date: datetime.datetime | None = None, timezone: str | None = None, active: bool = True, parameters: dict[str, Any] | None = None, slug: str | None = None) -> Schedule
```

Creates an interval schedule.

**Args:**

* `interval`: The interval to use for the schedule. If an integer is provided, it will be interpreted as seconds.
* `anchor_date`: The anchor date to use for the schedule.
* `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York).
* `active`: Whether or not the schedule is active.
* `parameters`: A dictionary containing parameter overrides for the schedule.
* `slug`: A unique identifier for the schedule.

**Returns:**

* An interval schedule.
**Examples:**

Create an interval schedule that runs every hour:

```python theme={null}
from datetime import timedelta

from prefect.schedules import Interval

Interval(timedelta(hours=1))
```

Create an interval schedule that runs every 60 seconds starting at a specific date:

```python theme={null}
from datetime import datetime

from prefect.schedules import Interval

Interval(60, anchor_date=datetime(2024, 1, 1))
```

### `RRule`

```python theme={null}
RRule(rrule: str, timezone: str | None = None, active: bool = True, parameters: dict[str, Any] | None = None, slug: str | None = None) -> Schedule
```

Creates an RRule schedule.

**Args:**

* `rrule`: A valid RRule string (e.g. "RRULE:FREQ=DAILY;INTERVAL=1").
* `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York).
* `active`: Whether or not the schedule is active.
* `parameters`: A dictionary containing parameter overrides for the schedule.
* `slug`: A unique identifier for the schedule.

**Returns:**

* An RRule schedule.

**Examples:**

Create an RRule schedule that runs every day at 12:00 AM UTC:

```python theme={null}
from prefect.schedules import RRule

RRule("RRULE:FREQ=DAILY;INTERVAL=1")
```

Create an RRule schedule that runs every second Friday of the month in the America/Chicago timezone:

```python theme={null}
from prefect.schedules import RRule

RRule("RRULE:FREQ=MONTHLY;INTERVAL=1;BYDAY=2FR", timezone="America/Chicago")
```

## Classes

### `Schedule`

A dataclass representing a schedule. Note that only one of `interval`, `cron`, or `rrule` can be defined at a time.

**Attributes:**

* `interval`: A timedelta representing the frequency of the schedule.
* `cron`: A valid cron string (e.g. "0 0 \* \* \*").
* `rrule`: A valid RRule string (e.g. "RRULE:FREQ=DAILY;INTERVAL=1").
* `timezone`: A valid timezone string in IANA tzdata format (e.g. America/New\_York).
* `anchor_date`: An anchor date to schedule increments against; if not provided, the current timestamp will be used.
* `day_or`: Control how `day` and `day_of_week` entries are handled. Defaults to True, matching cron which connects those values using OR. If the switch is set to False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes on each second Friday of a month by setting the day of month and the weekday.
* `active`: Whether or not the schedule is active.
* `parameters`: A dictionary containing parameter overrides for the schedule.
* `slug`: A unique identifier for the schedule.

# serializers

Source: https://docs.prefect.io/v3/api-ref/python/prefect-serializers

# `prefect.serializers`

Serializer implementations for converting objects to bytes and bytes to objects.

All serializers are based on the `Serializer` class and include a `type` string that allows them to be referenced without referencing the actual class. For example, you can often specify the `JSONSerializer` with the string "json".

Some serializers support additional settings for configuration of serialization. These are stored on the instance so the same settings can be used to load saved objects.

All serializers must implement `dumps` and `loads`, which convert objects to bytes and bytes to an object respectively.

## Functions

### `prefect_json_object_encoder`

```python theme={null}
prefect_json_object_encoder(obj: Any) -> Any
```

`JSONEncoder.default` for encoding objects into JSON with extended type support.

Raises a `TypeError` to fallback on other encoders on failure.

### `prefect_json_object_decoder`

```python theme={null}
prefect_json_object_decoder(result: dict[str, Any]) -> Any
```

`JSONDecoder.object_hook` for decoding objects from JSON when previously encoded with `prefect_json_object_encoder`

## Classes

### `Serializer`

A serializer that can encode objects of type 'D' into bytes.

**Methods:**

#### `dumps`

```python theme={null}
dumps(self, obj: D) -> bytes
```

Encode the object into a blob of bytes.
#### `loads` ```python theme={null} loads(self, blob: bytes) -> D ``` Decode the blob of bytes into an object. ### `PickleSerializer` Serializes objects using the pickle protocol. * Uses `cloudpickle` by default. See `picklelib` for using alternative libraries. * Stores the version of the pickle library to check for compatibility during deserialization. * Wraps pickles in base64 for safe transmission. **Methods:** #### `check_picklelib` ```python theme={null} check_picklelib(cls, value: str) -> str ``` #### `dumps` ```python theme={null} dumps(self, obj: D) -> bytes ``` #### `loads` ```python theme={null} loads(self, blob: bytes) -> D ``` ### `JSONSerializer` Serializes data to JSON. Input types must be compatible with the stdlib json library. Wraps the `json` library to serialize to UTF-8 bytes instead of string types. **Methods:** #### `dumps` ```python theme={null} dumps(self, obj: D) -> bytes ``` #### `dumps_kwargs_cannot_contain_default` ```python theme={null} dumps_kwargs_cannot_contain_default(cls, value: dict[str, Any]) -> dict[str, Any] ``` #### `loads` ```python theme={null} loads(self, blob: bytes) -> D ``` #### `loads_kwargs_cannot_contain_object_hook` ```python theme={null} loads_kwargs_cannot_contain_object_hook(cls, value: dict[str, Any]) -> dict[str, Any] ``` ### `CompressedSerializer` Wraps another serializer, compressing its output. Uses `lzma` by default. See `compressionlib` for using alternative libraries. **Attributes:** * `serializer`: The serializer to use before compression. * `compressionlib`: The import path of a compression module to use. Must have methods `compress(bytes) -> bytes` and `decompress(bytes) -> bytes`. * `level`: If not null, the level of compression to pass to `compress`. 
**Methods:** #### `check_compressionlib` ```python theme={null} check_compressionlib(cls, value: str) -> str ``` #### `dumps` ```python theme={null} dumps(self, obj: D) -> bytes ``` #### `loads` ```python theme={null} loads(self, blob: bytes) -> D ``` #### `validate_serializer` ```python theme={null} validate_serializer(cls, value: Union[str, Serializer[D]]) -> Serializer[D] ``` ### `CompressedPickleSerializer` A compressed serializer preconfigured to use the pickle serializer. ### `CompressedJSONSerializer` A compressed serializer preconfigured to use the json serializer. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-__init__ # `prefect.server` *This module is empty or contains only private/internal implementations.* # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-__init__ # `prefect.server.api` *This module is empty or contains only private/internal implementations.* # admin Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-admin # `prefect.server.api.admin` Routes for admin-level interactions with the Prefect REST API. ## Functions ### `read_settings` ```python theme={null} read_settings() -> prefect.settings.Settings ``` Get the current Prefect REST API settings. Secret setting values will be obfuscated. ### `read_version` ```python theme={null} read_version() -> str ``` Returns the Prefect version number # artifacts Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-artifacts # `prefect.server.api.artifacts` Routes for interacting with artifact objects. ## Functions ### `create_artifact` ```python theme={null} create_artifact(artifact: actions.ArtifactCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Artifact ``` Create an artifact. For more information, see [https://docs.prefect.io/v3/concepts/artifacts](https://docs.prefect.io/v3/concepts/artifacts). 
### `read_artifact` ```python theme={null} read_artifact(artifact_id: UUID = Path(..., description='The ID of the artifact to retrieve.', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Artifact ``` Retrieve an artifact from the database. ### `read_latest_artifact` ```python theme={null} read_latest_artifact(key: str = Path(..., description='The key of the artifact to retrieve.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Artifact ``` Retrieve the latest artifact from the artifact table. ### `read_artifacts` ```python theme={null} read_artifacts(sort: sorting.ArtifactSort = Body(sorting.ArtifactSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), artifacts: filters.ArtifactFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[core.Artifact] ``` Retrieve artifacts from the database. ### `read_latest_artifacts` ```python theme={null} read_latest_artifacts(sort: sorting.ArtifactCollectionSort = Body(sorting.ArtifactCollectionSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), artifacts: filters.ArtifactCollectionFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[core.ArtifactCollection] ``` Retrieve artifacts from the database. 
### `count_artifacts` ```python theme={null} count_artifacts(artifacts: filters.ArtifactFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count artifacts from the database. ### `count_latest_artifacts` ```python theme={null} count_latest_artifacts(artifacts: filters.ArtifactCollectionFilter = None, flow_runs: filters.FlowRunFilter = None, task_runs: filters.TaskRunFilter = None, flows: filters.FlowFilter = None, deployments: filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count artifacts from the database. ### `update_artifact` ```python theme={null} update_artifact(artifact: actions.ArtifactUpdate, artifact_id: UUID = Path(..., description='The ID of the artifact to update.', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update an artifact in the database. ### `delete_artifact` ```python theme={null} delete_artifact(artifact_id: UUID = Path(..., description='The ID of the artifact to delete.', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete an artifact from the database. # automations Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-automations # `prefect.server.api.automations` ## Functions ### `create_automation` ```python theme={null} create_automation(automation: AutomationCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> Automation ``` Create an automation. For more information, see [https://docs.prefect.io/v3/concepts/automations](https://docs.prefect.io/v3/concepts/automations). 
### `update_automation` ```python theme={null} update_automation(automation: AutomationUpdate, automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `patch_automation` ```python theme={null} patch_automation(automation: AutomationPartialUpdate, automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_automation` ```python theme={null} delete_automation(automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_automations` ```python theme={null} read_automations(sort: AutomationSort = Body(AutomationSort.NAME_ASC), limit: int = LimitBody(), offset: int = Body(0, ge=0), automations: Optional[AutomationFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[Automation] ``` ### `count_automations` ```python theme={null} count_automations(db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` ### `read_automation` ```python theme={null} read_automation(automation_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> Automation ``` ### `read_automations_related_to_resource` ```python theme={null} read_automations_related_to_resource(resource_id: str = Path(..., alias='resource_id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[Automation] ``` ### `delete_automations_owned_by_resource` ```python theme={null} delete_automations_owned_by_resource(resource_id: str = Path(..., alias='resource_id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # background_workers Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-background_workers # `prefect.server.api.background_workers` ## Functions ### `background_worker` ```python theme={null} background_worker(docket: Docket, ephemeral: bool = False, 
webserver_only: bool = False) -> AsyncGenerator[None, None] ``` # block_capabilities Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-block_capabilities # `prefect.server.api.block_capabilities` Routes for interacting with block capabilities. ## Functions ### `read_available_block_capabilities` ```python theme={null} read_available_block_capabilities(db: PrefectDBInterface = Depends(provide_database_interface)) -> List[str] ``` Get available block capabilities. For more information, see [https://docs.prefect.io/v3/concepts/blocks](https://docs.prefect.io/v3/concepts/blocks). # block_documents Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-block_documents # `prefect.server.api.block_documents` Routes for interacting with block objects. ## Functions ### `create_block_document` ```python theme={null} create_block_document(block_document: schemas.actions.BlockDocumentCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockDocument ``` Create a new block document. For more information, see [https://docs.prefect.io/v3/concepts/blocks](https://docs.prefect.io/v3/concepts/blocks). ### `read_block_documents` ```python theme={null} read_block_documents(limit: int = dependencies.LimitBody(), block_documents: Optional[schemas.filters.BlockDocumentFilter] = None, block_types: Optional[schemas.filters.BlockTypeFilter] = None, block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, include_secrets: bool = Body(False, description='Whether to include sensitive values in the block document.'), sort: Optional[schemas.sorting.BlockDocumentSort] = Body(schemas.sorting.BlockDocumentSort.NAME_ASC), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.BlockDocument] ``` Query for block documents. 
### `count_block_documents` ```python theme={null} count_block_documents(block_documents: Optional[schemas.filters.BlockDocumentFilter] = None, block_types: Optional[schemas.filters.BlockTypeFilter] = None, block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count block documents. ### `read_block_document_by_id` ```python theme={null} read_block_document_by_id(block_document_id: UUID = Path(..., description='The block document id', alias='id'), include_secrets: bool = Query(False, description='Whether to include sensitive values in the block document.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockDocument ``` ### `delete_block_document` ```python theme={null} delete_block_document(block_document_id: UUID = Path(..., description='The block document id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `update_block_document_data` ```python theme={null} update_block_document_data(block_document: schemas.actions.BlockDocumentUpdate, block_document_id: UUID = Path(..., description='The block document id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # block_schemas Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-block_schemas # `prefect.server.api.block_schemas` Routes for interacting with block schema objects. ## Functions ### `create_block_schema` ```python theme={null} create_block_schema(block_schema: schemas.actions.BlockSchemaCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockSchema ``` Create a block schema. For more information, see [https://docs.prefect.io/v3/concepts/blocks](https://docs.prefect.io/v3/concepts/blocks). 
### `delete_block_schema` ```python theme={null} delete_block_schema(block_schema_id: UUID = Path(..., description='The block schema id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface), api_version: str = Depends(dependencies.provide_request_api_version)) -> None ``` Delete a block schema by id. ### `read_block_schemas` ```python theme={null} read_block_schemas(block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.BlockSchema] ``` Read all block schemas, optionally filtered by type ### `read_block_schema_by_id` ```python theme={null} read_block_schema_by_id(block_schema_id: UUID = Path(..., description='The block schema id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockSchema ``` Get a block schema by id. ### `read_block_schema_by_checksum` ```python theme={null} read_block_schema_by_checksum(block_schema_checksum: str = Path(..., description='The block schema checksum', alias='checksum'), db: PrefectDBInterface = Depends(provide_database_interface), version: Optional[str] = Query(None, description='Version of block schema. If not provided the most recently created block schema with the matching checksum will be returned.')) -> schemas.core.BlockSchema ``` # block_types Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-block_types # `prefect.server.api.block_types` ## Functions ### `create_block_type` ```python theme={null} create_block_type(block_type: schemas.actions.BlockTypeCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockType ``` Create a new block type. For more information, see [https://docs.prefect.io/v3/concepts/blocks](https://docs.prefect.io/v3/concepts/blocks). 
### `read_block_type_by_id` ```python theme={null} read_block_type_by_id(block_type_id: UUID = Path(..., description='The block type ID', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockType ``` Get a block type by ID. ### `read_block_type_by_slug` ```python theme={null} read_block_type_by_slug(block_type_slug: str = Path(..., description='The block type name', alias='slug'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.BlockType ``` Get a block type by name. ### `read_block_types` ```python theme={null} read_block_types(block_types: Optional[schemas.filters.BlockTypeFilter] = None, block_schemas: Optional[schemas.filters.BlockSchemaFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.BlockType] ``` Gets all block types. Optionally limit return with limit and offset. ### `update_block_type` ```python theme={null} update_block_type(block_type: schemas.actions.BlockTypeUpdate, block_type_id: UUID = Path(..., description='The block type ID', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update a block type. 
### `delete_block_type` ```python theme={null} delete_block_type(block_type_id: UUID = Path(..., description='The block type ID', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_block_documents_for_block_type` ```python theme={null} read_block_documents_for_block_type(db: PrefectDBInterface = Depends(provide_database_interface), block_type_slug: str = Path(..., description='The block type name', alias='slug'), include_secrets: bool = Query(False, description='Whether to include sensitive values in the block document.')) -> List[schemas.core.BlockDocument] ``` ### `read_block_document_by_name_for_block_type` ```python theme={null} read_block_document_by_name_for_block_type(db: PrefectDBInterface = Depends(provide_database_interface), block_type_slug: str = Path(..., description='The block type name', alias='slug'), block_document_name: str = Path(..., description='The block type name'), include_secrets: bool = Query(False, description='Whether to include sensitive values in the block document.')) -> schemas.core.BlockDocument ``` ### `install_system_block_types` ```python theme={null} install_system_block_types(db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # clients Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-clients # `prefect.server.api.clients` ## Classes ### `BaseClient` ### `OrchestrationClient` **Methods:** #### `create_flow_run` ```python theme={null} create_flow_run(self, deployment_id: UUID, flow_run_create: DeploymentFlowRunCreate) -> Response ``` #### `pause_deployment` ```python theme={null} pause_deployment(self, deployment_id: UUID) -> Response ``` #### `pause_work_pool` ```python theme={null} pause_work_pool(self, work_pool_name: str) -> Response ``` #### `pause_work_queue` ```python theme={null} pause_work_queue(self, work_queue_id: UUID) -> Response ``` #### `read_block_document_raw` ```python theme={null} read_block_document_raw(self, 
block_document_id: UUID, include_secrets: bool = True) -> Response ``` #### `read_concurrency_limit_v2_raw` ```python theme={null} read_concurrency_limit_v2_raw(self, concurrency_limit_id: UUID) -> Response ``` #### `read_deployment` ```python theme={null} read_deployment(self, deployment_id: UUID) -> Optional[DeploymentResponse] ``` #### `read_deployment_raw` ```python theme={null} read_deployment_raw(self, deployment_id: UUID) -> Response ``` #### `read_flow_raw` ```python theme={null} read_flow_raw(self, flow_id: UUID) -> Response ``` #### `read_flow_run_raw` ```python theme={null} read_flow_run_raw(self, flow_run_id: UUID) -> Response ``` #### `read_task_run_raw` ```python theme={null} read_task_run_raw(self, task_run_id: UUID) -> Response ``` #### `read_work_pool` ```python theme={null} read_work_pool(self, work_pool_id: UUID) -> Optional[WorkPool] ``` #### `read_work_pool_raw` ```python theme={null} read_work_pool_raw(self, work_pool_id: UUID) -> Response ``` #### `read_work_queue_raw` ```python theme={null} read_work_queue_raw(self, work_queue_id: UUID) -> Response ``` #### `read_work_queue_status_raw` ```python theme={null} read_work_queue_status_raw(self, work_queue_id: UUID) -> Response ``` #### `read_workspace_variables` ```python theme={null} read_workspace_variables(self, names: Optional[List[str]] = None) -> Dict[str, StrictVariableValue] ``` #### `request` ```python theme={null} request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` #### `resume_deployment` ```python theme={null} resume_deployment(self, deployment_id: UUID) -> Response ``` #### `resume_flow_run` ```python theme={null} resume_flow_run(self, flow_run_id: UUID) -> OrchestrationResult ``` #### `resume_work_pool` ```python theme={null} resume_work_pool(self, work_pool_name: str) -> Response ``` #### `resume_work_queue` ```python theme={null} resume_work_queue(self, 
work_queue_id: UUID) -> Response ``` #### `set_flow_run_state` ```python theme={null} set_flow_run_state(self, flow_run_id: UUID, state: StateCreate, force: bool = False) -> Response ``` ### `WorkPoolsOrchestrationClient` **Methods:** #### `read_work_pool` ```python theme={null} read_work_pool(self, work_pool_name: str) -> WorkPool ``` Reads information for a given work pool. Args: work\_pool\_name: The name of the work pool for which to get information. Returns: Information about the requested work pool. #### `request` ```python theme={null} request(self, method: HTTP_METHODS, path: 'ServerRoutes', params: dict[str, Any] | None = None, path_params: dict[str, Any] | None = None, **kwargs: Any) -> 'Response' ``` # collections Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-collections # `prefect.server.api.collections` ## Functions ### `read_view_content` ```python theme={null} read_view_content(view: str) -> Dict[str, Any] ``` Reads the content of a view from the prefect-collection-registry. ### `get_collection_view` ```python theme={null} get_collection_view(view: str) -> dict[str, Any] ``` # concurrency_limits Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-concurrency_limits # `prefect.server.api.concurrency_limits` Routes for interacting with concurrency limit objects. This module provides a V1 API adapter that routes requests to the V2 concurrency system. After the migration, V1 limits are converted to V2, but the V1 API continues to work for backward compatibility. ## Functions ### `create_concurrency_limit` ```python theme={null} create_concurrency_limit(concurrency_limit: schemas.actions.ConcurrencyLimitCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimit ``` Create a task run concurrency limit.
For more information, see [https://docs.prefect.io/v3/concepts/tag-based-concurrency-limits](https://docs.prefect.io/v3/concepts/tag-based-concurrency-limits). ### `read_concurrency_limit` ```python theme={null} read_concurrency_limit(concurrency_limit_id: UUID = Path(..., description='The concurrency limit id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimit ``` Get a concurrency limit by id. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. ### `read_concurrency_limit_by_tag` ```python theme={null} read_concurrency_limit_by_tag(tag: str = Path(..., description='The tag name', alias='tag'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimit ``` Get a concurrency limit by tag. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. ### `read_concurrency_limits` ```python theme={null} read_concurrency_limits(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[schemas.core.ConcurrencyLimit] ``` Query for concurrency limits. For each concurrency limit the `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. 
### `reset_concurrency_limit_by_tag` ```python theme={null} reset_concurrency_limit_by_tag(tag: str = Path(..., description='The tag name'), slot_override: Optional[List[UUID]] = Body(None, embed=True, description='Manual override for active concurrency limit slots.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_concurrency_limit` ```python theme={null} delete_concurrency_limit(concurrency_limit_id: UUID = Path(..., description='The concurrency limit id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_concurrency_limit_by_tag` ```python theme={null} delete_concurrency_limit_by_tag(tag: str = Path(..., description='The tag name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `increment_concurrency_limits_v1` ```python theme={null} increment_concurrency_limits_v1(names: List[str] = Body(..., description='The tags to acquire a slot for'), task_run_id: UUID = Body(..., description='The ID of the task run acquiring the slot'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` Increment concurrency limits for the given tags. During migration, this handles both V1 and V2 limits to support mixed states. Post-migration, it only uses V2 with lease-based concurrency. ### `decrement_concurrency_limits_v1` ```python theme={null} decrement_concurrency_limits_v1(names: List[str] = Body(..., description='The tags to release a slot for'), task_run_id: UUID = Body(..., description='The ID of the task run releasing the slot'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` Decrement concurrency limits for the given tags. Finds and revokes the lease for V2 limits or decrements V1 active slots. Returns the list of limits that were decremented. 
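The V1 increment/decrement semantics above can be sketched with an in-memory model: each tagged limit tracks the set of task-run IDs holding a slot, acquisition is all-or-nothing across the requested tags, and full tags are reported back so the caller can delay and retry. The `limits` mapping and helper names are illustrative, not Prefect's internal implementation:

```python theme={null}
from uuid import uuid4

# Illustrative model: tag -> {limit, active task-run IDs holding a slot}.
limits = {"database": {"limit": 2, "active_slots": set()}}

def increment(names, task_run_id):
    """Try to acquire a slot on every limited tag; return the full tags."""
    full = [
        n for n in names
        if n in limits and len(limits[n]["active_slots"]) >= limits[n]["limit"]
    ]
    if full:
        return full  # caller should delay and retry (cf. the Delay class below)
    for n in names:
        if n in limits:
            limits[n]["active_slots"].add(task_run_id)
    return []

def decrement(names, task_run_id):
    """Release this task run's slot on each limited tag."""
    for n in names:
        if n in limits:
            limits[n]["active_slots"].discard(task_run_id)

run_a, run_b, run_c = uuid4(), uuid4(), uuid4()
increment(["database"], run_a)                 # first slot acquired
increment(["database"], run_b)                 # second slot acquired
blocked = increment(["database"], run_c)       # limit reached
decrement(["database"], run_a)                 # free a slot
unblocked = increment(["database"], run_c)     # now succeeds
```

Tags without a configured limit are unconstrained, which matches the endpoints only returning limits that actually exist.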
## Classes ### `Abort` ### `Delay` # concurrency_limits_v2 Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-concurrency_limits_v2 # `prefect.server.api.concurrency_limits_v2` ## Functions ### `create_concurrency_limit_v2` ```python theme={null} create_concurrency_limit_v2(concurrency_limit: actions.ConcurrencyLimitV2Create, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.ConcurrencyLimitV2 ``` Create a global concurrency limit. For more information, see [https://docs.prefect.io/v3/how-to-guides/workflows/global-concurrency-limits](https://docs.prefect.io/v3/how-to-guides/workflows/global-concurrency-limits). ### `read_concurrency_limit_v2` ```python theme={null} read_concurrency_limit_v2(id_or_name: Union[UUID, str] = Path(..., description='The ID or name of the concurrency limit', alias='id_or_name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.GlobalConcurrencyLimitResponse ``` ### `read_all_concurrency_limits_v2` ```python theme={null} read_all_concurrency_limits_v2(limit: int = LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.GlobalConcurrencyLimitResponse] ``` ### `update_concurrency_limit_v2` ```python theme={null} update_concurrency_limit_v2(concurrency_limit: actions.ConcurrencyLimitV2Update, id_or_name: Union[UUID, str] = Path(..., description='The ID or name of the concurrency limit', alias='id_or_name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_concurrency_limit_v2` ```python theme={null} delete_concurrency_limit_v2(id_or_name: Union[UUID, str] = Path(..., description='The ID or name of the concurrency limit', alias='id_or_name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `bulk_increment_active_slots` ```python theme={null} bulk_increment_active_slots(slots: int = Body(..., gt=0), names: List[str] =
Body(..., min_items=1), mode: Literal['concurrency', 'rate_limit'] = Body('concurrency'), create_if_missing: Optional[bool] = Body(None, deprecated='Limits must be explicitly created before acquiring concurrency slots.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` ### `bulk_increment_active_slots_with_lease` ```python theme={null} bulk_increment_active_slots_with_lease(slots: int = Body(..., gt=0), names: List[str] = Body(..., min_items=1), mode: Literal['concurrency', 'rate_limit'] = Body('concurrency'), lease_duration: float = Body(300, ge=60, le=60 * 60 * 24, description='The duration of the lease in seconds.'), holder: Optional[ConcurrencyLeaseHolder] = Body(None, description='The holder of the lease with type (flow_run, task_run, or deployment) and id.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> ConcurrencyLimitWithLeaseResponse ``` ### `bulk_decrement_active_slots` ```python theme={null} bulk_decrement_active_slots(slots: int = Body(..., gt=0), names: List[str] = Body(..., min_items=1), occupancy_seconds: Optional[float] = Body(None, gt=0.0), create_if_missing: bool = Body(None, deprecated='Limits must be explicitly created before decrementing active slots.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[MinimalConcurrencyLimitResponse] ``` ### `bulk_decrement_active_slots_with_lease` ```python theme={null} bulk_decrement_active_slots_with_lease(lease_id: UUID = Body(..., description='The ID of the lease corresponding to the concurrency limits to decrement.', embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `renew_concurrency_lease` ```python theme={null} renew_concurrency_lease(lease_id: UUID = Path(..., description='The ID of the lease to renew'), lease_duration: float = Body(300, ge=60, le=60 * 60 * 24, description='The duration of the lease in seconds.', embed=True)) -> None ``` ## Classes ### 
`MinimalConcurrencyLimitResponse` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `ConcurrencyLimitWithLeaseResponse` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # csrf_token Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-csrf_token # `prefect.server.api.csrf_token` ## Functions ### `create_csrf_token` ```python theme={null} create_csrf_token(db: PrefectDBInterface = Depends(provide_database_interface), client: str = Query(..., description='The client to create a CSRF token for')) -> schemas.core.CsrfToken ``` Create or update a CSRF token for a client # dependencies Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-dependencies # `prefect.server.api.dependencies` Utilities for injecting FastAPI dependencies. ## Functions ### `provide_request_api_version` ```python theme={null} provide_request_api_version(x_prefect_api_version: str = Header(None)) -> Version | None ``` ### `LimitBody` ```python theme={null} LimitBody() -> Any ``` A `fastapi.Depends` factory for pulling a `limit: int` parameter from the request body while determining the default from the current settings. 
### `get_created_by` ```python theme={null} get_created_by(prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False), prefect_automation_name: Optional[str] = Header(None, include_in_schema=False)) -> Optional[schemas.core.CreatedBy] ``` A dependency that returns the provenance information to use when creating objects during this API call. ### `get_updated_by` ```python theme={null} get_updated_by(prefect_automation_id: Optional[UUID] = Header(None, include_in_schema=False), prefect_automation_name: Optional[str] = Header(None, include_in_schema=False)) -> Optional[schemas.core.UpdatedBy] ``` A dependency that returns the provenance information to use when updating objects during this API call. ### `is_ephemeral_request` ```python theme={null} is_ephemeral_request(request: Request) -> bool ``` A dependency that returns whether the request is to an ephemeral server. ### `get_prefect_client_version` ```python theme={null} get_prefect_client_version(user_agent: Annotated[Optional[str], Header(include_in_schema=False)] = None) -> Optional[str] ``` Attempts to parse out the Prefect client version from the User-Agent header. ### `docket` ```python theme={null} docket(request: Request) -> Docket_ ``` ## Classes ### `EnforceMinimumAPIVersion` FastAPI Dependency used to check compatibility between the version of the api and a given request. Looks for the header 'X-PREFECT-API-VERSION' in the request and compares it to the api's version. Rejects requests that are lower than the minimum version. # deployments Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-deployments # `prefect.server.api.deployments` Routes for interacting with Deployment objects. 
## Functions ### `create_deployment` ```python theme={null} create_deployment(deployment: schemas.actions.DeploymentCreate, response: Response, worker_lookups: WorkerLookups = Depends(WorkerLookups), created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), updated_by: Optional[schemas.core.UpdatedBy] = Depends(dependencies.get_updated_by), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.DeploymentResponse ``` Creates a new deployment from the provided schema. If a deployment with the same name and flow\_id already exists, the deployment is updated. If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted. For more information, see [https://docs.prefect.io/v3/concepts/deployments](https://docs.prefect.io/v3/concepts/deployments). ### `update_deployment` ```python theme={null} update_deployment(deployment: schemas.actions.DeploymentUpdate, deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_deployment_by_name` ```python theme={null} read_deployment_by_name(flow_name: str = Path(..., description='The name of the flow'), deployment_name: str = Path(..., description='The name of the deployment'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.DeploymentResponse ``` Get a deployment using the name of the flow and the deployment. ### `read_deployment` ```python theme={null} read_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.DeploymentResponse ``` Get a deployment by id. 
### `read_deployments` ```python theme={null} read_deployments(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, sort: schemas.sorting.DeploymentSort = Body(schemas.sorting.DeploymentSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.DeploymentResponse] ``` Query for deployments. ### `paginate_deployments` ```python theme={null} paginate_deployments(limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, sort: schemas.sorting.DeploymentSort = Body(schemas.sorting.DeploymentSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> DeploymentPaginationResponse ``` Pagination query for deployments. ### `get_scheduled_flow_runs_for_deployments` ```python theme={null} get_scheduled_flow_runs_for_deployments(docket: dependencies.Docket, deployment_ids: list[UUID] = Body(default=..., description='The deployment IDs to get scheduled runs for'), scheduled_before: DateTime = Body(None, description='The maximum time to look for scheduled flow runs'), limit: int = dependencies.LimitBody(), db: PrefectDBInterface = Depends(provide_database_interface)) -> list[schemas.responses.FlowRunResponse] ``` Get scheduled runs for a set of deployments.
Used by a runner to poll for work. ### `count_deployments` ```python theme={null} count_deployments(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count deployments. ### `delete_deployment` ```python theme={null} delete_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a deployment by id. ### `bulk_delete_deployments` ```python theme={null} bulk_delete_deployments(deployments: Optional[schemas.filters.DeploymentFilter] = Body(None, description='Filter criteria for deployments to delete'), limit: int = Body(BULK_OPERATION_LIMIT, ge=1, le=BULK_OPERATION_LIMIT, description=f'Maximum number of deployments to delete. Defaults to {BULK_OPERATION_LIMIT}.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> DeploymentBulkDeleteResponse ``` Bulk delete deployments matching the specified filter criteria. Returns the IDs of deployments that were deleted. 
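`paginate_deployments` above takes a 1-based `page` plus a `limit`, while `read_deployments` takes a raw `offset`; the two are related by simple arithmetic. A minimal sketch of the conversion (helper names are ours, not part of the API):

```python theme={null}
def page_to_offset(page: int, limit: int) -> int:
    """Convert a 1-based page number to the equivalent row offset."""
    if page < 1:
        raise ValueError("page is 1-based (the API enforces ge=1)")
    return (page - 1) * limit

def page_count(total_items: int, limit: int) -> int:
    """Number of pages needed to cover total_items rows (at least 1)."""
    return max(1, -(-total_items // limit))  # ceiling division
```

So page 3 with a limit of 20 corresponds to `offset=40` in the offset-based endpoint.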
### `schedule_deployment` ```python theme={null} schedule_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), start_time: datetime.datetime = Body(None, description='The earliest date to schedule'), end_time: datetime.datetime = Body(None, description='The latest date to schedule'), min_time: float = Body(None, description='Runs will be scheduled until at least this long after the `start_time`', json_schema_extra={'format': 'time-delta'}), min_runs: int = Body(None, description='The minimum number of runs to schedule'), max_runs: int = Body(None, description='The maximum number of runs to schedule'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Schedule runs for a deployment. For backfills, provide start/end times in the past. This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected. * Runs will be generated starting on or after the `start_time` * No more than `max_runs` runs will be generated * No runs will be generated after `end_time` is reached * At least `min_runs` runs will be generated * Runs will be generated until at least `start_time + min_time` is reached ### `resume_deployment` ```python theme={null} resume_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Set a deployment schedule to active. Runs will be scheduled immediately. ### `pause_deployment` ```python theme={null} pause_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted. 
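The ordered scheduling rules listed for `schedule_deployment` can be sketched for a simple fixed-interval schedule. This is only an illustration of how the min/max constraints interact; the real endpoint supports arbitrary schedule types, and the helper name is ours:

```python theme={null}
from datetime import datetime, timedelta

def generate_run_times(start, interval, end_time=None,
                       min_runs=1, min_time=timedelta(0), max_runs=None):
    """Generate run times honoring the ordered constraints (sketch)."""
    runs, t = [], start
    while True:
        if max_runs is not None and len(runs) >= max_runs:
            break  # no more than max_runs runs
        if end_time is not None and t > end_time:
            break  # no runs after end_time
        if len(runs) >= min_runs and (t - start) >= min_time:
            break  # both minimums satisfied: stop at the minimum schedule
        runs.append(t)
        t += interval
    return runs

# A backfill: start in the past and ask for at least three hourly runs.
start = datetime(2024, 1, 1)
backfill = generate_run_times(start, timedelta(hours=1), min_runs=3)
```

Note how `max_runs` and `end_time` are hard caps checked before the minimums, matching the stated precedence.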
### `create_flow_run_from_deployment` ```python theme={null} create_flow_run_from_deployment(flow_run: schemas.actions.DeploymentFlowRunCreate, deployment_id: UUID = Path(..., description='The deployment id', alias='id'), created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), db: PrefectDBInterface = Depends(provide_database_interface), worker_lookups: WorkerLookups = Depends(WorkerLookups), response: Response = None) -> schemas.responses.FlowRunResponse ``` Create a flow run from a deployment. Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used. If no state is provided, the flow run will be created in a SCHEDULED state. ### `bulk_create_flow_runs_from_deployment` ```python theme={null} bulk_create_flow_runs_from_deployment(flow_runs: List[schemas.actions.DeploymentFlowRunCreate] = Body(..., description='List of flow run configurations to create'), deployment_id: UUID = Path(..., description='The deployment id', alias='id'), created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), db: PrefectDBInterface = Depends(provide_database_interface), worker_lookups: WorkerLookups = Depends(WorkerLookups)) -> FlowRunBulkCreateResponse ``` Create multiple flow runs from a deployment. Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used. If no state is provided, the flow runs will be created in a SCHEDULED state. ### `work_queue_check_for_deployment` ```python theme={null} work_queue_check_for_deployment(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.WorkQueue] ``` Get list of work-queues that are able to pick up the specified deployment. 
This endpoint is intended to be used by the UI to provide users warnings about deployments that cannot be executed because no work queues will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments. ### `read_deployment_schedules` ```python theme={null} read_deployment_schedules(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.DeploymentSchedule] ``` ### `create_deployment_schedules` ```python theme={null} create_deployment_schedules(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), schedules: List[schemas.actions.DeploymentScheduleCreate] = Body(default=..., description='The schedules to create'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.DeploymentSchedule] ``` ### `update_deployment_schedule` ```python theme={null} update_deployment_schedule(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), schedule_id: UUID = Path(..., description='The schedule id', alias='schedule_id'), schedule: schemas.actions.DeploymentScheduleUpdate = Body(default=..., description='The updated schedule'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_deployment_schedule` ```python theme={null} delete_deployment_schedule(deployment_id: UUID = Path(..., description='The deployment id', alias='id'), schedule_id: UUID = Path(..., description='The schedule id', alias='schedule_id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-events # `prefect.server.api.events` ## Functions ### `create_events` ```python theme={null} create_events(events: List[Event], ephemeral_request: bool = Depends(is_ephemeral_request)) ->
None ``` Record a batch of Events. For more information, see [https://docs.prefect.io/v3/concepts/events](https://docs.prefect.io/v3/concepts/events). ### `stream_events_in` ```python theme={null} stream_events_in(websocket: WebSocket) -> None ``` Open a WebSocket to stream incoming Events ### `stream_workspace_events_out` ```python theme={null} stream_workspace_events_out(websocket: WebSocket) -> None ``` Open a WebSocket to stream Events ### `verified_page_token` ```python theme={null} verified_page_token(page_token: str = Query(..., alias='page-token')) -> str ``` ### `read_events` ```python theme={null} read_events(request: Request, filter: Optional[EventFilter] = Body(None, description='Additional optional filter criteria to narrow down the set of Events'), limit: int = Body(INTERACTIVE_PAGE_SIZE, ge=0, le=INTERACTIVE_PAGE_SIZE, embed=True, description='The number of events to return with each page'), db: PrefectDBInterface = Depends(provide_database_interface)) -> EventPage ``` Queries for Events matching the given filter criteria in the given Account. Returns the first page of results, and the URL to request the next page (if there are more results). ### `read_account_events_page` ```python theme={null} read_account_events_page(request: Request, page_token: str = Depends(verified_page_token), db: PrefectDBInterface = Depends(provide_database_interface)) -> EventPage ``` Returns the next page of Events for a previous query against the given Account, and the URL to request the next page (if there are more results). 
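The page-token flow above — `read_events` returning the first page plus a next-page link, `read_account_events_page` consuming the token for subsequent pages — can be sketched with an in-memory stand-in. Real page tokens are opaque and verified, unlike the plain offsets used here:

```python theme={null}
EVENTS = [f"event-{i}" for i in range(5)]  # pretend stored events
PAGE_SIZE = 2

def query_events(page_token=None):
    """Stand-in for the events query: one page plus a token for the next."""
    start = int(page_token) if page_token else 0
    page = EVENTS[start : start + PAGE_SIZE]
    has_more = start + PAGE_SIZE < len(EVENTS)
    return {"events": page, "next_token": str(start + PAGE_SIZE) if has_more else None}

def read_all_events():
    """Follow next-page tokens until the server stops issuing them."""
    events, token = [], None
    while True:
        page = query_events(token)
        events.extend(page["events"])
        token = page["next_token"]
        if token is None:
            return events

all_events = read_all_events()
```

A `None` next-page link is the termination signal, matching "if there are more results" in the docstrings.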
### `generate_next_page_link` ```python theme={null} generate_next_page_link(request: Request, page_token: Optional[str]) -> Optional[str] ``` ### `count_account_events` ```python theme={null} count_account_events(filter: EventFilter, countable: Countable = Path(...), time_unit: TimeUnit = Body(default=TimeUnit.day), time_interval: float = Body(default=1.0, ge=0.01), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[EventCount] ``` Returns distinct objects and the count of events associated with them. Objects that can be counted include the day the event occurred, the type of event, or the IDs of the resources associated with the event. ### `handle_event_count_request` ```python theme={null} handle_event_count_request(session: AsyncSession, filter: EventFilter, countable: Countable, time_unit: TimeUnit, time_interval: float) -> List[EventCount] ``` # flow_run_states Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-flow_run_states # `prefect.server.api.flow_run_states` Routes for interacting with flow run state objects. ## Functions ### `read_flow_run_state` ```python theme={null} read_flow_run_state(flow_run_state_id: UUID = Path(..., description='The flow run state id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.states.State ``` Get a flow run state by id. For more information, see [https://docs.prefect.io/v3/concepts/flows#final-state-determination](https://docs.prefect.io/v3/concepts/flows#final-state-determination). ### `read_flow_run_states` ```python theme={null} read_flow_run_states(flow_run_id: UUID, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.states.State] ``` Get states associated with a flow run. # flow_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-flow_runs # `prefect.server.api.flow_runs` Routes for interacting with flow run objects. 
## Functions ### `create_flow_run` ```python theme={null} create_flow_run(flow_run: schemas.actions.FlowRunCreate, db: PrefectDBInterface = Depends(provide_database_interface), response: Response = None, created_by: Optional[schemas.core.CreatedBy] = Depends(dependencies.get_created_by), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), worker_lookups: WorkerLookups = Depends(WorkerLookups)) -> schemas.responses.FlowRunResponse ``` Create a flow run. If a flow run with the same flow\_id and idempotency key already exists, the existing flow run will be returned. If no state is provided, the flow run will be created in a PENDING state. For more information, see [https://docs.prefect.io/v3/concepts/flows](https://docs.prefect.io/v3/concepts/flows). ### `update_flow_run` ```python theme={null} update_flow_run(flow_run: schemas.actions.FlowRunUpdate, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates a flow run. ### `count_flow_runs` ```python theme={null} count_flow_runs(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count flow runs.
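The idempotency behavior described for `create_flow_run` — repeated creates with the same flow ID and idempotency key return the existing run instead of creating a duplicate, and the default state is PENDING — can be sketched with an in-memory stand-in (names and record shapes are illustrative):

```python theme={null}
from uuid import uuid4

_runs_by_key: dict = {}  # (flow_id, idempotency_key) -> existing run

def create_flow_run(flow_id, idempotency_key=None, state=None):
    """Create a flow run, returning the existing run on a repeated key."""
    key = (flow_id, idempotency_key)
    if idempotency_key is not None and key in _runs_by_key:
        return _runs_by_key[key]  # same run returned, no duplicate created
    run = {"id": uuid4(), "flow_id": flow_id, "state": state or "PENDING"}
    if idempotency_key is not None:
        _runs_by_key[key] = run
    return run

first = create_flow_run("flow-1", idempotency_key="daily-2024-01-01")
second = create_flow_run("flow-1", idempotency_key="daily-2024-01-01")
```

Without an idempotency key, every call creates a fresh run.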
### `average_flow_run_lateness` ```python theme={null} average_flow_run_lateness(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> Optional[float] ``` Query for average flow-run lateness in seconds. ### `flow_run_history` ```python theme={null} flow_run_history(history_start: DateTime = Body(..., description="The history's start time."), history_end: DateTime = Body(..., description="The history's end time."), history_interval_seconds: float = Body(..., description='The size of each history interval, in seconds. Must be at least 1 second.', json_schema_extra={'format': 'time-delta'}), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.HistoryResponse] ``` Query for flow run history data across a given range and interval. ### `read_flow_run` ```python theme={null} read_flow_run(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.FlowRunResponse ``` Get a flow run by id. 
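How `flow_run_history` buckets results can be sketched under the straightforward interpretation that the window from `history_start` to `history_end` is split into intervals of `history_interval_seconds` and runs are counted per interval. The helper and field names below are ours, not the response schema:

```python theme={null}
from datetime import datetime, timedelta

def history_buckets(start, end, interval_seconds, run_times):
    """Split [start, end) into intervals and count runs in each (sketch)."""
    if interval_seconds < 1:
        raise ValueError("interval must be at least 1 second")
    step = timedelta(seconds=interval_seconds)
    buckets, t = [], start
    while t < end:
        t_next = min(t + step, end)  # final bucket may be truncated
        count = sum(1 for r in run_times if t <= r < t_next)
        buckets.append({"interval_start": t, "interval_end": t_next, "count": count})
        t = t_next
    return buckets

start = datetime(2024, 1, 1)
end = start + timedelta(hours=1)
runs = [start + timedelta(minutes=10), start + timedelta(minutes=40)]
buckets = history_buckets(start, end, 1800, runs)  # two 30-minute buckets
```

The real endpoint additionally groups counts by state within each interval, which this sketch omits.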
### `read_flow_run_graph_v1` ```python theme={null} read_flow_run_graph_v1(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[DependencyResult] ``` Get a task run dependency map for a given flow run. ### `read_flow_run_graph_v2` ```python theme={null} read_flow_run_graph_v2(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), since: datetime.datetime = Query(default=jsonable_encoder(earliest_possible_datetime()), description='Only include runs that start or end after this time.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> Graph ``` Get a graph of the tasks and subflow runs for the given flow run ### `resume_flow_run` ```python theme={null} resume_flow_run(response: Response, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface), run_input: Optional[dict[str, Any]] = Body(default=None, embed=True), flow_policy: type[FlowRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_flow_policy), task_policy: type[TaskRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_task_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> OrchestrationResult ``` Resume a paused flow run. 
### `read_flow_runs` ```python theme={null} read_flow_runs(sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.FlowRunResponse] ``` Query for flow runs. ### `delete_flow_run` ```python theme={null} delete_flow_run(docket: dependencies.Docket, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a flow run by id. ### `delete_flow_run_logs` ```python theme={null} delete_flow_run_logs() -> None ``` ### `bulk_delete_flow_runs` ```python theme={null} bulk_delete_flow_runs(docket: dependencies.Docket, flow_runs: Optional[schemas.filters.FlowRunFilter] = Body(None, description='Filter criteria for flow runs to delete'), limit: int = Body(BULK_OPERATION_LIMIT, ge=1, le=BULK_OPERATION_LIMIT, description=f'Maximum number of flow runs to delete. Defaults to {BULK_OPERATION_LIMIT}.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> FlowRunBulkDeleteResponse ``` Bulk delete flow runs matching the specified filter criteria. Returns the IDs of flow runs that were deleted. 
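`read_flow_runs` pages with `limit` and `offset`; a caller walks the full result set by advancing `offset` by `limit` until a short page comes back. The names below are illustrative, not the client API — `fetch_page` stands in for an actual request.

```python
# Offset-paging sketch for `read_flow_runs`: advance offset by limit
# until a page shorter than limit signals the end of the results.
def fetch_page(items, limit, offset):
    # stand-in for a real read_flow_runs call
    return items[offset:offset + limit]

all_runs = [f"run-{i}" for i in range(7)]
collected, offset, limit = [], 0, 3
while True:
    page = fetch_page(all_runs, limit, offset)
    collected.extend(page)
    if len(page) < limit:
        break
    offset += limit

assert collected == all_runs  # pages of 3, 3, then 1
```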
### `bulk_set_flow_run_state` ```python theme={null} bulk_set_flow_run_state(flow_runs: Optional[schemas.filters.FlowRunFilter] = Body(None, description='Filter criteria for flow runs to update'), state: schemas.actions.StateCreate = Body(..., description='The state to set'), force: bool = Body(False, description='If false, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.'), limit: int = Body(BULK_OPERATION_LIMIT, ge=1, le=BULK_OPERATION_LIMIT, description=f'Maximum number of flow runs to update. Defaults to {BULK_OPERATION_LIMIT}.'), db: PrefectDBInterface = Depends(provide_database_interface), flow_policy: type[FlowRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_flow_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> FlowRunBulkSetStateResponse ``` Bulk set state for flow runs matching the specified filter criteria. Returns the orchestration results for each flow run. ### `set_flow_run_state` ```python theme={null} set_flow_run_state(response: Response, flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), state: schemas.actions.StateCreate = Body(..., description='The intended state.'), force: bool = Body(False, description='If false, orchestration rules will be applied that may alter or prevent the state transition. 
If True, orchestration rules are not applied.'), db: PrefectDBInterface = Depends(provide_database_interface), flow_policy: type[FlowRunOrchestrationPolicy] = Depends(orchestration_dependencies.provide_flow_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_flow_orchestration_parameters), api_version: str = Depends(dependencies.provide_request_api_version), client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> OrchestrationResult ``` Set a flow run state, invoking any orchestration rules. ### `create_flow_run_input` ```python theme={null} create_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), key: str = Body(..., description='The input key'), value: bytes = Body(..., description='The value of the input'), sender: Optional[str] = Body(None, description='The sender of the input'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Create a key/value input for a flow run. 
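The `force` flag on `set_flow_run_state` can be sketched with a toy gate: with `force=False` an orchestration rule may alter or reject the transition, with `force=True` rules are skipped. The "no leaving a terminal state" rule below is a made-up example for illustration, not an actual Prefect orchestration rule.

```python
# Toy model of the documented `force` semantics: rules apply unless
# force=True. The terminal-state rule here is hypothetical.
TERMINAL = {"COMPLETED", "FAILED", "CANCELLED", "CRASHED"}

def set_state(current, proposed, force=False):
    if not force and current in TERMINAL:
        return current           # rule rejects the transition
    return proposed              # accepted (or forced through)

assert set_state("RUNNING", "COMPLETED") == "COMPLETED"
assert set_state("COMPLETED", "RUNNING") == "COMPLETED"            # rejected
assert set_state("COMPLETED", "RUNNING", force=True) == "RUNNING"  # forced
```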
### `filter_flow_run_input` ```python theme={null} filter_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), prefix: str = Body(..., description='The input key prefix', embed=True), limit: int = Body(1, description='The maximum number of results to return', embed=True), exclude_keys: List[str] = Body([], description='Exclude inputs with these keys', embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.FlowRunInput] ``` Filter flow run inputs by key prefix. ### `read_flow_run_input` ```python theme={null} read_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), key: str = Path(..., description='The input key', alias='key'), db: PrefectDBInterface = Depends(provide_database_interface)) -> PlainTextResponse ``` Read the value of a flow run input. ### `delete_flow_run_input` ```python theme={null} delete_flow_run_input(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), key: str = Path(..., description='The input key', alias='key'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a flow run input. ### `paginate_flow_runs` ```python theme={null} paginate_flow_runs(sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.ID_DESC), limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_pool_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> FlowRunPaginationResponse ``` Pagination query for flow runs.
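`paginate_flow_runs` is page-based (`page` starts at 1) while `read_flow_runs` is offset-based. Assuming the conventional mapping between the two, page N at a given limit corresponds to offset (N − 1) × limit:

```python
# Page-to-offset mapping, assuming the conventional convention that
# page 1 corresponds to offset 0.
def page_to_offset(page, limit):
    if page < 1:
        raise ValueError("page numbers start at 1")
    return (page - 1) * limit

assert page_to_offset(1, 50) == 0
assert page_to_offset(3, 50) == 100
```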
### `download_logs` ```python theme={null} download_logs(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> StreamingResponse ``` Download all flow run logs as a CSV file, collecting all logs until there are no more logs to retrieve. ### `update_flow_run_labels` ```python theme={null} update_flow_run_labels(flow_run_id: UUID = Path(..., description='The flow run id', alias='id'), labels: Dict[str, Any] = Body(..., description='The labels to update'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update the labels of a flow run. # flows Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-flows # `prefect.server.api.flows` Routes for interacting with flow objects. ## Functions ### `create_flow` ```python theme={null} create_flow(flow: schemas.actions.FlowCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.Flow ``` Creates a new flow from the provided schema. If a flow with the same name already exists, the existing flow is returned. For more information, see [https://docs.prefect.io/v3/concepts/flows](https://docs.prefect.io/v3/concepts/flows). ### `update_flow` ```python theme={null} update_flow(flow: schemas.actions.FlowUpdate, flow_id: UUID = Path(..., description='The flow id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates a flow. ### `count_flows` ```python theme={null} count_flows(flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, work_pools: schemas.filters.WorkPoolFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` Count flows. 
### `read_flow_by_name` ```python theme={null} read_flow_by_name(name: str = Path(..., description='The name of the flow'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.Flow ``` Get a flow by name. ### `read_flow` ```python theme={null} read_flow(flow_id: UUID = Path(..., description='The flow id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.Flow ``` Get a flow by id. ### `read_flows` ```python theme={null} read_flows(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, work_pools: schemas.filters.WorkPoolFilter = None, sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.Flow] ``` Query for flows. ### `delete_flow` ```python theme={null} delete_flow(flow_id: UUID = Path(..., description='The flow id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a flow by id. ### `bulk_delete_flows` ```python theme={null} bulk_delete_flows(flows: Optional[schemas.filters.FlowFilter] = Body(None, description='Filter criteria for flows to delete'), limit: int = Body(BULK_OPERATION_LIMIT, ge=1, le=BULK_OPERATION_LIMIT, description=f'Maximum number of flows to delete. Defaults to {BULK_OPERATION_LIMIT}.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> FlowBulkDeleteResponse ``` Bulk delete flows matching the specified filter criteria. This also deletes all associated deployments. Returns the IDs of flows that were deleted. 
### `paginate_flows` ```python theme={null} paginate_flows(limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, sort: schemas.sorting.FlowSort = Body(schemas.sorting.FlowSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> FlowPaginationResponse ``` Pagination query for flows. # logs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-logs # `prefect.server.api.logs` Routes for interacting with log objects. ## Functions ### `create_logs` ```python theme={null} create_logs(logs: Sequence[LogCreate], db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Create new logs from the provided schema. For more information, see [https://docs.prefect.io/v3/how-to-guides/workflows/add-logging](https://docs.prefect.io/v3/how-to-guides/workflows/add-logging). ### `read_logs` ```python theme={null} read_logs(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), logs: Optional[LogFilter] = None, sort: LogSort = Body(LogSort.TIMESTAMP_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> Sequence[Log] ``` Query for logs. ### `stream_logs_out` ```python theme={null} stream_logs_out(websocket: WebSocket) -> None ``` Serve a WebSocket to stream live logs # middleware Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-middleware # `prefect.server.api.middleware` ## Classes ### `CsrfMiddleware` Middleware for CSRF protection. This middleware will check for a CSRF token in the headers of any POST, PUT, PATCH, or DELETE request. If the token is not present or does not match the token stored in the database for the client, the request will be rejected with a 403 status code. 
**Methods:** #### `dispatch` ```python theme={null} dispatch(self, request: Request, call_next: NextMiddlewareFunction) -> Response ``` Dispatch method for the middleware. This method will check for the presence of a CSRF token in the headers of the request and compare it to the token stored in the database for the client. If the token is not present or does not match, the request will be rejected with a 403 status code. # root Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-root # `prefect.server.api.root` Contains the `hello` route for testing and healthcheck purposes. ## Functions ### `hello` ```python theme={null} hello() -> str ``` Say hello! ### `perform_readiness_check` ```python theme={null} perform_readiness_check(db: PrefectDBInterface = Depends(provide_database_interface)) -> JSONResponse ``` # run_history Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-run_history # `prefect.server.api.run_history` Utilities for querying flow and task run history. ## Functions ### `run_history` ```python theme={null} run_history(db: PrefectDBInterface, session: sa.orm.Session, run_type: Literal['flow_run', 'task_run'], history_start: DateTime, history_end: DateTime, history_interval: datetime.timedelta, flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_queues: Optional[schemas.filters.WorkQueueFilter] = None) -> list[schemas.responses.HistoryResponse] ``` Produce a history of runs aggregated by interval and state # saved_searches Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-saved_searches # `prefect.server.api.saved_searches` Routes for interacting with saved search objects. 
## Functions ### `create_saved_search` ```python theme={null} create_saved_search(saved_search: schemas.actions.SavedSearchCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.SavedSearch ``` Creates a new saved search from the provided schema. If a saved search with the same name already exists, the saved search's fields are replaced. ### `read_saved_search` ```python theme={null} read_saved_search(saved_search_id: UUID = Path(..., description='The saved search id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.SavedSearch ``` Get a saved search by id. ### `read_saved_searches` ```python theme={null} read_saved_searches(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.SavedSearch] ``` Query for saved searches. ### `delete_saved_search` ```python theme={null} delete_saved_search(saved_search_id: UUID = Path(..., description='The saved search id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a saved search by id. # server Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-server # `prefect.server.api.server` Defines the Prefect REST API FastAPI app. ## Functions ### `validation_exception_handler` ```python theme={null} validation_exception_handler(request: Request, exc: RequestValidationError) -> JSONResponse ``` Provide a detailed message for request validation errors. ### `integrity_exception_handler` ```python theme={null} integrity_exception_handler(request: Request, exc: Exception) -> JSONResponse ``` Capture database integrity errors. 
### `is_client_retryable_exception` ```python theme={null} is_client_retryable_exception(exc: Exception) -> bool ``` ### `replace_placeholder_string_in_files` ```python theme={null} replace_placeholder_string_in_files(directory: str, placeholder: str, replacement: str, allowed_extensions: list[str] | None = None) -> None ``` Recursively loops through all files in the given directory and replaces a placeholder string. ### `copy_directory` ```python theme={null} copy_directory(directory: str, path: str) -> None ``` ### `custom_internal_exception_handler` ```python theme={null} custom_internal_exception_handler(request: Request, exc: Exception) -> JSONResponse ``` Log a detailed exception for internal server errors before returning. Send 503 for errors clients can retry on. ### `prefect_object_not_found_exception_handler` ```python theme={null} prefect_object_not_found_exception_handler(request: Request, exc: ObjectNotFoundError) -> JSONResponse ``` Return 404 status code on object not found exceptions. 
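The behavior documented for `replace_placeholder_string_in_files` — walk a directory, replace a placeholder string in each file, optionally restricted by extension — can be sketched as follows. This is an illustrative reimplementation under those stated assumptions, not the Prefect function itself.

```python
# Minimal sketch of recursive placeholder replacement with an optional
# extension filter, mirroring the documented signature.
import os
import tempfile

def replace_placeholder(directory, placeholder, replacement, allowed_extensions=None):
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if allowed_extensions and not any(name.endswith(e) for e in allowed_extensions):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                text = f.read()
            with open(path, "w") as f:
                f.write(text.replace(placeholder, replacement))

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "index.html")
    with open(p, "w") as f:
        f.write("api: __API_URL__")          # hypothetical placeholder
    replace_placeholder(d, "__API_URL__", "http://127.0.0.1:4200/api", [".html"])
    with open(p) as f:
        assert f.read() == "api: http://127.0.0.1:4200/api"
```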
### `create_api_app` ```python theme={null} create_api_app(dependencies: list[Any] | None = None, health_check_path: str = '/health', version_check_path: str = '/version', fast_api_app_kwargs: dict[str, Any] | None = None, final: bool = False, ignore_cache: bool = False) -> FastAPI ``` Create a FastAPI app that includes the Prefect REST API **Args:** * `dependencies`: a list of global dependencies to add to each Prefect REST API router * `health_check_path`: the health check route path * `fast_api_app_kwargs`: kwargs to pass to the FastAPI constructor * `final`: whether this will be the last instance of the Prefect server to be created in this process, so that additional optimizations may be applied * `ignore_cache`: if set, a new app will be created even if the settings and fast\_api\_app\_kwargs match an existing app in the cache **Returns:** * a FastAPI app that serves the Prefect REST API ### `create_ui_app` ```python theme={null} create_ui_app(ephemeral: bool) -> FastAPI ``` ### `create_app` ```python theme={null} create_app(settings: Optional[prefect.settings.Settings] = None, ephemeral: bool = False, webserver_only: bool = False, final: bool = False, ignore_cache: bool = False) -> FastAPI ``` Create a FastAPI app that includes the Prefect REST API and UI **Args:** * `settings`: The settings to use to create the app. If not set, settings are pulled from the context. * `ephemeral`: If set, the application will be treated as ephemeral. The UI and services will be disabled. * `webserver_only`: If set, the webserver and UI will be available but all background services will be disabled. * `final`: whether this will be the last instance of the Prefect server to be created in this process, so that additional optimizations may be applied * `ignore_cache`: If set, a new application will be created even if the settings match. Otherwise, an application is returned from the cache. 
## Classes ### `SPAStaticFiles` Implementation of `StaticFiles` for serving single page applications. Adds `get_response` handling to ensure that when a resource isn't found the application still returns the index. **Methods:** #### `get_response` ```python theme={null} get_response(self, path: str, scope: Any) -> Response ``` ### `RequestLimitMiddleware` A middleware that limits the number of concurrent requests handled by the API. This is a blunt tool for limiting SQLite concurrent writes which will cause failures at high volume. Ideally, we would only apply the limit to routes that perform writes. ### `SubprocessASGIServer` **Methods:** #### `address` ```python theme={null} address(self) -> str ``` #### `api_url` ```python theme={null} api_url(self) -> str ``` #### `find_available_port` ```python theme={null} find_available_port(self) -> int ``` #### `is_port_available` ```python theme={null} is_port_available(port: int) -> bool ``` #### `start` ```python theme={null} start(self, timeout: Optional[int] = None) -> None ``` Start the server in a separate process. Safe to call multiple times; only starts the server once. **Args:** * `timeout`: The maximum time to wait for the server to start #### `stop` ```python theme={null} stop(self) -> None ``` # task_run_states Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-task_run_states # `prefect.server.api.task_run_states` Routes for interacting with task run state objects. ## Functions ### `read_task_run_state` ```python theme={null} read_task_run_state(task_run_state_id: UUID = Path(..., description='The task run state id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.states.State ``` Get a task run state by id. For more information, see [https://docs.prefect.io/v3/concepts/tasks](https://docs.prefect.io/v3/concepts/tasks). 
### `read_task_run_states` ```python theme={null} read_task_run_states(task_run_id: UUID, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.states.State] ``` Get states associated with a task run. # task_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-task_runs # `prefect.server.api.task_runs` Routes for interacting with task run objects. ## Functions ### `create_task_run` ```python theme={null} create_task_run(task_run: schemas.actions.TaskRunCreate, response: Response, db: PrefectDBInterface = Depends(provide_database_interface), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_task_orchestration_parameters)) -> schemas.core.TaskRun ``` Create a task run. If a task run with the same flow\_run\_id, task\_key, and dynamic\_key already exists, the existing task run will be returned. If no state is provided, the task run will be created in a PENDING state. For more information, see [https://docs.prefect.io/v3/concepts/tasks](https://docs.prefect.io/v3/concepts/tasks). ### `update_task_run` ```python theme={null} update_task_run(task_run: schemas.actions.TaskRunUpdate, task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates a task run. ### `count_task_runs` ```python theme={null} count_task_runs(db: PrefectDBInterface = Depends(provide_database_interface), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None) -> int ``` Count task runs. 
### `task_run_history` ```python theme={null} task_run_history(history_start: DateTime = Body(..., description="The history's start time."), history_end: DateTime = Body(..., description="The history's end time."), history_interval_seconds: float = Body(..., description='The size of each history interval, in seconds. Must be at least 1 second.', json_schema_extra={'format': 'time-delta'}), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.HistoryResponse] ``` Query for task run history data across a given range and interval. ### `read_task_run` ```python theme={null} read_task_run(task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.TaskRun ``` Get a task run by id. ### `read_task_runs` ```python theme={null} read_task_runs(sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC), limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.core.TaskRun] ``` Query for task runs. 
### `paginate_task_runs` ```python theme={null} paginate_task_runs(sort: schemas.sorting.TaskRunSort = Body(schemas.sorting.TaskRunSort.ID_DESC), limit: int = dependencies.LimitBody(), page: int = Body(1, ge=1), flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> TaskRunPaginationResponse ``` Pagination query for task runs. ### `delete_task_run` ```python theme={null} delete_task_run(docket: dependencies.Docket, task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a task run by id. ### `delete_task_run_logs` ```python theme={null} delete_task_run_logs() -> None ``` ### `set_task_run_state` ```python theme={null} set_task_run_state(task_run_id: UUID = Path(..., description='The task run id', alias='id'), state: schemas.actions.StateCreate = Body(..., description='The intended state.'), force: bool = Body(False, description='If false, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.'), db: PrefectDBInterface = Depends(provide_database_interface), response: Response = None, task_policy: TaskRunOrchestrationPolicy = Depends(orchestration_dependencies.provide_task_policy), orchestration_parameters: Dict[str, Any] = Depends(orchestration_dependencies.provide_task_orchestration_parameters)) -> OrchestrationResult ``` Set a task run state, invoking any orchestration rules. 
### `scheduled_task_subscription` ```python theme={null} scheduled_task_subscription(websocket: WebSocket) -> None ``` # task_workers Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-task_workers # `prefect.server.api.task_workers` ## Functions ### `read_task_workers` ```python theme={null} read_task_workers(task_worker_filter: Optional[TaskWorkerFilter] = Body(default=None, description='The task worker filter', embed=True)) -> List[TaskWorkerResponse] ``` Read active task workers. Optionally filter by task keys. For more information, see [https://docs.prefect.io/v3/how-to-guides/workflows/run-background-tasks](https://docs.prefect.io/v3/how-to-guides/workflows/run-background-tasks). ## Classes ### `TaskWorkerFilter` # templates Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-templates # `prefect.server.api.templates` ## Functions ### `validate_template` ```python theme={null} validate_template(template: str = Body(default='')) -> Response ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-ui-__init__ # `prefect.server.api.ui` Routes primarily for use by the UI # flow_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-ui-flow_runs # `prefect.server.api.ui.flow_runs` ## Functions ### `read_flow_run_history` ```python theme={null} read_flow_run_history(sort: schemas.sorting.FlowRunSort = Body(schemas.sorting.FlowRunSort.EXPECTED_START_TIME_DESC), limit: int = Body(1000, le=1000), offset: int = Body(0, ge=0), flows: schemas.filters.FlowFilter = None, flow_runs: schemas.filters.FlowRunFilter = None, task_runs: schemas.filters.TaskRunFilter = None, deployments: schemas.filters.DeploymentFilter = None, work_pools: schemas.filters.WorkPoolFilter = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[SimpleFlowRun] ``` ### `count_task_runs_by_flow_run` ```python theme={null} count_task_runs_by_flow_run(flow_run_ids: list[UUID] = Body(default=..., 
embed=True, max_items=200), db: PrefectDBInterface = Depends(provide_database_interface)) -> dict[UUID, int] ``` Get task run counts by flow run id. ## Classes ### `SimpleFlowRun` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # flows Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-ui-flows # `prefect.server.api.ui.flows` ## Functions ### `count_deployments_by_flow` ```python theme={null} count_deployments_by_flow(flow_ids: List[UUID] = Body(default=..., embed=True, max_items=200), db: PrefectDBInterface = Depends(provide_database_interface)) -> Dict[UUID, int] ``` Get deployment counts by flow id. ### `next_runs_by_flow` ```python theme={null} next_runs_by_flow(flow_ids: List[UUID] = Body(default=..., embed=True, max_items=200), db: PrefectDBInterface = Depends(provide_database_interface)) -> Dict[UUID, Optional[SimpleNextFlowRun]] ``` Get the next flow run by flow id. ## Classes ### `SimpleNextFlowRun` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
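The UI count routes above cap their id lists at 200 entries (`max_items=200`), so a caller holding more ids must split the request into batches. A minimal chunking sketch:

```python
# Split a list of ids into batches no larger than the documented
# max_items=200 cap.
def chunk(ids, size=200):
    return [ids[i:i + size] for i in range(0, len(ids), size)]

batches = chunk(list(range(450)))
assert [len(b) for b in batches] == [200, 200, 50]
assert chunk([]) == []   # empty input -> no requests needed
```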
#### `validate_next_scheduled_start_time` ```python theme={null} validate_next_scheduled_start_time(cls, v: DateTime | datetime) -> DateTime ``` # schemas Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-ui-schemas # `prefect.server.api.ui.schemas` ## Functions ### `validate_obj` ```python theme={null} validate_obj(json_schema: dict[str, Any] = Body(..., embed=True, alias='schema', validation_alias='schema', json_schema_extra={'additionalProperties': True}), values: dict[str, Any] = Body(..., embed=True, json_schema_extra={'additionalProperties': True}), db: PrefectDBInterface = Depends(provide_database_interface)) -> SchemaValuesValidationResponse ``` # task_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-ui-task_runs # `prefect.server.api.ui.task_runs` ## Functions ### `read_dashboard_task_run_counts` ```python theme={null} read_dashboard_task_run_counts(task_runs: schemas.filters.TaskRunFilter, flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, work_pools: Optional[schemas.filters.WorkPoolFilter] = None, work_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[TaskRunCount] ``` ### `read_task_run_counts_by_state` ```python theme={null} read_task_run_counts_by_state(flows: Optional[schemas.filters.FlowFilter] = None, flow_runs: Optional[schemas.filters.FlowRunFilter] = None, task_runs: Optional[schemas.filters.TaskRunFilter] = None, deployments: Optional[schemas.filters.DeploymentFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.states.CountByState ``` ### `read_task_run_with_flow_run_name` ```python theme={null} read_task_run_with_flow_run_name(task_run_id: UUID = Path(..., description='The task run id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> 
schemas.ui.UITaskRun ``` Get a task run by id. ## Classes ### `TaskRunCount` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `ser_model` ```python theme={null} ser_model(self) -> dict[str, int] ``` # validation Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-validation # `prefect.server.api.validation` This module contains functions for validating job variables for deployments, work pools, flow runs, and RunDeployment actions. These functions are used to validate that job variables provided by users conform to the JSON schema defined in the work pool's base job template. Note some important details: 1. The order of applying job variables is: work pool's base job template, deployment, flow run. This means that flow run job variables override deployment job variables, which override work pool job variables. 2. The validation of job variables for work pools and deployments ignores required keys in the schema because we don't know if the full set of overrides will include values for any required fields. 3. Work pools can include default values for job variables. These can be normal types or references to blocks. We have not been validating these values or whether default blocks satisfy job variable JSON schemas. To avoid failing validation for existing (otherwise working) data, we ignore invalid defaults when validating deployment and flow run variables, but not when validating the work pool's base template, e.g. during work pool creation or updates. If we find defaults that are invalid, we have to ignore required fields when we run the full validation. 4. 
A flow run is the terminal point for job variables, so it is the only place where we validate required variables and default values. Thus, `validate_job_variables_for_deployment_flow_run` and `validate_job_variables_for_run_deployment_action` check for required fields. 5. We have been using Pydantic v1 to generate work pool base job templates, and it produces invalid JSON schemas for some fields, e.g. tuples and optional fields. We try to fix these schemas on the fly while validating job variables, but there is a case we can't resolve, which is whether or not an optional field supports a None value. In this case, we allow None values to be passed in, which means that if an optional field does not actually allow None values, the Pydantic model will fail to validate at runtime. ## Functions ### `validate_job_variables_for_deployment_flow_run` ```python theme={null} validate_job_variables_for_deployment_flow_run(session: AsyncSession, deployment: BaseDeployment, flow_run: FlowRunAction) -> None ``` Validate job variables for a flow run created for a deployment. Flow runs are the terminal point for job variable overlays, so we validate required job variables because all variables should now be present. ### `validate_job_variables_for_deployment` ```python theme={null} validate_job_variables_for_deployment(session: AsyncSession, work_pool: WorkPool, deployment: DeploymentAction) -> None ``` Validate job variables for deployment creation and updates. This validation applies only to deployments that have a work pool. If the deployment does not have a work pool, we cannot validate job variables because we don't have a base job template to validate against, so we skip this validation. Unlike validations for flow runs, validation here ignores required keys in the schema because we don't know if the full set of overrides will include values for any required fields. 
If the full set of job variables when a flow is running, including the deployment's and flow run's overrides, fails to specify a value for the required key, that's an error. ### `validate_job_variable_defaults_for_work_pool` ```python theme={null} validate_job_variable_defaults_for_work_pool(session: AsyncSession, work_pool_name: str, base_job_template: Dict[str, Any]) -> None ``` Validate the default job variables for a work pool. This validation checks that default values for job variables match the JSON schema defined in the work pool's base job template. It also resolves references to block documents in the default values and hydrates them to perform the validation. Unlike validations for flow runs, validation here ignores required keys in the schema because we're only concerned with default values. The absence of a default for a required field is not an error, but if the full set of job variables when a flow is running, including the deployment's and flow run's overrides, fails to specify a value for the required key, that's an error. NOTE: This will raise an HTTP 404 error if a referenced block document does not exist. ### `validate_job_variables_for_run_deployment_action` ```python theme={null} validate_job_variables_for_run_deployment_action(session: AsyncSession, run_action: RunDeployment) -> None ``` Validate the job variables for a RunDeployment action. This action is equivalent to creating a flow run for a deployment, so we validate required job variables because all variables should now be present. 
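The layering described above (work pool base template, then deployment, then flow run, with required keys enforced only at the terminal flow-run layer) can be sketched in plain Python. The function and variable names below are illustrative stand-ins, not Prefect internals:

```python
# Illustrative sketch of job-variable layering; `resolve_job_variables`
# and `check_required` are hypothetical names, not Prefect APIs.

def resolve_job_variables(
    base_defaults: dict, deployment_vars: dict, flow_run_vars: dict
) -> dict:
    """Later layers win: flow run overrides deployment, which overrides the base template."""
    return {**base_defaults, **deployment_vars, **flow_run_vars}


def check_required(resolved: dict, required: list[str]) -> dict:
    """Required keys are enforced only at the terminal (flow run) layer."""
    missing = [key for key in required if key not in resolved]
    if missing:
        raise ValueError(f"missing required job variables: {missing}")
    return resolved


base = {"image": "python:3.11-slim", "cpu": 1}
deployment = {"cpu": 2}
flow_run = {"memory": "512Mi"}

resolved = resolve_job_variables(base, deployment, flow_run)
# the deployment's cpu=2 overrides the base template; the flow run adds memory
check_required(resolved, required=["image", "cpu", "memory"])
```

A missing `memory` at the deployment layer is not an error on its own; it only becomes one if no layer has supplied it by the time the flow run is validated.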
# variables Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-variables # `prefect.server.api.variables` Routes for interacting with variable objects ## Functions ### `get_variable_or_404` ```python theme={null} get_variable_or_404(session: AsyncSession, variable_id: UUID) -> orm_models.Variable ``` Returns a variable or raises 404 HTTPException if it does not exist ### `get_variable_by_name_or_404` ```python theme={null} get_variable_by_name_or_404(session: AsyncSession, name: str) -> orm_models.Variable ``` Returns a variable or raises 404 HTTPException if it does not exist ### `create_variable` ```python theme={null} create_variable(variable: actions.VariableCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Variable ``` Create a variable. For more information, see [https://docs.prefect.io/v3/concepts/variables](https://docs.prefect.io/v3/concepts/variables). ### `read_variable` ```python theme={null} read_variable(variable_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Variable ``` ### `read_variable_by_name` ```python theme={null} read_variable_by_name(name: str = Path(...), db: PrefectDBInterface = Depends(provide_database_interface)) -> core.Variable ``` ### `read_variables` ```python theme={null} read_variables(limit: int = LimitBody(), offset: int = Body(0, ge=0), variables: Optional[filters.VariableFilter] = None, sort: sorting.VariableSort = Body(sorting.VariableSort.NAME_ASC), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[core.Variable] ``` ### `count_variables` ```python theme={null} count_variables(variables: Optional[filters.VariableFilter] = Body(None, embed=True), db: PrefectDBInterface = Depends(provide_database_interface)) -> int ``` ### `update_variable` ```python theme={null} update_variable(variable: actions.VariableUpdate, variable_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = 
Depends(provide_database_interface)) -> None ``` ### `update_variable_by_name` ```python theme={null} update_variable_by_name(variable: actions.VariableUpdate, name: str = Path(..., alias='name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_variable` ```python theme={null} delete_variable(variable_id: UUID = Path(..., alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `delete_variable_by_name` ```python theme={null} delete_variable_by_name(name: str = Path(...), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` # work_queues Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-work_queues # `prefect.server.api.work_queues` Routes for interacting with work queue objects. ## Functions ### `create_work_queue` ```python theme={null} create_work_queue(work_queue: schemas.actions.WorkQueueCreate, db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Creates a new work queue. If a work queue with the same name already exists, an error will be raised. For more information, see [https://docs.prefect.io/v3/concepts/work-pools#work-queues](https://docs.prefect.io/v3/concepts/work-pools#work-queues). ### `update_work_queue` ```python theme={null} update_work_queue(work_queue: schemas.actions.WorkQueueUpdate, work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Updates an existing work queue. ### `read_work_queue_by_name` ```python theme={null} read_work_queue_by_name(name: str = Path(..., description='The work queue name'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Get a work queue by name. 
### `read_work_queue` ```python theme={null} read_work_queue(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Get a work queue by id. ### `read_work_queue_runs` ```python theme={null} read_work_queue_runs(docket: dependencies.Docket, work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), limit: int = dependencies.LimitBody(), scheduled_before: DateTime = Body(None, description='Only flow runs scheduled to start before this time will be returned.'), x_prefect_ui: Optional[bool] = Header(default=False, description='A header to indicate this request came from the Prefect UI.'), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.FlowRunResponse] ``` Get flow runs from the work queue. ### `read_work_queues` ```python theme={null} read_work_queues(limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), work_queues: Optional[schemas.filters.WorkQueueFilter] = None, db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkQueueResponse] ``` Query for work queues. ### `delete_work_queue` ```python theme={null} delete_work_queue(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work queue by id. ### `read_work_queue_concurrency_status` ```python theme={null} read_work_queue_concurrency_status(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), page: int = Body(1, ge=1), limit: int = dependencies.LimitBody(), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueConcurrencyStatus ``` Read concurrency status for a work queue, including paginated flow run summaries. active\_slots always reflects the total count. 
### `read_work_queue_status` ```python theme={null} read_work_queue_status(work_queue_id: UUID = Path(..., description='The work queue id', alias='id'), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.core.WorkQueueStatusDetail ``` Get the status of a work queue. # workers Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-api-workers # `prefect.server.api.workers` Routes for interacting with work pool and worker objects. ## Functions ### `create_work_pool` ```python theme={null} create_work_pool(work_pool: schemas.actions.WorkPoolCreate, db: PrefectDBInterface = Depends(provide_database_interface), prefect_client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> schemas.responses.WorkPoolResponse ``` Creates a new work pool. If a work pool with the same name already exists, an error will be raised. For more information, see [https://docs.prefect.io/v3/concepts/work-pools](https://docs.prefect.io/v3/concepts/work-pools). ### `read_work_pool` ```python theme={null} read_work_pool(work_pool_name: str = Path(..., description='The work pool name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface), prefect_client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> schemas.responses.WorkPoolResponse ``` Read a work pool by name ### `read_work_pools` ```python theme={null} read_work_pools(work_pools: Optional[schemas.filters.WorkPoolFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), db: PrefectDBInterface = Depends(provide_database_interface), prefect_client_version: Optional[str] = Depends(dependencies.get_prefect_client_version)) -> List[schemas.responses.WorkPoolResponse] ``` Read multiple work pools ### `count_work_pools` ```python theme={null} count_work_pools(work_pools: Optional[schemas.filters.WorkPoolFilter] = Body(None, embed=True), db: PrefectDBInterface = 
Depends(provide_database_interface)) -> int ``` Count work pools ### `update_work_pool` ```python theme={null} update_work_pool(work_pool: schemas.actions.WorkPoolUpdate, work_pool_name: str = Path(..., description='The work pool name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update a work pool ### `delete_work_pool` ```python theme={null} delete_work_pool(work_pool_name: str = Path(..., description='The work pool name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work pool ### `read_work_pool_concurrency_status` ```python theme={null} read_work_pool_concurrency_status(work_pool_name: str = Path(..., description='The work pool name', alias='name'), page: int = Body(1, ge=1), limit: int = dependencies.LimitBody(), flow_run_limit: int = Body(10, ge=0, le=200, description='Max flow runs per queue'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkPoolConcurrencyStatus ``` Read concurrency status for a work pool, including per-queue breakdown with flow run summaries. Queues are paginated; flow runs per queue are capped by flow\_run\_limit. 
### `get_scheduled_flow_runs` ```python theme={null} get_scheduled_flow_runs(docket: dependencies.Docket, work_pool_name: str = Path(..., description='The work pool name', alias='name'), work_queue_names: List[str] = Body(None, description='The names of work pool queues'), scheduled_before: DateTime = Body(None, description='The maximum time to look for scheduled flow runs'), scheduled_after: DateTime = Body(None, description='The minimum time to look for scheduled flow runs'), limit: int = dependencies.LimitBody(), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkerFlowRunResponse] ``` Load scheduled runs for a worker ### `create_work_queue` ```python theme={null} create_work_queue(work_queue: schemas.actions.WorkQueueCreate, work_pool_name: str = Path(..., description='The work pool name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Creates a new work pool queue. If a work pool queue with the same name already exists, an error will be raised. For more information, see [https://docs.prefect.io/v3/concepts/work-pools#work-queues](https://docs.prefect.io/v3/concepts/work-pools#work-queues). 
### `read_work_queue` ```python theme={null} read_work_queue(work_pool_name: str = Path(..., description='The work pool name'), work_queue_name: str = Path(..., description='The work pool queue name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> schemas.responses.WorkQueueResponse ``` Read a work pool queue ### `read_work_queues` ```python theme={null} read_work_queues(work_pool_name: str = Path(..., description='The work pool name'), work_queues: schemas.filters.WorkQueueFilter = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkQueueResponse] ``` Read all work pool queues ### `update_work_queue` ```python theme={null} update_work_queue(work_queue: schemas.actions.WorkQueueUpdate, work_pool_name: str = Path(..., description='The work pool name'), work_queue_name: str = Path(..., description='The work pool queue name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Update a work pool queue ### `delete_work_queue` ```python theme={null} delete_work_queue(work_pool_name: str = Path(..., description='The work pool name'), work_queue_name: str = Path(..., description='The work pool queue name', alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work pool queue ### `worker_heartbeat` ```python theme={null} worker_heartbeat(work_pool_name: str = Path(..., description='The work pool name'), name: str = Body(..., description='The worker process name', embed=True), heartbeat_interval_seconds: Optional[int] = Body(None, description="The worker's heartbeat interval in seconds", embed=True), worker_lookups: WorkerLookups 
= Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` ### `read_workers` ```python theme={null} read_workers(work_pool_name: str = Path(..., description='The work pool name'), workers: Optional[schemas.filters.WorkerFilter] = None, limit: int = dependencies.LimitBody(), offset: int = Body(0, ge=0), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> List[schemas.responses.WorkerResponse] ``` Read all worker processes ### `delete_worker` ```python theme={null} delete_worker(work_pool_name: str = Path(..., description='The work pool name'), worker_name: str = Path(..., description="The work pool's worker name", alias='name'), worker_lookups: WorkerLookups = Depends(WorkerLookups), db: PrefectDBInterface = Depends(provide_database_interface)) -> None ``` Delete a work pool's worker ## Classes ### `WorkerLookups` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-__init__ # `prefect.server.database` *This module is empty or contains only private/internal implementations.* # alembic_commands Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-alembic_commands # `prefect.server.database.alembic_commands` ## Functions ### `with_alembic_lock` ```python theme={null} with_alembic_lock(fn: Callable[P, R]) -> Callable[P, R] ``` Decorator that prevents alembic commands from running concurrently. This is necessary because alembic uses a global configuration object that is not thread-safe. This issue occurred in [https://github.com/PrefectHQ/prefect-dask/pull/50](https://github.com/PrefectHQ/prefect-dask/pull/50), where dask threads were simultaneously performing alembic upgrades, and causing cryptic `KeyError: 'config'` when `del globals_[attr_name]`. 
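The concurrency guard that `with_alembic_lock` provides can be approximated with a module-level `threading.Lock`. This is a simplified stand-in under that assumption, not Prefect's actual implementation:

```python
import threading
from functools import wraps
from typing import Any, Callable

# One lock shared by every wrapped command, mirroring the idea of
# serializing access to alembic's global (non-thread-safe) config.
_alembic_lock = threading.Lock()

def with_lock(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Serialize calls to `fn` so only one thread runs it at a time."""
    @wraps(fn)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        with _alembic_lock:
            return fn(*args, **kwargs)
    return wrapper

@with_lock
def fake_upgrade(revision: str = "head") -> str:
    # Hypothetical stand-in for a command that mutates shared state
    return f"upgraded to {revision}"

result = fake_upgrade()
```

Using `functools.wraps` preserves the wrapped function's name and docstring, which keeps tracebacks and introspection readable.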
### `alembic_config` ```python theme={null} alembic_config() -> 'Config' ``` ### `alembic_upgrade` ```python theme={null} alembic_upgrade(revision: str = 'head', dry_run: bool = False) -> None ``` Run alembic upgrades on Prefect REST API database **Args:** * `revision`: The revision passed to `alembic upgrade`. Defaults to 'head', applying all pending revisions. * `dry_run`: Show what migrations would be made without applying them. Will emit sql statements to stdout. ### `alembic_downgrade` ```python theme={null} alembic_downgrade(revision: str = '-1', dry_run: bool = False) -> None ``` Run alembic downgrades on Prefect REST API database **Args:** * `revision`: The revision passed to `alembic downgrade`. Defaults to '-1', reverting the most recent revision. * `dry_run`: Show what migrations would be made without applying them. Will emit sql statements to stdout. ### `alembic_revision` ```python theme={null} alembic_revision(message: Optional[str] = None, autogenerate: bool = False, **kwargs: Any) -> None ``` Create a new revision file for the database. **Args:** * `message`: string message to apply to the revision. * `autogenerate`: whether or not to autogenerate the script from the database. ### `alembic_stamp` ```python theme={null} alembic_stamp(revision: Union[str, list[str], tuple[str, ...]]) -> None ``` Stamp the revision table with the given revision; don't run any migrations **Args:** * `revision`: The revision passed to `alembic stamp`. # configurations Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-configurations # `prefect.server.database.configurations` ## Classes ### `ConnectionTracker` A test utility which tracks the connections given out by a connection pool, to make it easy to see which connections are currently checked out and open. 
**Methods:** #### `clear` ```python theme={null} clear(self) -> None ``` #### `on_close` ```python theme={null} on_close(self, adapted_connection: AdaptedConnection, connection_record: ConnectionPoolEntry) -> None ``` #### `on_close_detached` ```python theme={null} on_close_detached(self, adapted_connection: AdaptedConnection) -> None ``` #### `on_connect` ```python theme={null} on_connect(self, adapted_connection: AdaptedConnection, connection_record: ConnectionPoolEntry) -> None ``` #### `track_pool` ```python theme={null} track_pool(self, pool: sa.pool.Pool) -> None ``` ### `BaseDatabaseConfiguration` Abstract base class used to inject database connection configuration into Prefect. This configuration is responsible for defining how Prefect REST API creates and manages database connections and sessions. **Methods:** #### `begin_transaction` ```python theme={null} begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AbstractAsyncContextManager[AsyncSessionTransaction] ``` Enter a transaction for a session #### `create_db` ```python theme={null} create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `drop_db` ```python theme={null} drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `engine` ```python theme={null} engine(self) -> AsyncEngine ``` Returns a SqlAlchemy engine #### `is_inmemory` ```python theme={null} is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `session` ```python theme={null} session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
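The contract above can be illustrated with a stripped-down sketch: an abstract base declares the dialect-agnostic surface, subclasses fill in dialect specifics, and `unique_key` drives interface caching. Class and method names here are simplified stand-ins, not Prefect's actual classes:

```python
from abc import ABC, abstractmethod

class DatabaseConfigSketch(ABC):
    """Hypothetical, simplified stand-in for a dialect-specific configuration."""

    def __init__(self, connection_url: str) -> None:
        self.connection_url = connection_url

    @abstractmethod
    def is_inmemory(self) -> bool:
        """Return True if the database runs in memory."""

    def unique_key(self) -> tuple:
        # Configurations with the same key can share one cached DB interface
        return (type(self).__name__, self.connection_url)

class SqliteConfigSketch(DatabaseConfigSketch):
    def is_inmemory(self) -> bool:
        return ":memory:" in self.connection_url

config = SqliteConfigSketch("sqlite+aiosqlite:///:memory:")
```

Keying the cache on both the concrete class and the connection URL means a Postgres and a SQLite configuration pointing at different databases never collide.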
### `AsyncPostgresConfiguration` **Methods:** #### `begin_transaction` ```python theme={null} begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AsyncGenerator[AsyncSessionTransaction, None] ``` #### `begin_transaction` ```python theme={null} begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AbstractAsyncContextManager[AsyncSessionTransaction] ``` Enter a transaction for a session #### `create_db` ```python theme={null} create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `create_db` ```python theme={null} create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `drop_db` ```python theme={null} drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `drop_db` ```python theme={null} drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `engine` ```python theme={null} engine(self) -> AsyncEngine ``` Retrieves an async SQLAlchemy engine. **Args:** * `connection_url`: The database connection string. Defaults to self.connection\_url * `echo`: Whether to echo SQL sent to the database. Defaults to self.echo * `timeout`: The database statement timeout, in seconds. Defaults to self.timeout **Returns:** * a SQLAlchemy engine #### `engine` ```python theme={null} engine(self) -> AsyncEngine ``` Returns a SqlAlchemy engine #### `is_inmemory` ```python theme={null} is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `is_inmemory` ```python theme={null} is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `schedule_engine_disposal` ```python theme={null} schedule_engine_disposal(self, cache_key: _EngineCacheKey) -> None ``` Dispose of an engine once the event loop is closing. See caveats at `add_event_loop_shutdown_callback`. 
We attempted to lazily clean up old engines when new engines are created, but if the loop the engine is attached to is already closed then the connections cannot be cleaned up properly and warnings are displayed. Engine disposal should only be important when running the application ephemerally. Notably, this is an issue in our tests where many short-lived event loops and engines are created which can consume all of the available database connection slots. Users operating at a scale where connection limits are encountered should be encouraged to use a standalone server. #### `session` ```python theme={null} session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. **Args:** * `engine`: a sqlalchemy engine #### `session` ```python theme={null} session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
### `AioSqliteConfiguration` **Methods:** #### `begin_sqlite_conn` ```python theme={null} begin_sqlite_conn(self, conn: aiosqlite.AsyncAdapt_aiosqlite_connection) -> None ``` #### `begin_sqlite_stmt` ```python theme={null} begin_sqlite_stmt(self, conn: sa.Connection) -> None ``` #### `begin_transaction` ```python theme={null} begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AsyncGenerator[AsyncSessionTransaction, None] ``` #### `begin_transaction` ```python theme={null} begin_transaction(self, session: AsyncSession, with_for_update: bool = False) -> AbstractAsyncContextManager[AsyncSessionTransaction] ``` Enter a transaction for a session #### `create_db` ```python theme={null} create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `create_db` ```python theme={null} create_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Create the database #### `drop_db` ```python theme={null} drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `drop_db` ```python theme={null} drop_db(self, connection: AsyncConnection, base_metadata: sa.MetaData) -> None ``` Drop the database #### `engine` ```python theme={null} engine(self) -> AsyncEngine ``` Retrieves an async SQLAlchemy engine. **Args:** * `connection_url`: The database connection string. Defaults to self.connection\_url * `echo`: Whether to echo SQL sent to the database. Defaults to self.echo * `timeout`: The database statement timeout, in seconds. 
Defaults to self.timeout **Returns:** * a SQLAlchemy engine #### `engine` ```python theme={null} engine(self) -> AsyncEngine ``` Returns a SqlAlchemy engine #### `is_inmemory` ```python theme={null} is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `is_inmemory` ```python theme={null} is_inmemory(self) -> bool ``` Returns true if database is run in memory #### `schedule_engine_disposal` ```python theme={null} schedule_engine_disposal(self, cache_key: _EngineCacheKey) -> None ``` Dispose of an engine once the event loop is closing. See caveats at `add_event_loop_shutdown_callback`. We attempted to lazily clean up old engines when new engines are created, but if the loop the engine is attached to is already closed then the connections cannot be cleaned up properly and warnings are displayed. Engine disposal should only be important when running the application ephemerally. Notably, this is an issue in our tests where many short-lived event loops and engines are created which can consume all of the available database connection slots. Users operating at a scale where connection limits are encountered should be encouraged to use a standalone server. #### `session` ```python theme={null} session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. **Args:** * `engine`: a sqlalchemy engine #### `session` ```python theme={null} session(self, engine: AsyncEngine) -> AsyncSession ``` Retrieves a SQLAlchemy session for an engine. #### `setup_sqlite` ```python theme={null} setup_sqlite(self, conn: DBAPIConnection, record: ConnectionPoolEntry) -> None ``` Issue PRAGMA statements to SQLITE on connect. PRAGMAs only last for the duration of the connection. See [https://www.sqlite.org/pragma.html](https://www.sqlite.org/pragma.html) for more info. #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
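The `setup_sqlite` hook above relies on PRAGMAs being scoped to a single connection, which is why they must be re-issued on every connect. A stdlib `sqlite3` sketch of the same idea (Prefect's actual hook runs inside a SQLAlchemy connection-pool event, which is assumed away here):

```python
import sqlite3

def setup_sqlite(conn: sqlite3.Connection) -> None:
    # PRAGMAs last only for this connection's lifetime.
    conn.execute("PRAGMA foreign_keys = ON")

conn = sqlite3.connect(":memory:")
setup_sqlite(conn)
foreign_keys_enabled = conn.execute("PRAGMA foreign_keys").fetchone()[0]

# A brand-new connection starts from the default again (foreign_keys off),
# demonstrating why the hook must run on every connect.
fresh = sqlite3.connect(":memory:")
default_setting = fresh.execute("PRAGMA foreign_keys").fetchone()[0]
```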
# dependencies Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-dependencies # `prefect.server.database.dependencies` Injected database interface dependencies ## Functions ### `provide_database_interface` ```python theme={null} provide_database_interface() -> PrefectDBInterface ``` Get the current Prefect REST API database interface. If components of the interface are not set, defaults will be inferred based on the dialect of the connection URL. ### `inject_db` ```python theme={null} inject_db(fn: Callable[P, R]) -> Callable[P, R] ``` Decorator that provides a database interface to a function. The decorated function *must* take a `db` kwarg and if a db is passed when called it will be used instead of creating a new one. ### `db_injector` ```python theme={null} db_injector(func: Union[_DBMethod[T, P, R], _DBFunction[P, R]]) -> Union[_Method[T, P, R], _Function[P, R]] ``` Decorator to inject a PrefectDBInterface instance as the first positional argument to the decorated function. Unlike `inject_db`, which injects the database connection as a keyword argument, `db_injector` adds it explicitly as the first positional argument. This change enhances type hinting by making the dependency on PrefectDBInterface explicit in the function signature. When decorating a coroutine function, the result will continue to pass the iscoroutinefunction() test. **Args:** * `func`: The function or method to decorate. **Returns:** * A wrapped descriptor object which injects the PrefectDBInterface instance * as the first argument to the function or method. This handles method * binding transparently. ### `temporary_database_config` ```python theme={null} temporary_database_config(tmp_database_config: Optional[BaseDatabaseConfiguration]) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API database configuration. When the context is closed, the existing database configuration will be restored. 
**Args:** * `tmp_database_config`: Prefect REST API database configuration to inject. ### `temporary_query_components` ```python theme={null} temporary_query_components(tmp_queries: Optional['BaseQueryComponents']) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API database query components. When the context is closed, the existing query components will be restored. **Args:** * `tmp_queries`: Prefect REST API query components to inject. ### `temporary_orm_config` ```python theme={null} temporary_orm_config(tmp_orm_config: Optional['BaseORMConfiguration']) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API ORM configuration. When the context is closed, the existing orm configuration will be restored. **Args:** * `tmp_orm_config`: Prefect REST API ORM configuration to inject. ### `temporary_interface_class` ```python theme={null} temporary_interface_class(tmp_interface_class: Optional[type['PrefectDBInterface']]) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API interface class When the context is closed, the existing interface will be restored. **Args:** * `tmp_interface_class`: Prefect REST API interface class to inject. ### `temporary_database_interface` ```python theme={null} temporary_database_interface(tmp_database_config: Optional[BaseDatabaseConfiguration] = None, tmp_queries: Optional['BaseQueryComponents'] = None, tmp_orm_config: Optional['BaseORMConfiguration'] = None, tmp_interface_class: Optional[type['PrefectDBInterface']] = None) -> Generator[None, object, None] ``` Temporarily override the Prefect REST API database interface. Any interface components that are not explicitly provided will be cleared and inferred from the Prefect REST API database connection string dialect. When the context is closed, the existing database interface will be restored. **Args:** * `tmp_database_config`: An optional Prefect REST API database configuration to inject. 
* `tmp_orm_config`: An optional Prefect REST API ORM configuration to inject. * `tmp_queries`: Optional Prefect REST API query components to inject. * `tmp_interface_class`: Optional database interface class to inject. ### `set_database_config` ```python theme={null} set_database_config(database_config: Optional[BaseDatabaseConfiguration]) -> None ``` Set Prefect REST API database configuration. ### `set_query_components` ```python theme={null} set_query_components(query_components: Optional['BaseQueryComponents']) -> None ``` Set Prefect REST API query components. ### `set_orm_config` ```python theme={null} set_orm_config(orm_config: Optional['BaseORMConfiguration']) -> None ``` Set Prefect REST API ORM configuration. ### `set_interface_class` ```python theme={null} set_interface_class(interface_class: Optional[type['PrefectDBInterface']]) -> None ``` Set Prefect REST API interface class. ## Classes ### `DBInjector` # interface Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-interface # `prefect.server.database.interface` ## Classes ### `DBSingleton` Ensures that only one database interface is created per unique key ### `PrefectDBInterface` An interface for backend-specific SQLAlchemy actions and ORM models. The REST API can be configured to run against different databases in order to maintain performance at different scales. This interface integrates database- and dialect-specific configuration into a unified interface that the orchestration engine runs against.
**Methods:** #### `Agent` ```python theme={null} Agent(self) -> type[orm_models.Agent] ``` An agent model #### `Artifact` ```python theme={null} Artifact(self) -> type[orm_models.Artifact] ``` An artifact orm model #### `ArtifactCollection` ```python theme={null} ArtifactCollection(self) -> type[orm_models.ArtifactCollection] ``` An artifact collection orm model #### `Automation` ```python theme={null} Automation(self) -> type[orm_models.Automation] ``` An automation model #### `AutomationBucket` ```python theme={null} AutomationBucket(self) -> type[orm_models.AutomationBucket] ``` An automation bucket model #### `AutomationEventFollower` ```python theme={null} AutomationEventFollower(self) -> type[orm_models.AutomationEventFollower] ``` A model capturing one event following another event #### `AutomationRelatedResource` ```python theme={null} AutomationRelatedResource(self) -> type[orm_models.AutomationRelatedResource] ``` An automation related resource model #### `Base` ```python theme={null} Base(self) -> type[orm_models.Base] ``` Base class for orm models #### `BlockDocument` ```python theme={null} BlockDocument(self) -> type[orm_models.BlockDocument] ``` A block document model #### `BlockDocumentReference` ```python theme={null} BlockDocumentReference(self) -> type[orm_models.BlockDocumentReference] ``` A block document reference model #### `BlockSchema` ```python theme={null} BlockSchema(self) -> type[orm_models.BlockSchema] ``` A block schema model #### `BlockSchemaReference` ```python theme={null} BlockSchemaReference(self) -> type[orm_models.BlockSchemaReference] ``` A block schema reference model #### `BlockType` ```python theme={null} BlockType(self) -> type[orm_models.BlockType] ``` A block type model #### `CompositeTriggerChildFiring` ```python theme={null} CompositeTriggerChildFiring(self) -> type[orm_models.CompositeTriggerChildFiring] ``` A model capturing a composite trigger's child firing #### `ConcurrencyLimit` ```python theme={null} 
ConcurrencyLimit(self) -> type[orm_models.ConcurrencyLimit] ``` A concurrency model #### `ConcurrencyLimitV2` ```python theme={null} ConcurrencyLimitV2(self) -> type[orm_models.ConcurrencyLimitV2] ``` A v2 concurrency model #### `Configuration` ```python theme={null} Configuration(self) -> type[orm_models.Configuration] ``` A configuration model #### `CsrfToken` ```python theme={null} CsrfToken(self) -> type[orm_models.CsrfToken] ``` A CSRF token model #### `Deployment` ```python theme={null} Deployment(self) -> type[orm_models.Deployment] ``` A deployment orm model #### `DeploymentSchedule` ```python theme={null} DeploymentSchedule(self) -> type[orm_models.DeploymentSchedule] ``` A deployment schedule orm model #### `Event` ```python theme={null} Event(self) -> type[orm_models.Event] ``` An event model #### `EventResource` ```python theme={null} EventResource(self) -> type[orm_models.EventResource] ``` An event resource model #### `Flow` ```python theme={null} Flow(self) -> type[orm_models.Flow] ``` A flow orm model #### `FlowRun` ```python theme={null} FlowRun(self) -> type[orm_models.FlowRun] ``` A flow run orm model #### `FlowRunInput` ```python theme={null} FlowRunInput(self) -> type[orm_models.FlowRunInput] ``` A flow run input model #### `FlowRunState` ```python theme={null} FlowRunState(self) -> type[orm_models.FlowRunState] ``` A flow run state orm model #### `Log` ```python theme={null} Log(self) -> type[orm_models.Log] ``` A log orm model #### `SavedSearch` ```python theme={null} SavedSearch(self) -> type[orm_models.SavedSearch] ``` A saved search orm model #### `TaskRun` ```python theme={null} TaskRun(self) -> type[orm_models.TaskRun] ``` A task run orm model #### `TaskRunState` ```python theme={null} TaskRunState(self) -> type[orm_models.TaskRunState] ``` A task run state orm model #### `TaskRunStateCache` ```python theme={null} TaskRunStateCache(self) -> type[orm_models.TaskRunStateCache] ``` A task run state cache orm model #### `Variable` ```python
theme={null} Variable(self) -> type[orm_models.Variable] ``` A variable model #### `WorkPool` ```python theme={null} WorkPool(self) -> type[orm_models.WorkPool] ``` A work pool orm model #### `WorkQueue` ```python theme={null} WorkQueue(self) -> type[orm_models.WorkQueue] ``` A work queue model #### `Worker` ```python theme={null} Worker(self) -> type[orm_models.Worker] ``` A worker process orm model #### `create_db` ```python theme={null} create_db(self) -> None ``` Create the database #### `dialect` ```python theme={null} dialect(self) -> type[sa.engine.Dialect] ``` #### `drop_db` ```python theme={null} drop_db(self) -> None ``` Drop the database by removing all tables directly. This reflects the actual database schema and drops every table rather than running all Alembic downgrade migrations in reverse. Running downgrades is fragile because individual migration downgrade steps may fail on real-world data (e.g. re-adding a foreign key constraint when orphaned references exist). Dropping tables directly is both faster and more robust. Reflection is used instead of `Base.metadata.drop_all()` so that tables created by migrations but not tracked in the ORM (e.g. `deployment_version`, `alembic_version`) are also removed. #### `engine` ```python theme={null} engine(self) -> AsyncEngine ``` Provides a SQLAlchemy engine against a specific database. #### `is_db_connectable` ```python theme={null} is_db_connectable(self) -> bool ``` Returns a boolean indicating if the database is connectable. This method is used to determine if the server is ready to accept requests. #### `run_migrations_downgrade` ```python theme={null} run_migrations_downgrade(self, revision: str = '-1') -> None ``` Run all downgrade migrations #### `run_migrations_upgrade` ```python theme={null} run_migrations_upgrade(self) -> None ``` Run all upgrade migrations #### `session` ```python theme={null} session(self) -> AsyncSession ``` Provides a SQLAlchemy session.
#### `session_context` ```python theme={null} session_context(self, begin_transaction: bool = False, with_for_update: bool = False) ``` Provides a SQLAlchemy session and a context manager for opening/closing the underlying connection. **Args:** * `begin_transaction`: if True, the context manager will begin a SQL transaction. Exiting the context manager will COMMIT or ROLLBACK any changes. # orm_models Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-orm_models # `prefect.server.database.orm_models` ## Classes ### `Base` Base SQLAlchemy model that automatically infers the table name and provides ID, created, and updated columns ### `Flow` SQLAlchemy model of a flow. ### `FlowRunState` SQLAlchemy model of a flow run state. **Methods:** #### `as_state` ```python theme={null} as_state(self) -> schemas.states.State ``` #### `data` ```python theme={null} data(self) -> Optional[Any] ``` ### `TaskRunState` SQLAlchemy model of a task run state. **Methods:** #### `as_state` ```python theme={null} as_state(self) -> schemas.states.State ``` #### `data` ```python theme={null} data(self) -> Optional[Any] ``` ### `Artifact` SQLAlchemy model of artifacts. ### `ArtifactCollection` ### `TaskRunStateCache` SQLAlchemy model of a task run state cache. ### `Run` Common columns and logic for FlowRun and TaskRun models **Methods:** #### `estimated_run_time` ```python theme={null} estimated_run_time(self) -> datetime.timedelta ``` Total run time is incremented in the database whenever a RUNNING state is exited. To give up-to-date estimates, we estimate incremental run time for any runs currently in a RUNNING state. #### `estimated_start_time_delta` ```python theme={null} estimated_start_time_delta(self) -> datetime.timedelta ``` The delta to the expected start time (or "lateness") is computed as the difference between the actual start time and expected start time.
To give up-to-date estimates, we estimate lateness for any runs that don't have a start time and are not in a final state and were expected to start already. ### `FlowRun` SQLAlchemy model of a flow run. **Methods:** #### `estimated_run_time` ```python theme={null} estimated_run_time(self) -> datetime.timedelta ``` Total run time is incremented in the database whenever a RUNNING state is exited. To give up-to-date estimates, we estimate incremental run time for any runs currently in a RUNNING state. #### `estimated_start_time_delta` ```python theme={null} estimated_start_time_delta(self) -> datetime.timedelta ``` The delta to the expected start time (or "lateness") is computed as the difference between the actual start time and expected start time. To give up-to-date estimates, we estimate lateness for any runs that don't have a start time and are not in a final state and were expected to start already. #### `set_state` ```python theme={null} set_state(self, state: Optional[FlowRunState]) -> None ``` If a state is assigned to this run, populate its run id. This would normally be handled by the back-populated SQLAlchemy relationship, but because this is a one-to-one pointer to a one-to-many relationship, SQLAlchemy can't figure it out. #### `state` ```python theme={null} state(self) -> Optional[FlowRunState] ``` ### `TaskRun` SQLAlchemy model of a task run. **Methods:** #### `estimated_run_time` ```python theme={null} estimated_run_time(self) -> datetime.timedelta ``` Total run time is incremented in the database whenever a RUNNING state is exited. To give up-to-date estimates, we estimate incremental run time for any runs currently in a RUNNING state. #### `estimated_start_time_delta` ```python theme={null} estimated_start_time_delta(self) -> datetime.timedelta ``` The delta to the expected start time (or "lateness") is computed as the difference between the actual start time and expected start time. 
To give up-to-date estimates, we estimate lateness for any runs that don't have a start time and are not in a final state and were expected to start already. #### `set_state` ```python theme={null} set_state(self, state: Optional[TaskRunState]) -> None ``` If a state is assigned to this run, populate its run id. This would normally be handled by the back-populated SQLAlchemy relationship, but because this is a one-to-one pointer to a one-to-many relationship, SQLAlchemy can't figure it out. #### `state` ```python theme={null} state(self) -> Optional[TaskRunState] ``` ### `DeploymentSchedule` ### `Deployment` SQLAlchemy model of a deployment. **Methods:** #### `job_variables` ```python theme={null} job_variables(self) -> Mapped[dict[str, Any]] ``` ### `Log` SQLAlchemy model of a logging statement. ### `ConcurrencyLimit` ### `ConcurrencyLimitV2` ### `BlockType` ### `BlockSchema` ### `BlockSchemaReference` ### `BlockDocument` **Methods:** #### `decrypt_data` ```python theme={null} decrypt_data(self, session: AsyncSession) -> dict[str, Any] ``` Retrieve decrypted data from the ORM model. Note: will only succeed if the caller has sufficient permission. #### `encrypt_data` ```python theme={null} encrypt_data(self, session: AsyncSession, data: dict[str, Any]) -> None ``` Store encrypted data on the ORM model. Note: will only succeed if the caller has sufficient permission. ### `BlockDocumentReference` ### `Configuration` ### `SavedSearch` SQLAlchemy model of a saved search.
### `WorkQueue` SQLAlchemy model of a work queue ### `WorkPool` SQLAlchemy model of a work pool ### `Worker` SQLAlchemy model of a worker ### `Agent` SQLAlchemy model of an agent ### `Variable` ### `FlowRunInput` ### `CsrfToken` ### `Automation` **Methods:** #### `sort_expression` ```python theme={null} sort_expression(cls, value: AutomationSort) -> sa.ColumnExpressionArgument[Any] ``` Return an expression used to sort Automations ### `AutomationBucket` ### `AutomationRelatedResource` ### `CompositeTriggerChildFiring` ### `AutomationEventFollower` ### `Event` ### `EventResource` ### `BaseORMConfiguration` Abstract base class used to inject database-specific ORM configuration into Prefect. Modifications to core Prefect REST API data structures can have unintended consequences. Use with caution. **Methods:** #### `artifact_collection_unique_upsert_columns` ```python theme={null} artifact_collection_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting an ArtifactCollection #### `block_document_unique_upsert_columns` ```python theme={null} block_document_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockDocument #### `block_schema_unique_upsert_columns` ```python theme={null} block_schema_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockSchema #### `block_type_unique_upsert_columns` ```python theme={null} block_type_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockType #### `concurrency_limit_unique_upsert_columns` ```python theme={null} concurrency_limit_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a ConcurrencyLimit #### `deployment_unique_upsert_columns` ```python theme={null} deployment_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Deployment #### `flow_run_unique_upsert_columns` ```python theme={null} flow_run_unique_upsert_columns(self) -> _UpsertColumns ```
Unique columns for upserting a FlowRun #### `flow_unique_upsert_columns` ```python theme={null} flow_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Flow #### `saved_search_unique_upsert_columns` ```python theme={null} saved_search_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a SavedSearch #### `task_run_unique_upsert_columns` ```python theme={null} task_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a TaskRun #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `versions_dir` ```python theme={null} versions_dir(self) -> Path ``` Directory containing migrations ### `AsyncPostgresORMConfiguration` Postgres specific orm configuration **Methods:** #### `artifact_collection_unique_upsert_columns` ```python theme={null} artifact_collection_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting an ArtifactCollection #### `block_document_unique_upsert_columns` ```python theme={null} block_document_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockDocument #### `block_schema_unique_upsert_columns` ```python theme={null} block_schema_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockSchema #### `block_type_unique_upsert_columns` ```python theme={null} block_type_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockType #### `concurrency_limit_unique_upsert_columns` ```python theme={null} concurrency_limit_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a ConcurrencyLimit #### `deployment_unique_upsert_columns` ```python theme={null} deployment_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Deployment #### `flow_run_unique_upsert_columns` ```python theme={null} 
flow_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a FlowRun #### `flow_unique_upsert_columns` ```python theme={null} flow_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Flow #### `saved_search_unique_upsert_columns` ```python theme={null} saved_search_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a SavedSearch #### `task_run_unique_upsert_columns` ```python theme={null} task_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a TaskRun #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `versions_dir` ```python theme={null} versions_dir(self) -> Path ``` Directory containing migrations ### `AioSqliteORMConfiguration` SQLite specific orm configuration **Methods:** #### `artifact_collection_unique_upsert_columns` ```python theme={null} artifact_collection_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting an ArtifactCollection #### `block_document_unique_upsert_columns` ```python theme={null} block_document_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockDocument #### `block_schema_unique_upsert_columns` ```python theme={null} block_schema_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockSchema #### `block_type_unique_upsert_columns` ```python theme={null} block_type_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a BlockType #### `concurrency_limit_unique_upsert_columns` ```python theme={null} concurrency_limit_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a ConcurrencyLimit #### `deployment_unique_upsert_columns` ```python theme={null}
deployment_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Deployment #### `flow_run_unique_upsert_columns` ```python theme={null} flow_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a FlowRun #### `flow_unique_upsert_columns` ```python theme={null} flow_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a Flow #### `saved_search_unique_upsert_columns` ```python theme={null} saved_search_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a SavedSearch #### `task_run_unique_upsert_columns` ```python theme={null} task_run_unique_upsert_columns(self) -> _UpsertColumns ``` Unique columns for upserting a TaskRun #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `versions_dir` ```python theme={null} versions_dir(self) -> Path ``` Directory containing migrations # query_components Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-database-query_components # `prefect.server.database.query_components` ## Classes ### `FlowRunGraphV2Node` ### `BaseQueryComponents` Abstract base class used to inject dialect-specific SQL operations into Prefect. **Methods:** #### `build_json_object` ```python theme={null} build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` builds a JSON object from sequential key-value pairs #### `cast_to_json` ```python theme={null} cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` casts to JSON object if necessary #### `clear_configuration_value_cache_for_key` ```python theme={null} clear_configuration_value_cache_for_key(self, key: str) -> None ``` Removes a configuration key from the cache.
#### `flow_run_graph_v2` ```python theme={null} flow_run_graph_v2(self, db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: DateTime, max_nodes: int, max_artifacts: int) -> Graph ``` Returns the query that selects all of the nodes and edges for a flow run graph (version 2). #### `get_scheduled_flow_runs_from_work_pool` ```python theme={null} get_scheduled_flow_runs_from_work_pool(self, db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, worker_limit: Optional[int] = None, queue_limit: Optional[int] = None, work_pool_ids: Optional[list[UUID]] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None, scheduled_after: Optional[DateTime] = None, respect_queue_priorities: bool = False) -> list[schemas.responses.WorkerFlowRunResponse] ``` #### `get_scheduled_flow_runs_from_work_queues` ```python theme={null} get_scheduled_flow_runs_from_work_queues(self, db: PrefectDBInterface, limit_per_queue: Optional[int] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None) -> sa.Select[tuple[orm_models.FlowRun, UUID]] ``` Returns all scheduled runs in work queues, subject to provided parameters. This query returns a `(orm_models.FlowRun, orm_models.WorkQueue.id)` pair; calling `result.all()` will return both; calling `result.scalars().unique().all()` will return only the flow run because it grabs the first result. 
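The row-shape caveat in `get_scheduled_flow_runs_from_work_queues` above can be illustrated with plain Python tuples standing in for SQLAlchemy result rows (a simplified sketch, not the actual query):

```python
# Each result row pairs a flow run with the id of the work queue it came from.
rows = [("flow-run-1", "queue-a"), ("flow-run-2", "queue-b")]

# Analogous to result.all(): keep the (FlowRun, WorkQueue.id) pairs.
both = list(rows)

# Analogous to result.scalars().unique().all(): only the first element of
# each row survives, so the work queue id is silently dropped.
runs_only = [row[0] for row in rows]

print(both)       # [('flow-run-1', 'queue-a'), ('flow-run-2', 'queue-b')]
print(runs_only)  # ['flow-run-1', 'flow-run-2']
```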
#### `insert` ```python theme={null} insert(self, obj: type[orm_models.Base]) -> Union[postgresql.Insert, sqlite.Insert] ``` dialect-specific insert statement #### `json_arr_agg` ```python theme={null} json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` aggregates a JSON array #### `make_timestamp_intervals` ```python theme={null} make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `read_configuration_value` ```python theme={null} read_configuration_value(self, db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[dict[str, Any]] ``` Read a configuration value by key. Configuration values should not be changed at run time, so retrieved values are cached in memory. The main use of configurations is encrypting blocks, this speeds up nested block document queries. #### `set_state_id_on_inserted_flow_runs_statement` ```python theme={null} set_state_id_on_inserted_flow_runs_statement(self, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. 
#### `uses_json_strings` ```python theme={null} uses_json_strings(self) -> bool ``` specifies whether the configured dialect returns JSON as strings ### `AsyncPostgresQueryComponents` **Methods:** #### `build_json_object` ```python theme={null} build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` #### `build_json_object` ```python theme={null} build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` builds a JSON object from sequential key-value pairs #### `cast_to_json` ```python theme={null} cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` #### `cast_to_json` ```python theme={null} cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` casts to JSON object if necessary #### `clear_configuration_value_cache_for_key` ```python theme={null} clear_configuration_value_cache_for_key(self, key: str) -> None ``` Removes a configuration key from the cache. #### `flow_run_graph_v2` ```python theme={null} flow_run_graph_v2(self, db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: DateTime, max_nodes: int, max_artifacts: int) -> Graph ``` Returns the query that selects all of the nodes and edges for a flow run graph (version 2). 
#### `get_scheduled_flow_runs_from_work_pool` ```python theme={null} get_scheduled_flow_runs_from_work_pool(self, db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, worker_limit: Optional[int] = None, queue_limit: Optional[int] = None, work_pool_ids: Optional[list[UUID]] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None, scheduled_after: Optional[DateTime] = None, respect_queue_priorities: bool = False) -> list[schemas.responses.WorkerFlowRunResponse] ``` #### `get_scheduled_flow_runs_from_work_queues` ```python theme={null} get_scheduled_flow_runs_from_work_queues(self, db: PrefectDBInterface, limit_per_queue: Optional[int] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None) -> sa.Select[tuple[orm_models.FlowRun, UUID]] ``` Returns all scheduled runs in work queues, subject to provided parameters. This query returns a `(orm_models.FlowRun, orm_models.WorkQueue.id)` pair; calling `result.all()` will return both; calling `result.scalars().unique().all()` will return only the flow run because it grabs the first result. 
#### `insert` ```python theme={null} insert(self, obj: type[orm_models.Base]) -> postgresql.Insert ``` #### `insert` ```python theme={null} insert(self, obj: type[orm_models.Base]) -> Union[postgresql.Insert, sqlite.Insert] ``` dialect-specific insert statement #### `json_arr_agg` ```python theme={null} json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` #### `json_arr_agg` ```python theme={null} json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` aggregates a JSON array #### `make_timestamp_intervals` ```python theme={null} make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `make_timestamp_intervals` ```python theme={null} make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `read_configuration_value` ```python theme={null} read_configuration_value(self, db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[dict[str, Any]] ``` Read a configuration value by key. Configuration values should not be changed at run time, so retrieved values are cached in memory. The main use of configurations is encrypting blocks, this speeds up nested block document queries. 
#### `set_state_id_on_inserted_flow_runs_statement` ```python theme={null} set_state_id_on_inserted_flow_runs_statement(self, db: PrefectDBInterface, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` Given a list of flow run ids and associated states, set the state\_id to the appropriate state for all flow runs #### `set_state_id_on_inserted_flow_runs_statement` ```python theme={null} set_state_id_on_inserted_flow_runs_statement(self, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `uses_json_strings` ```python theme={null} uses_json_strings(self) -> bool ``` #### `uses_json_strings` ```python theme={null} uses_json_strings(self) -> bool ``` specifies whether the configured dialect returns JSON as strings ### `UUIDList` Map a JSON list of strings back to a list of UUIDs at the result loading stage **Methods:** #### `process_result_value` ```python theme={null} process_result_value(self, value: Optional[list[Union[str, UUID]]], dialect: sa.Dialect) -> Optional[list[UUID]] ``` ### `AioSqliteQueryComponents` **Methods:** #### `build_json_object` ```python theme={null} build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` #### `build_json_object` ```python theme={null} build_json_object(self, *args: Union[str, sa.ColumnElement[Any]]) -> sa.ColumnElement[Any] ``` builds a JSON object from sequential key-value pairs #### `cast_to_json` ```python theme={null} cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` #### `cast_to_json` ```python theme={null} cast_to_json(self, json_obj: sa.ColumnElement[T]) -> sa.ColumnElement[T] ``` casts to JSON object if necessary #### `clear_configuration_value_cache_for_key` ```python theme={null} 
clear_configuration_value_cache_for_key(self, key: str) -> None ``` Removes a configuration key from the cache. #### `flow_run_graph_v2` ```python theme={null} flow_run_graph_v2(self, db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: DateTime, max_nodes: int, max_artifacts: int) -> Graph ``` Returns the query that selects all of the nodes and edges for a flow run graph (version 2). #### `get_scheduled_flow_runs_from_work_pool` ```python theme={null} get_scheduled_flow_runs_from_work_pool(self, db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, worker_limit: Optional[int] = None, queue_limit: Optional[int] = None, work_pool_ids: Optional[list[UUID]] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None, scheduled_after: Optional[DateTime] = None, respect_queue_priorities: bool = False) -> list[schemas.responses.WorkerFlowRunResponse] ``` #### `get_scheduled_flow_runs_from_work_queues` ```python theme={null} get_scheduled_flow_runs_from_work_queues(self, db: PrefectDBInterface, limit_per_queue: Optional[int] = None, work_queue_ids: Optional[list[UUID]] = None, scheduled_before: Optional[DateTime] = None) -> sa.Select[tuple[orm_models.FlowRun, UUID]] ``` Returns all scheduled runs in work queues, subject to provided parameters. This query returns a `(orm_models.FlowRun, orm_models.WorkQueue.id)` pair; calling `result.all()` will return both; calling `result.scalars().unique().all()` will return only the flow run because it grabs the first result. 
#### `insert` ```python theme={null} insert(self, obj: type[orm_models.Base]) -> sqlite.Insert ``` #### `insert` ```python theme={null} insert(self, obj: type[orm_models.Base]) -> Union[postgresql.Insert, sqlite.Insert] ``` dialect-specific insert statement #### `json_arr_agg` ```python theme={null} json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` #### `json_arr_agg` ```python theme={null} json_arr_agg(self, json_array: sa.ColumnElement[Any]) -> sa.ColumnElement[Any] ``` aggregates a JSON array #### `make_timestamp_intervals` ```python theme={null} make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `make_timestamp_intervals` ```python theme={null} make_timestamp_intervals(self, start_time: datetime.datetime, end_time: datetime.datetime, interval: datetime.timedelta) -> sa.Select[tuple[datetime.datetime, datetime.datetime]] ``` #### `read_configuration_value` ```python theme={null} read_configuration_value(self, db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[dict[str, Any]] ``` Read a configuration value by key. Configuration values should not be changed at run time, so retrieved values are cached in memory. The main use of configurations is encrypting blocks, this speeds up nested block document queries. 
#### `set_state_id_on_inserted_flow_runs_statement` ```python theme={null} set_state_id_on_inserted_flow_runs_statement(self, db: PrefectDBInterface, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` Given a list of flow run ids and associated states, set the state\_id to the appropriate state for all flow runs. #### `set_state_id_on_inserted_flow_runs_statement` ```python theme={null} set_state_id_on_inserted_flow_runs_statement(self, inserted_flow_run_ids: Sequence[UUID], insert_flow_run_states: Iterable[dict[str, Any]]) -> sa.Update ``` #### `unique_key` ```python theme={null} unique_key(self) -> tuple[Hashable, ...] ``` Returns a key used to determine whether to instantiate a new DB interface. #### `uses_json_strings` ```python theme={null} uses_json_strings(self) -> bool ``` #### `uses_json_strings` ```python theme={null} uses_json_strings(self) -> bool ``` Specifies whether the configured dialect returns JSON as strings. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-__init__ # `prefect.server.events` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-actions # `prefect.server.events.actions` The actions consumer watches for actions that have been triggered by Automations and carries them out. Also includes the various concrete subtypes of Actions ## Functions ### `record_action_happening` ```python theme={null} record_action_happening(id: UUID) -> None ``` Record that an action has happened, with an expiration of an hour.
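`record_action_happening` and its companion check `action_has_already_happened` (documented just below) form a simple deduplication pattern; a minimal in-memory sketch with an hour's expiry (the real backing store is not shown here):

```python theme={null}
import time
from uuid import UUID, uuid4

EXPIRATION_SECONDS = 3600  # one hour
_recorded: dict[UUID, float] = {}

def record_action_happening(id: UUID) -> None:
    """Record that an action has happened, expiring after an hour."""
    _recorded[id] = time.monotonic() + EXPIRATION_SECONDS

def action_has_already_happened(id: UUID) -> bool:
    """Check whether the action happened within the last hour."""
    expiry = _recorded.get(id)
    if expiry is None or expiry <= time.monotonic():
        _recorded.pop(id, None)  # drop expired entries lazily
        return False
    return True
```

Record once, and subsequent checks return `True` until the hour elapses.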
### `action_has_already_happened` ```python theme={null} action_has_already_happened(id: UUID) -> bool ``` Check if the action has already happened ### `consumer` ```python theme={null} consumer() -> AsyncGenerator[MessageHandler, None] ``` ## Classes ### `ActionFailed` ### `Action` An Action that may be performed when an Automation is triggered **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` Perform the requested Action #### `fail` ```python theme={null} fail(self, triggered_action: 'TriggeredAction', reason: str) -> None ``` #### `logging_context` ```python theme={null} logging_context(self, triggered_action: 'TriggeredAction') -> Dict[str, Any] ``` Common logging context for all actions #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
#### `succeed` ```python theme={null} succeed(self, triggered_action: 'TriggeredAction') -> None ``` ### `DoNothing` Do nothing when an Automation is triggered **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `EmitEventAction` **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `create_event` ```python theme={null} create_event(self, triggered_action: 'TriggeredAction') -> 'Event' ``` Create an event from the TriggeredAction #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action ### `ExternalDataAction` Base class for Actions that require data from an external source such as the Orchestration API **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python theme={null} orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` ### `JinjaTemplateAction` Base class for Actions that use Jinja templates supplied by the user, rendered with a context containing data from the triggered action and the orchestration API.
**Methods:** #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `instantiate_object` ```python theme={null} instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `orchestration_client` ```python theme={null} orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` #### `templates_in_dictionary` ```python theme={null} templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_template` ```python theme={null} validate_template(cls, template: str, field_name: str) -> str ``` ### `DeploymentAction` Base class for Actions that operate on Deployments and need to infer them from events **Methods:** #### `deployment_id_to_use` ```python theme={null} deployment_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_deployment_requires_id` ```python theme={null} selected_deployment_requires_id(self) -> Self ``` ### `DeploymentCommandAction` Executes a command against a matching deployment **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python theme={null} 
orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` #### `selected_deployment_requires_id` ```python theme={null} selected_deployment_requires_id(self) ``` ### `RunDeployment` Runs the given deployment with the given parameters **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command #### `instantiate_object` ```python theme={null} instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `render_parameters` ```python theme={null} render_parameters(self, triggered_action: 'TriggeredAction') -> Dict[str, Any] ``` #### `templates_in_dictionary` ```python theme={null} templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_parameters` ```python theme={null} validate_parameters(cls, value: dict[str, Any] | None) -> dict[str, Any] | None ``` #### `validate_template` ```python theme={null} validate_template(cls, template: str, field_name: str) -> str ``` ### `PauseDeployment` Pauses the given Deployment **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, 
orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command ### `ResumeDeployment` Resumes the given Deployment **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', deployment_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Execute the deployment command ### `FlowRunAction` An action that operates on a flow run **Methods:** #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `flow_run` ```python theme={null} flow_run(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `orchestration_client` ```python theme={null} orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` ### `FlowRunStateChangeAction` Changes the state of a flow run associated with the trigger **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `flow_run` ```python theme={null} flow_run(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `ChangeFlowRunState` Changes the state of a flow run associated with the trigger **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` #### 
`new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `CancelFlowRun` Cancels a flow run associated with the trigger **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` #### `new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `SuspendFlowRun` Suspends a flow run associated with the trigger **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` #### `new_state` ```python theme={null} new_state(self, triggered_action: 'TriggeredAction') -> StateCreate ``` Return the new state for the flow run ### `ResumeFlowRun` Resumes a paused or suspended flow run associated with the trigger **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `flow_run` ```python theme={null} flow_run(self, triggered_action: 'TriggeredAction') -> UUID ``` ### `CallWebhook` Call a webhook when an Automation is triggered. **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `ensure_payload_is_a_string` ```python theme={null} ensure_payload_is_a_string(cls, value: Union[str, Dict[str, Any], None]) -> Optional[str] ``` Temporary measure while we migrate payloads from being a dictionary to a string template. This covers both reading from the database where values may currently be a dictionary, as well as the API, where older versions of the frontend may be sending a JSON object with the single `"message"` key. 
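The dict-to-string payload migration described above can be sketched as a small normalizer (an illustration, not the actual validator): dictionaries, including legacy `{"message": ...}` objects from older frontends, are serialized to JSON strings, while strings and `None` pass through unchanged.

```python theme={null}
import json
from typing import Any, Dict, Optional, Union

def ensure_payload_is_a_string(value: Union[str, Dict[str, Any], None]) -> Optional[str]:
    """Normalize webhook payloads: legacy dicts become JSON string templates."""
    if isinstance(value, dict):
        return json.dumps(value)
    return value
```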
#### `instantiate_object` ```python theme={null} instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `templates_in_dictionary` ```python theme={null} templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_payload_templates` ```python theme={null} validate_payload_templates(cls, value: Optional[str]) -> Optional[str] ``` Validate user-provided payload template. #### `validate_template` ```python theme={null} validate_template(cls, template: str, field_name: str) -> str ``` ### `SendNotification` Send a notification when an Automation is triggered **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `instantiate_object` ```python theme={null} instantiate_object(self, model: Type[PrefectBaseModel], data: Dict[str, Any], triggered_action: 'TriggeredAction', resource: Optional['Resource'] = None) -> PrefectBaseModel ``` #### `is_valid_template` ```python theme={null} is_valid_template(cls, value: str, info: ValidationInfo) -> str ``` #### `render` ```python theme={null} render(self, triggered_action: 'TriggeredAction') -> List[str] ``` #### `templates_in_dictionary` ```python theme={null} templates_in_dictionary(cls, dict_: dict[Any, Any | dict[Any, Any]]) -> list[tuple[dict[Any, Any], dict[Any, str]]] ``` #### `validate_template` ```python theme={null} validate_template(cls, template: str, field_name: str) -> str ``` ### `WorkPoolAction` Base class for Actions that operate on Work Pools and need to infer them from events **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_work_pool_requires_id` ```python theme={null} selected_work_pool_requires_id(self) -> Self ``` #### `work_pool_id_to_use` ```python 
theme={null} work_pool_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` ### `WorkPoolCommandAction` **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Pool #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python theme={null} orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` #### `target_work_pool` ```python theme={null} target_work_pool(self, triggered_action: 'TriggeredAction') -> WorkPool ``` ### `PauseWorkPool` Pauses a Work Pool **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Pool #### `target_work_pool` ```python theme={null} target_work_pool(self, triggered_action: 'TriggeredAction') -> WorkPool ``` ### `ResumeWorkPool` Resumes a Work Pool **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_pool: WorkPool, 
triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Pool #### `target_work_pool` ```python theme={null} target_work_pool(self, triggered_action: 'TriggeredAction') -> WorkPool ``` ### `WorkQueueAction` Base class for Actions that operate on Work Queues and need to infer them from events **Methods:** #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_work_queue_requires_id` ```python theme={null} selected_work_queue_requires_id(self) -> Self ``` #### `work_queue_id_to_use` ```python theme={null} work_queue_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` ### `WorkQueueCommandAction` **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Queue #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python theme={null} orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` #### `selected_work_queue_requires_id` ```python theme={null} selected_work_queue_requires_id(self) -> Self ``` ### `PauseWorkQueue` Pauses a Work Queue **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') 
-> Response ``` Issue the command to the Work Queue ### `ResumeWorkQueue` Resumes a Work Queue **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, orchestration: 'OrchestrationClient', work_queue_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Work Queue ### `AutomationAction` Base class for Actions that operate on Automations and need to infer them from events **Methods:** #### `automation_id_to_use` ```python theme={null} automation_id_to_use(self, triggered_action: 'TriggeredAction') -> UUID ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self) -> str ``` A human-readable description of the action #### `selected_automation_requires_id` ```python theme={null} selected_automation_requires_id(self) -> Self ``` ### `AutomationCommandAction` **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Automation #### `events_api_client` ```python theme={null} events_api_client(self, triggered_action: 'TriggeredAction') -> PrefectServerEventsAPIClient ``` #### `orchestration_client` ```python theme={null} orchestration_client(self, triggered_action: 'TriggeredAction') -> 'OrchestrationClient' ``` #### `reason_from_response` ```python theme={null} reason_from_response(self, response: Response) -> str ``` #### `selected_automation_requires_id` ```python theme={null} selected_automation_requires_id(self) -> Self ``` ### `PauseAutomation` Pauses an Automation **Methods:** #### `act` ```python theme={null} act(self,
triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Automation ### `ResumeAutomation` Resumes an Automation **Methods:** #### `act` ```python theme={null} act(self, triggered_action: 'TriggeredAction') -> None ``` #### `command` ```python theme={null} command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` #### `command` ```python theme={null} command(self, events: PrefectServerEventsAPIClient, automation_id: UUID, triggered_action: 'TriggeredAction') -> Response ``` Issue the command to the Automation # clients Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-clients # `prefect.server.events.clients` ## Classes ### `EventsClient` The abstract interface for a Prefect Events client **Methods:** #### `emit` ```python theme={null} emit(self, event: Event) -> Optional[Event] ``` ### `NullEventsClient` A no-op implementation of the Prefect Events client for testing **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event ### `AssertingEventsClient` An implementation of the Prefect Events client that records all events sent to it for inspection during tests. **Methods:** #### `assert_emitted_event_count` ```python theme={null} assert_emitted_event_count(cls, count: int) -> None ``` Assert that the given number of events were emitted.
#### `assert_emitted_event_with` ```python theme={null} assert_emitted_event_with(cls, event: Optional[str] = None, resource: Optional[Dict[str, LabelValue]] = None, related: Optional[List[Dict[str, LabelValue]]] = None, payload: Optional[Dict[str, Any]] = None) -> None ``` Assert that an event was emitted containing the given properties. #### `assert_no_emitted_event_with` ```python theme={null} assert_no_emitted_event_with(cls, event: Optional[str] = None, resource: Optional[Dict[str, LabelValue]] = None, related: Optional[List[Dict[str, LabelValue]]] = None, payload: Optional[Dict[str, Any]] = None) -> None ``` #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> Event ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event #### `emitted_events_count` ```python theme={null} emitted_events_count(cls) -> int ``` #### `reset` ```python theme={null} reset(cls) -> None ``` Reset all captured instances and their events. Use this between tests. ### `PrefectServerEventsClient` **Methods:** #### `client_name` ```python theme={null} client_name(self) -> str ``` #### `emit` ```python theme={null} emit(self, event: Event) -> ReceivedEvent ``` #### `emit` ```python theme={null} emit(self, event: Event) -> None ``` Emit a single event ### `PrefectServerEventsAPIClient` **Methods:** #### `pause_automation` ```python theme={null} pause_automation(self, automation_id: UUID) -> httpx.Response ``` #### `resume_automation` ```python theme={null} resume_automation(self, automation_id: UUID) -> httpx.Response ``` # counting Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-counting # `prefect.server.events.counting` ## Classes ### `InvalidEventCountParameters` Raised when the given parameters are invalid for counting events.
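An events client that records everything it emits, like the `AssertingEventsClient` documented above, can be sketched in a few lines (a hypothetical stand-in, not the real class):

```python theme={null}
from typing import Any, Dict, List

class RecordingEventsClient:
    """Records every emitted event so tests can assert on them."""

    events: List[Dict[str, Any]] = []  # shared across instances, like a test fixture

    def emit(self, event: Dict[str, Any]) -> Dict[str, Any]:
        self.events.append(event)
        return event

    @classmethod
    def assert_emitted_event_count(cls, count: int) -> None:
        assert len(cls.events) == count, f"expected {count} events, saw {len(cls.events)}"

    @classmethod
    def reset(cls) -> None:
        """Clear captured events; use this between tests."""
        cls.events.clear()
```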
### `TimeUnit` **Methods:** #### `as_timedelta` ```python theme={null} as_timedelta(self, interval: float) -> Duration ``` #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `database_label_expression` ```python theme={null} database_label_expression(self, db: PrefectDBInterface, time_interval: float) -> sa.Function[str] ``` Returns the SQL expression to label a time bucket #### `database_value_expression` ```python theme={null} database_value_expression(self, time_interval: float) -> sa.Cast[str] ``` Returns the SQL expression to place an event in a time bucket #### `get_interval_spans` ```python theme={null} get_interval_spans(self, start_datetime: datetime.datetime, end_datetime: datetime.datetime, interval: float) -> Generator[int | tuple[datetime.datetime, datetime.datetime], None, None] ``` Divide the given range of dates into evenly-sized spans of interval units #### `validate_buckets` ```python theme={null} validate_buckets(self, start_datetime: datetime.datetime, end_datetime: datetime.datetime, interval: float) -> None ``` ### `Countable` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `get_database_query` ```python theme={null} get_database_query(self, filter: 'EventFilter', time_unit: TimeUnit, time_interval: float) -> Select[tuple[str, str, DateTime, DateTime, int]] ``` # filters Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-filters # `prefect.server.events.filters` ## Classes ### `AutomationFilterCreated` Filter by `Automation.created`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `AutomationFilterName` Filter by `Automation.name`.
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `AutomationFilterTags` Filter by `Automation.tags`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `AutomationFilter` **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `EventDataFilter` A base class for filtering event data. **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self) -> Sequence['ColumnExpressionArgument[bool]'] ``` Convert the criteria to a WHERE clause. #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `EventOccurredFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `clamp` ```python theme={null} clamp(self, max_duration: timedelta) -> None ``` Limit how far the query can look back based on the given duration #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event?
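The `EventDataFilter` contract documented here — `excludes` as the negation of `includes`, and a composite matching only when every sub-filter matches — can be sketched with hypothetical stand-ins mirroring the documented names:

```python theme={null}
from typing import List

class EventDataFilter:
    """Base filter: a composite that matches when every sub-filter matches."""

    def get_filters(self) -> List["EventDataFilter"]:
        # Sub-filters; an empty composite matches every event.
        return []

    def includes(self, event: dict) -> bool:
        """Does the given event match the criteria of this filter?"""
        return all(f.includes(event) for f in self.get_filters())

    def excludes(self, event: dict) -> bool:
        """Would the given filter exclude this event?"""
        return not self.includes(event)

class EventNameFilter(EventDataFilter):
    """Leaf filter: match events whose name starts with a prefix."""

    def __init__(self, prefix: str) -> None:
        self.prefix = prefix

    def includes(self, event: dict) -> bool:
        return event.get("event", "").startswith(self.prefix)

class EventFilter(EventDataFilter):
    """Composite over several leaf filters."""

    def __init__(self, *filters: EventDataFilter) -> None:
        self.filters = list(filters)

    def get_filters(self) -> List[EventDataFilter]:
        return self.filters
```

For example, `EventFilter(EventNameFilter("prefect.flow-run."))` includes a `prefect.flow-run.Completed` event and excludes a `prefect.task-run.Failed` one.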
#### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventNameFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventResourceFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventRelatedFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? 
#### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventAnyResourceFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventIDFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventTextFilter` Filter by text search across event content. **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` Build SQLAlchemy WHERE clauses for text search #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? 
#### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Check if this text filter includes the given event. #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? ### `EventOrder` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `EventFilter` **Methods:** #### `build_where_clauses` ```python theme={null} build_where_clauses(self, db: PrefectDBInterface) -> Sequence['ColumnExpressionArgument[bool]'] ``` #### `excludes` ```python theme={null} excludes(self, event: Event) -> bool ``` Would the given filter exclude this event? #### `get_filters` ```python theme={null} get_filters(self) -> list['EventDataFilter'] ``` #### `includes` ```python theme={null} includes(self, event: Event) -> bool ``` Does the given event match the criteria of this filter? #### `logical_limit` ```python theme={null} logical_limit(self) -> int ``` The logical limit for this query, which is a maximum number of rows that it *could* return (regardless of what the caller has requested). May be used as an optimization for DB queries # jinja_filters Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-jinja_filters # `prefect.server.events.jinja_filters` ## Functions ### `ui_url` ```python theme={null} ui_url(ctx: Mapping[str, Any], obj: Any) -> str | None ``` Return the UI URL for the given object. ### `ui_resource_events_url` ```python theme={null} ui_resource_events_url(ctx: Mapping[str, Any], obj: Any) -> str | None ``` Given a Resource or Model, return a UI link to the events page filtered for that resource. If an unsupported object is provided, return `None`. Currently supports Automation, Resource, Deployment, Flow, FlowRun, TaskRun, and WorkQueue objects. 
Within a Resource, deployment, flow, flow-run, task-run, and work-queue are supported. ### `flow_run_id` ```python theme={null} flow_run_id(text: str | None) -> str | None ``` Extract a flow run ID from a string, such as a PR body. # messaging Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-messaging # `prefect.server.events.messaging` ## Functions ### `publish` ```python theme={null} publish(events: Iterable[ReceivedEvent]) -> None ``` Send the given events as a batch via the default publisher ### `create_event_publisher` ```python theme={null} create_event_publisher() -> EventPublisher ``` ### `create_actions_publisher` ```python theme={null} create_actions_publisher() -> Publisher ``` ## Classes ### `EventPublisher` **Methods:** #### `publish_data` ```python theme={null} publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` #### `publish_event` ```python theme={null} publish_event(self, event: ReceivedEvent) -> None ``` Publishes the given event **Args:** * `event`: the event to publish # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-models-__init__ # `prefect.server.events.models` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-models-automations # `prefect.server.events.models.automations` ## Functions ### `automations_session` ```python theme={null} automations_session(db: PrefectDBInterface, begin_transaction: bool = False) -> AsyncGenerator[AsyncSession, None] ``` ### `read_automations_for_workspace` ```python theme={null} read_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession, sort: AutomationSort = AutomationSort.NAME_ASC, limit: Optional[int] = None, offset: Optional[int] = None, automation_filter: 
Optional[filters.AutomationFilter] = None) -> Sequence[Automation] ``` ### `count_automations_for_workspace` ```python theme={null} count_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession) -> int ``` ### `read_automation` ```python theme={null} read_automation(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> Optional[Automation] ``` ### `read_automation_by_id` ```python theme={null} read_automation_by_id(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> Optional[Automation] ``` ### `create_automation` ```python theme={null} create_automation(db: PrefectDBInterface, session: AsyncSession, automation: Automation) -> Automation ``` ### `update_automation` ```python theme={null} update_automation(db: PrefectDBInterface, session: AsyncSession, automation_update: Union[AutomationUpdate, AutomationPartialUpdate], automation_id: UUID) -> bool ``` ### `delete_automation` ```python theme={null} delete_automation(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> bool ``` ### `delete_automations_for_workspace` ```python theme={null} delete_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession) -> bool ``` ### `disable_automations_for_workspace` ```python theme={null} disable_automations_for_workspace(db: PrefectDBInterface, session: AsyncSession) -> bool ``` ### `disable_automation` ```python theme={null} disable_automation(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID) -> bool ``` ### `relate_automation_to_resource` ```python theme={null} relate_automation_to_resource(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID, resource_id: str, owned_by_resource: bool) -> None ``` ### `read_automations_related_to_resource` ```python theme={null} read_automations_related_to_resource(db: PrefectDBInterface, session: AsyncSession, resource_id: str, owned_by_resource: Optional[bool] = None, automation_filter: Optional[filters.AutomationFilter] = 
None) -> Sequence[Automation] ``` ### `delete_automations_owned_by_resource` ```python theme={null} delete_automations_owned_by_resource(db: PrefectDBInterface, session: AsyncSession, resource_id: str, automation_filter: Optional[filters.AutomationFilter] = None) -> Sequence[UUID] ``` # composite_trigger_child_firing Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-models-composite_trigger_child_firing # `prefect.server.events.models.composite_trigger_child_firing` ## Functions ### `acquire_composite_trigger_lock` ```python theme={null} acquire_composite_trigger_lock(session: AsyncSession, trigger: CompositeTrigger) -> None ``` Acquire a transaction-scoped advisory lock for the given composite trigger. This serializes concurrent child trigger evaluations for the same compound trigger, preventing a race condition where multiple transactions each see only their own child firing and neither fires the parent. The lock is automatically released when the transaction commits or rolls back. ### `upsert_child_firing` ```python theme={null} upsert_child_firing(db: PrefectDBInterface, session: AsyncSession, firing: Firing) ``` ### `get_child_firings` ```python theme={null} get_child_firings(db: PrefectDBInterface, session: AsyncSession, trigger: CompositeTrigger) -> Sequence['ORMCompositeTriggerChildFiring'] ``` ### `clear_old_child_firings` ```python theme={null} clear_old_child_firings(db: PrefectDBInterface, session: AsyncSession, trigger: CompositeTrigger, fired_before: DateTime) -> None ``` ### `clear_child_firings` ```python theme={null} clear_child_firings(db: PrefectDBInterface, session: AsyncSession, trigger: CompositeTrigger, firing_ids: Sequence[UUID]) -> set[UUID] ``` Delete the specified child firings and return the IDs that were actually deleted. Returns the set of child\_firing\_ids that were successfully deleted. Callers can compare this to the expected firing\_ids to detect races and avoid double-firing composite triggers. 
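The caller-side contract described for `clear_child_firings` can be sketched in plain Python. This is an illustrative pattern only, not part of the Prefect API; `should_fire_parent` is a hypothetical helper:

```python
# Hypothetical sketch (not the Prefect implementation) of the race-detection
# contract: compare the child firing IDs we expected to delete against the
# set that `clear_child_firings` reports as actually deleted.
from uuid import UUID, uuid4

def should_fire_parent(expected_ids: set[UUID], actually_deleted: set[UUID]) -> bool:
    # Fire the composite trigger only if this transaction deleted every child
    # firing it observed; otherwise a concurrent transaction consumed some of
    # them first, and firing again would double-fire the trigger.
    return actually_deleted == expected_ids

a, b = uuid4(), uuid4()
assert should_fire_parent({a, b}, {a, b})    # clean win: safe to fire
assert not should_fire_parent({a, b}, {a})   # lost the race: do not fire
```

Combined with the advisory lock from `acquire_composite_trigger_lock`, this comparison is what prevents two transactions from each firing the same compound trigger.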
# __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-ordering-__init__ # `prefect.server.events.ordering` Manages the partial causal ordering of events for a particular consumer. This module maintains a buffer of events to be processed, aiming to process them in the order they occurred causally. ## Functions ### `get_triggers_causal_ordering` ```python theme={null} get_triggers_causal_ordering() -> CausalOrdering ``` ### `get_task_run_recorder_causal_ordering` ```python theme={null} get_task_run_recorder_causal_ordering() -> CausalOrdering ``` ## Classes ### `CausalOrderingModule` ### `EventArrivedEarly` ### `MaxDepthExceeded` ### `event_handler` ### `CausalOrdering` **Methods:** #### `event_has_been_seen` ```python theme={null} event_has_been_seen(self, event: Union[UUID, Event]) -> bool ``` #### `forget_follower` ```python theme={null} forget_follower(self, follower: ReceivedEvent) -> None ``` #### `get_followers` ```python theme={null} get_followers(self, leader: ReceivedEvent) -> List[ReceivedEvent] ``` #### `get_lost_followers` ```python theme={null} get_lost_followers(self) -> List[ReceivedEvent] ``` #### `preceding_event_confirmed` ```python theme={null} preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) -> AsyncContextManager[None] ``` #### `record_event_as_seen` ```python theme={null} record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python theme={null} record_follower(self, event: ReceivedEvent) -> None ``` # db Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-ordering-db # `prefect.server.events.ordering.db` ## Classes ### `CausalOrdering` **Methods:** #### `event_has_been_seen` ```python theme={null} event_has_been_seen(self, event: Union[UUID, Event]) -> bool ``` #### `forget_follower` ```python theme={null} forget_follower(self, db: PrefectDBInterface, follower: ReceivedEvent) -> None ``` Forget that this event is 
waiting on another event to arrive #### `get_followers` ```python theme={null} get_followers(self, db: PrefectDBInterface, leader: ReceivedEvent) -> List[ReceivedEvent] ``` Returns events that were waiting on this leader event to arrive #### `get_lost_followers` ```python theme={null} get_lost_followers(self, db: PrefectDBInterface) -> List[ReceivedEvent] ``` Returns events that were waiting on a leader event that never arrived #### `preceding_event_confirmed` ```python theme={null} preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) ``` Events may optionally declare that they logically follow another event, so that we can preserve important event orderings in the face of unreliable delivery and ordering of messages from the queues. This function keeps track of the ID of each event that this shard has successfully processed going back to the PRECEDING\_EVENT\_LOOKBACK period. If an event arrives that must follow another one, confirm that we have recently seen and processed that event before proceeding. **Args:** * `event`: The event to be processed. This object should include metadata indicating if and what event it follows. * `depth`: The current recursion depth, used to prevent infinite recursion due to cyclic dependencies between events. Defaults to 0. Raises EventArrivedEarly if the current event shouldn't be processed yet. #### `record_event_as_seen` ```python theme={null} record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python theme={null} record_follower(self, db: PrefectDBInterface, event: ReceivedEvent) -> None ``` Remember that this event is waiting on another event to arrive # memory Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-ordering-memory # `prefect.server.events.ordering.memory` ## Classes ### `EventBeingProcessed` Indicates that an event is currently being processed and should not be processed until it is finished. 
This may happen due to concurrent processing. ### `CausalOrdering` **Methods:** #### `clear` ```python theme={null} clear(self) -> None ``` Clear all data for this scope. #### `clear_all_scopes` ```python theme={null} clear_all_scopes(cls) -> None ``` Clear all data for all scopes - useful for testing. #### `event_has_been_seen` ```python theme={null} event_has_been_seen(self, event: UUID | Event) -> bool ``` #### `event_has_started_processing` ```python theme={null} event_has_started_processing(self, event: UUID | Event) -> bool ``` #### `event_is_processing` ```python theme={null} event_is_processing(self, event: ReceivedEvent) -> AsyncGenerator[None, None] ``` Mark an event as being processed for the duration of its lifespan through the ordering system. #### `followers_by_id` ```python theme={null} followers_by_id(self, follower_ids: list[UUID]) -> list[ReceivedEvent] ``` Returns the events with the given IDs, in the order they occurred. #### `forget_event_is_processing` ```python theme={null} forget_event_is_processing(self, event: ReceivedEvent) -> None ``` #### `forget_follower` ```python theme={null} forget_follower(self, follower: ReceivedEvent) -> None ``` Forget that this event is waiting on another event to arrive. #### `get_followers` ```python theme={null} get_followers(self, leader: ReceivedEvent) -> list[ReceivedEvent] ``` Returns events that were waiting on this leader event to arrive. #### `get_lost_followers` ```python theme={null} get_lost_followers(self) -> list[ReceivedEvent] ``` Returns events that were waiting on a leader event that never arrived. #### `preceding_event_confirmed` ```python theme={null} preceding_event_confirmed(self, handler: event_handler, event: ReceivedEvent, depth: int = 0) -> AsyncGenerator[None, None] ``` Events may optionally declare that they logically follow another event, so that we can preserve important event orderings in the face of unreliable delivery and ordering of messages from the queues. 
This function keeps track of the ID of each event that this shard has successfully processed going back to the PRECEDING\_EVENT\_LOOKBACK period. If an event arrives that must follow another one, confirm that we have recently seen and processed that event before proceeding. **Args:** * `handler`: The function to call when an out-of-order event is ready to be processed * `event`: The event to be processed. This object should include metadata indicating if and what event it follows. * `depth`: The current recursion depth, used to prevent infinite recursion due to cyclic dependencies between events. Defaults to 0. Raises EventArrivedEarly if the current event shouldn't be processed yet. #### `record_event_as_processing` ```python theme={null} record_event_as_processing(self, event: ReceivedEvent) -> bool ``` Record that an event is being processed, returning False if already processing. #### `record_event_as_seen` ```python theme={null} record_event_as_seen(self, event: ReceivedEvent) -> None ``` #### `record_follower` ```python theme={null} record_follower(self, event: ReceivedEvent) -> None ``` Remember that this event is waiting on another event to arrive. #### `wait_for_leader` ```python theme={null} wait_for_leader(self, event: ReceivedEvent) -> None ``` Given an event, wait for its leader to be processed before proceeding, or raise EventArrivedEarly if we would wait too long in this attempt. 
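The leader/follower buffering described above can be sketched with a toy synchronous re-implementation. All names here are hypothetical; the real `CausalOrdering` is asynchronous, persistent, and scope-aware, and signals an early arrival by raising `EventArrivedEarly` rather than buffering in a plain dict:

```python
from typing import Optional

# Toy sketch of causal ordering (not Prefect's implementation): an event that
# declares it "follows" another is buffered until its leader has been seen.
seen: set[str] = set()
waiting: dict[str, list[str]] = {}  # leader id -> buffered follower ids
processed: list[str] = []

def process(event_id: str, follows: Optional[str] = None) -> None:
    if follows is not None and follows not in seen:
        # The leader hasn't arrived yet: buffer this follower for later
        # (the real implementation raises EventArrivedEarly here).
        waiting.setdefault(follows, []).append(event_id)
        return
    processed.append(event_id)
    seen.add(event_id)
    # The leader is now confirmed; release any followers waiting on it.
    for follower in waiting.pop(event_id, []):
        process(follower)

process("flow-run.Pending", follows="flow-run.Scheduled")  # arrives early
process("flow-run.Scheduled")                              # releases follower
assert processed == ["flow-run.Scheduled", "flow-run.Pending"]
```

`get_lost_followers` corresponds to draining `waiting` entries whose leader never arrived within the lookback period.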
# pipeline Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-pipeline # `prefect.server.events.pipeline` ## Classes ### `EventsPipeline` **Methods:** #### `events_to_messages` ```python theme={null} events_to_messages(events: list[Event]) -> list[MemoryMessage] ``` #### `process_events` ```python theme={null} process_events(self, events: list[Event]) -> None ``` #### `process_message` ```python theme={null} process_message(self, message: MemoryMessage) -> None ``` Process a single event message #### `process_messages` ```python theme={null} process_messages(self, messages: list[MemoryMessage]) -> None ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-schemas-__init__ # `prefect.server.events.schemas` *This module is empty or contains only private/internal implementations.* # automations Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-schemas-automations # `prefect.server.events.schemas.automations` ## Classes ### `Posture` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TriggerState` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `Trigger` Base class describing a set of criteria that must be satisfied in order to trigger an automation. 
**Methods:** #### `all_triggers` ```python theme={null} all_triggers(self) -> Sequence[Trigger] ``` Returns all triggers within this trigger #### `automation` ```python theme={null} automation(self) -> 'Automation' ``` #### `create_automation_state_change_event` ```python theme={null} create_automation_state_change_event(self, firing: 'Firing', trigger_state: TriggerState) -> ReceivedEvent ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `parent` ```python theme={null} parent(self) -> 'Union[Trigger, Automation]' ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `reset_ids` ```python theme={null} reset_ids(self) -> None ``` Resets the ID of this trigger and all of its children ### `CompositeTrigger` Requires some number of triggers to have fired within the given time period. **Methods:** #### `actions` ```python theme={null} actions(self) -> List[ActionTypes] ``` #### `all_triggers` ```python theme={null} all_triggers(self) -> Sequence[Trigger] ``` #### `as_automation` ```python theme={null} as_automation(self) -> 'AutomationCore' ``` #### `child_trigger_ids` ```python theme={null} child_trigger_ids(self) -> List[UUID] ``` #### `create_automation_state_change_event` ```python theme={null} create_automation_state_change_event(self, firing: Firing, trigger_state: TriggerState) -> ReceivedEvent ``` Returns a ReceivedEvent for an automation state change into a triggered or resolved state. 
#### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `num_expected_firings` ```python theme={null} num_expected_firings(self) -> int ``` #### `owner_resource` ```python theme={null} owner_resource(self) -> Optional[str] ``` #### `ready_to_fire` ```python theme={null} ready_to_fire(self, firings: Sequence['Firing']) -> bool ``` #### `set_deployment_id` ```python theme={null} set_deployment_id(self, deployment_id: UUID) -> None ``` ### `CompoundTrigger` A composite trigger that requires some number of triggers to have fired within the given time period **Methods:** #### `num_expected_firings` ```python theme={null} num_expected_firings(self) -> int ``` #### `ready_to_fire` ```python theme={null} ready_to_fire(self, firings: Sequence['Firing']) -> bool ``` #### `validate_require` ```python theme={null} validate_require(self) -> Self ``` ### `SequenceTrigger` A composite trigger that requires some number of triggers to have fired within the given time period in a specific order **Methods:** #### `expected_firing_order` ```python theme={null} expected_firing_order(self) -> List[UUID] ``` #### `ready_to_fire` ```python theme={null} ready_to_fire(self, firings: Sequence['Firing']) -> bool ``` ### `ResourceTrigger` Base class for triggers that may filter by the labels of resources. 
**Methods:** #### `actions` ```python theme={null} actions(self) -> List[ActionTypes] ``` #### `as_automation` ```python theme={null} as_automation(self) -> 'AutomationCore' ``` #### `coerce_match` ```python theme={null} coerce_match(cls, v: Any) -> Any ``` #### `coerce_match_related` ```python theme={null} coerce_match_related(cls, v: Any) -> Any ``` #### `covers_resources` ```python theme={null} covers_resources(self, resource: Resource, related: Sequence[RelatedResource]) -> bool ``` #### `describe_for_cli` ```python theme={null} describe_for_cli(self, indent: int = 0) -> str ``` Return a human-readable description of this trigger for the CLI #### `owner_resource` ```python theme={null} owner_resource(self) -> Optional[str] ``` #### `set_deployment_id` ```python theme={null} set_deployment_id(self, deployment_id: UUID) -> None ``` ### `EventTrigger` A trigger that fires based on the presence or absence of events within a given period of time. **Methods:** #### `bucketing_key` ```python theme={null} bucketing_key(self, event: ReceivedEvent) -> Tuple[str, ...] ``` #### `coerce_match` ```python theme={null} coerce_match(cls, v: Any) -> Any ``` #### `coerce_match_related` ```python theme={null} coerce_match_related(cls, v: Any) -> Any ``` #### `covers` ```python theme={null} covers(self, event: ReceivedEvent) -> bool ``` #### `create_automation_state_change_event` ```python theme={null} create_automation_state_change_event(self, firing: Firing, trigger_state: TriggerState) -> ReceivedEvent ``` Returns a ReceivedEvent for an automation state change into a triggered or resolved state. 
#### `enforce_minimum_within_for_proactive_triggers` ```python theme={null} enforce_minimum_within_for_proactive_triggers(cls, data: Dict[str, Any] | Any) -> Dict[str, Any] ``` #### `event_pattern` ```python theme={null} event_pattern(self) -> re.Pattern[str] ``` A regular expression which may be evaluated against any event string to determine if this trigger would be interested in the event #### `expects` ```python theme={null} expects(self, event: str) -> bool ``` #### `immediate` ```python theme={null} immediate(self) -> bool ``` Does this reactive trigger fire immediately for all events? #### `meets_threshold` ```python theme={null} meets_threshold(self, event_count: int) -> bool ``` #### `starts_after` ```python theme={null} starts_after(self, event: str) -> bool ``` ### `AutomationCore` Defines an action a user wants to take when a certain number of events do or don't happen to the matching resources **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `prevent_run_deployment_loops` ```python theme={null} prevent_run_deployment_loops(self) -> Self ``` Detects potential infinite loops in automations with RunDeployment actions #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
#### `trigger_by_id` ```python theme={null} trigger_by_id(self, trigger_id: UUID) -> Optional[Trigger] ``` Returns the trigger with the given ID, or None if no such trigger exists #### `triggers` ```python theme={null} triggers(self) -> Sequence[Trigger] ``` Returns all triggers within this automation #### `triggers_of_type` ```python theme={null} triggers_of_type(self, trigger_type: Type[T]) -> Sequence[T] ``` Returns all triggers of the specified type within this automation ### `Automation` **Methods:** #### `model_validate` ```python theme={null} model_validate(cls: type[Self], obj: Any) -> Self ``` ### `AutomationCreate` ### `AutomationUpdate` ### `AutomationPartialUpdate` ### `AutomationSort` Defines automations sorting options. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `Firing` Represents one instance of a trigger firing **Methods:** #### `all_events` ```python theme={null} all_events(self) -> Sequence[ReceivedEvent] ``` #### `all_firings` ```python theme={null} all_firings(self) -> Sequence[Firing] ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
#### `validate_trigger_states` ```python theme={null} validate_trigger_states(cls, value: set[TriggerState]) -> set[TriggerState] ``` ### `TriggeredAction` An action caused as the result of an automation **Methods:** #### `all_events` ```python theme={null} all_events(self) -> Sequence[ReceivedEvent] ``` #### `all_firings` ```python theme={null} all_firings(self) -> Sequence[Firing] ``` #### `idempotency_key` ```python theme={null} idempotency_key(self) -> str ``` Produce a human-friendly idempotency key for this action #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-schemas-events # `prefect.server.events.schemas.events` ## Functions ### `matches` ```python theme={null} matches(expected: str, value: Optional[str]) -> bool ``` Returns true if the given value matches the expected string. **Args:** * `expected`: A glob pattern to match against; if it starts with an `!`, the pattern is negated. * `value`: The value of the label. 
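As a minimal self-contained sketch of the documented semantics (not the actual Prefect implementation), assuming standard `fnmatch`-style globbing and treating a `None` value as a non-match:

```python
# Sketch of glob matching with "!" negation, per the documented behavior.
from fnmatch import fnmatch
from typing import Optional

def matches(expected: str, value: Optional[str]) -> bool:
    if value is None:
        return False  # assumption: a missing label never matches
    if expected.startswith("!"):
        return not fnmatch(value, expected[1:])  # negated pattern
    return fnmatch(value, expected)

assert matches("prefect.flow-run.*", "prefect.flow-run.Completed")
assert not matches("!prefect.flow-run.*", "prefect.flow-run.Completed")
assert matches("!prefect.task-run.*", "prefect.flow-run.Completed")
```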
## Classes ### `Resource` An observable business object of interest to the user **Methods:** #### `as_label_value_array` ```python theme={null} as_label_value_array(self) -> List[Dict[str, str]] ``` #### `enforce_maximum_labels` ```python theme={null} enforce_maximum_labels(self) -> Self ``` #### `get` ```python theme={null} get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python theme={null} has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `id` ```python theme={null} id(self) -> str ``` #### `items` ```python theme={null} items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python theme={null} keys(self) -> Iterable[str] ``` #### `labels` ```python theme={null} labels(self) -> LabelDiver ``` #### `name` ```python theme={null} name(self) -> Optional[str] ``` #### `prefect_object_id` ```python theme={null} prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python theme={null} requires_resource_id(self) -> Self ``` ### `RelatedResource` A Resource with a specific role in an Event **Methods:** #### `enforce_maximum_labels` ```python theme={null} enforce_maximum_labels(self) -> Self ``` #### `id` ```python theme={null} id(self) -> str ``` #### `name` ```python theme={null} name(self) -> Optional[str] ``` #### `prefect_object_id` ```python theme={null} prefect_object_id(self, kind: str) -> UUID ``` Extracts the UUID from an event's resource ID if it's the expected kind of prefect resource #### `requires_resource_id` ```python theme={null} requires_resource_id(self) -> Self ``` #### `requires_resource_role` ```python theme={null} requires_resource_role(self) -> Self ``` #### `role` ```python theme={null} role(self) -> str ``` ### `Event` The client-side view of an event that has happened to a Resource **Methods:** #### `enforce_maximum_related_resources` ```python theme={null} 
enforce_maximum_related_resources(cls, value: List[RelatedResource]) -> List[RelatedResource] ``` #### `find_resource_label` ```python theme={null} find_resource_label(self, label: str) -> Optional[str] ``` Finds the value of the given label in this event's resource or one of its related resources. If the label starts with `related::`, search for the first matching label in a related resource with that role. #### `involved_resources` ```python theme={null} involved_resources(self) -> Sequence[Resource] ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `receive` ```python theme={null} receive(self, received: Optional[prefect.types._datetime.DateTime] = None) -> 'ReceivedEvent' ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `resource_in_role` ```python theme={null} resource_in_role(self) -> Mapping[str, RelatedResource] ``` Returns a mapping of roles to the first related resource in that role #### `resources_in_role` ```python theme={null} resources_in_role(self) -> Mapping[str, Sequence[RelatedResource]] ``` Returns a mapping of roles to related resources in that role #### `size_bytes` ```python theme={null} size_bytes(self) -> int ``` ### `ReceivedEvent` The server-side view of an event that has happened to a Resource after it has been received by the server **Methods:** #### `as_database_resource_rows` ```python theme={null} as_database_resource_rows(self) -> List[Dict[str, Any]] ``` #### `as_database_row` ```python theme={null} as_database_row(self) -> dict[str, Any] ``` #### `is_set` ```python theme={null} is_set(self) ``` #### `set` ```python theme={null} set(self) -> None ``` Set the flag, notifying all waiters. 
Unlike `asyncio.Event`, waiters may not be notified immediately when this is called; instead, notification will be placed on the owning loop of each waiter for thread safety. #### `url` ```python theme={null} url(self) -> Optional[str] ``` Returns the UI URL for this event, allowing users to link to events in automation templates without parsing date strings. #### `wait` ```python theme={null} wait(self) -> Literal[True] ``` Block until the internal flag is true. If the internal flag is true on entry, return True immediately. Otherwise, block until another `set()` is called, then return True. ### `ResourceSpecification` **Methods:** #### `deepcopy` ```python theme={null} deepcopy(self) -> 'ResourceSpecification' ``` #### `get` ```python theme={null} get(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` #### `includes` ```python theme={null} includes(self, candidates: Iterable[Resource]) -> bool ``` #### `items` ```python theme={null} items(self) -> Iterable[Tuple[str, List[str]]] ``` #### `matches` ```python theme={null} matches(self, resource: Resource) -> bool ``` #### `matches_every_resource` ```python theme={null} matches_every_resource(self) -> bool ``` #### `matches_every_resource_of_kind` ```python theme={null} matches_every_resource_of_kind(self, prefix: str) -> bool ``` #### `pop` ```python theme={null} pop(self, key: str, default: Optional[Union[str, List[str]]] = None) -> Optional[List[str]] ``` ### `EventPage` A single page of events returned from the API, with an optional link to the next page of results **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `EventCount` The count of events with the given filter value **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # labelling Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-schemas-labelling # `prefect.server.events.schemas.labelling` ## Classes ### `LabelDiver` The LabelDiver supports templating use cases for any Labelled object, by presenting the labels as a graph of objects that may be accessed by attribute. For example: ```python theme={null} diver = LabelDiver({ 'hello.world': 'foo', 'hello.world.again': 'bar' }) assert str(diver.hello.world) == 'foo' assert str(diver.hello.world.again) == 'bar' ``` ### `Labelled` **Methods:** #### `as_label_value_array` ```python theme={null} as_label_value_array(self) -> List[Dict[str, str]] ``` #### `get` ```python theme={null} get(self, label: str, default: Optional[str] = None) -> Optional[str] ``` #### `has_all_labels` ```python theme={null} has_all_labels(self, labels: Dict[str, str]) -> bool ``` #### `items` ```python theme={null} items(self) -> Iterable[Tuple[str, str]] ``` #### `keys` ```python theme={null} keys(self) -> Iterable[str] ``` #### `labels` ```python theme={null} labels(self) -> LabelDiver ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-services-__init__ # `prefect.server.events.services` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-services-actions # `prefect.server.events.services.actions` ## Classes ### `Actions` Runs the actions triggered by automations **Methods:** #### `all_services` ```python theme={null} all_services(cls) -> Sequence[type[Self]] ``` 
Get list of all service classes #### `enabled` ```python theme={null} enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python theme={null} enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python theme={null} environment_variable_name(cls) -> str ``` #### `run_services` ```python theme={null} run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python theme={null} running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python theme={null} service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python theme={null} start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python theme={null} stop(self) -> None ``` Stop the service # event_logger Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-services-event_logger # `prefect.server.events.services.event_logger` ## Classes ### `EventLogger` A debugging service that logs events to the console as they arrive. 
**Methods:**

#### `all_services`

```python theme={null}
all_services(cls) -> Sequence[type[Self]]
```

Get list of all service classes

#### `enabled`

```python theme={null}
enabled(cls) -> bool
```

Whether the service is enabled

#### `enabled_services`

```python theme={null}
enabled_services(cls) -> list[type[Self]]
```

Get list of enabled service classes

#### `environment_variable_name`

```python theme={null}
environment_variable_name(cls) -> str
```

#### `run_services`

```python theme={null}
run_services(cls) -> NoReturn
```

Run enabled services until cancelled.

#### `running`

```python theme={null}
running(cls) -> AsyncGenerator[None, None]
```

A context manager that runs enabled services on entry and stops them on exit.

#### `service_settings`

```python theme={null}
service_settings(cls) -> ServicesBaseSetting
```

The Prefect setting that controls whether the service is enabled

#### `start`

```python theme={null}
start(self) -> NoReturn
```

Start running the service, which may run indefinitely

#### `stop`

```python theme={null}
stop(self) -> None
```

Stop the service

# event_persister

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-services-event_persister

# `prefect.server.events.services.event_persister`

The event persister moves event messages from the event bus to storage as fast as it can. Never gets tired.
## Functions ### `create_handler` ```python theme={null} create_handler(batch_size: int = 20, flush_every: timedelta = timedelta(seconds=5), queue_max_size: int = 50000, max_flush_retries: int = 5) -> AsyncGenerator[MessageHandler, None] ``` Set up a message handler that will accumulate and send events to the database every `batch_size` messages, or every `flush_every` interval to flush any remaining messages. Event trimming/retention is handled by the db\_vacuum service (vacuum\_old\_events and vacuum\_events\_with\_retention\_overrides tasks). **Args:** * `batch_size`: Number of events to accumulate before flushing * `flush_every`: Maximum time between flushes * `queue_max_size`: Maximum events in queue before dropping new events * `max_flush_retries`: Consecutive flush failures before dropping events ## Classes ### `EventPersister` A service that persists events to the database as they arrive. **Methods:** #### `all_services` ```python theme={null} all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python theme={null} enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python theme={null} enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python theme={null} environment_variable_name(cls) -> str ``` #### `run_services` ```python theme={null} run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python theme={null} running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. 
#### `service_settings`

```python theme={null}
service_settings(cls) -> ServerServicesEventPersisterSettings
```

The Prefect setting that controls whether the service is enabled

#### `start`

```python theme={null}
start(self) -> NoReturn
```

Start running the service, which may run indefinitely

#### `started_event`

```python theme={null}
started_event(self) -> asyncio.Event
```

#### `started_event`

```python theme={null}
started_event(self, value: asyncio.Event) -> None
```

#### `stop`

```python theme={null}
stop(self) -> None
```

Stop the service

# triggers

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-services-triggers

# `prefect.server.events.services.triggers`

## Functions

### `evaluate_proactive_triggers_periodic`

```python theme={null}
evaluate_proactive_triggers_periodic(perpetual: Perpetual = Perpetual(automatic=True, every=get_current_settings().server.events.proactive_granularity)) -> None
```

Evaluate proactive automation triggers on a periodic schedule.

## Classes

### `ReactiveTriggers`

Evaluates reactive automation triggers

**Methods:**

#### `all_services`

```python theme={null}
all_services(cls) -> Sequence[type[Self]]
```

Get list of all service classes

#### `enabled`

```python theme={null}
enabled(cls) -> bool
```

Whether the service is enabled

#### `enabled_services`

```python theme={null}
enabled_services(cls) -> list[type[Self]]
```

Get list of enabled service classes

#### `environment_variable_name`

```python theme={null}
environment_variable_name(cls) -> str
```

#### `run_services`

```python theme={null}
run_services(cls) -> NoReturn
```

Run enabled services until cancelled.
#### `running`

```python theme={null}
running(cls) -> AsyncGenerator[None, None]
```

A context manager that runs enabled services on entry and stops them on exit.

#### `service_settings`

```python theme={null}
service_settings(cls) -> ServicesBaseSetting
```

The Prefect setting that controls whether the service is enabled

#### `start`

```python theme={null}
start(self) -> NoReturn
```

Start running the service, which may run indefinitely

#### `stop`

```python theme={null}
stop(self) -> None
```

Stop the service

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-storage-__init__

# `prefect.server.events.storage`

## Functions

### `to_page_token`

```python theme={null}
to_page_token(filter: 'EventFilter', count: int, page_size: int, current_offset: int) -> Optional[str]
```

### `from_page_token`

```python theme={null}
from_page_token(page_token: str) -> Tuple['EventFilter', int, int, int]
```

### `process_time_based_counts`

```python theme={null}
process_time_based_counts(filter: 'EventFilter', time_unit: TimeUnit, time_interval: float, counts: List[EventCount]) -> List[EventCount]
```

Common logic for processing time-based counts across different event backends.

When doing time-based counting we want to do two things:

1. Backfill any missing intervals with 0 counts.
2. Update the start/end times that are emitted to match the beginning and end of the intervals, rather than having them reflect the true max/min occurred time of the events themselves.
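The two steps above can be sketched independently of any backend. This is an illustrative helper, not the module's implementation: the dict-shaped entries stand in for `EventCount`, and the function signature is invented for the example.

```python theme={null}
from datetime import datetime, timedelta, timezone

def backfill_counts(start, end, interval, counts):
    """Emit one entry per interval between start and end, filling gaps
    with zero counts and snapping each entry's start/end to the
    interval grid (rather than the events' true occurred times)."""
    out = []
    cursor = start
    while cursor < end:
        out.append({
            "start_time": cursor,
            "end_time": cursor + interval,
            "count": counts.get(cursor, 0),
        })
        cursor += interval
    return out

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
hour = timedelta(hours=1)
sparse = {start: 3, start + 2 * hour: 1}  # no events in hour 1
filled = backfill_counts(start, start + 3 * hour, hour, sparse)
assert [e["count"] for e in filled] == [3, 0, 1]
```

Note that the empty middle interval appears explicitly with a zero count, which is what lets charting clients render a continuous series.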
## Classes

### `InvalidTokenError`

### `QueryRangeTooLarge`

# database

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-storage-database

# `prefect.server.events.storage.database`

## Functions

### `build_distinct_queries`

```python theme={null}
build_distinct_queries(db: PrefectDBInterface, events_filter: EventFilter) -> list[sa.Column['ORMEvent']]
```

### `query_events`

```python theme={null}
query_events(session: AsyncSession, filter: EventFilter, page_size: int = INTERACTIVE_PAGE_SIZE) -> tuple[list[ReceivedEvent], int, Optional[str]]
```

### `query_next_page`

```python theme={null}
query_next_page(session: AsyncSession, page_token: str) -> tuple[list[ReceivedEvent], int, Optional[str]]
```

### `count_events`

```python theme={null}
count_events(session: AsyncSession, filter: EventFilter, countable: Countable, time_unit: TimeUnit, time_interval: float) -> list[EventCount]
```

### `raw_count_events`

```python theme={null}
raw_count_events(db: PrefectDBInterface, session: AsyncSession, events_filter: EventFilter) -> int
```

Count events from the database with the given filter. Only returns the count and does not return any additional metadata. For additional metadata, use `count_events`.

**Args:**

* `session`: a database session
* `events_filter`: filter criteria for events

**Returns:**

* The count of events in the database that match the filter criteria.

### `read_events`

```python theme={null}
read_events(db: PrefectDBInterface, session: AsyncSession, events_filter: EventFilter, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence['ORMEvent']
```

Read events from the Postgres database.

**Args:**

* `session`: a Postgres events session.
* `events_filter`: filter criteria for events.
* `limit`: limit for the query.
* `offset`: offset for the query.

**Returns:**

* A list of events ORM objects.
### `write_events`

```python theme={null}
write_events(session: AsyncSession, events: list[ReceivedEvent]) -> None
```

Write events to the database.

**Args:**

* `session`: a database session
* `events`: the events to insert

### `get_max_query_parameters`

```python theme={null}
get_max_query_parameters() -> int
```

### `get_number_of_event_fields`

```python theme={null}
get_number_of_event_fields() -> int
```

### `get_number_of_resource_fields`

```python theme={null}
get_number_of_resource_fields() -> int
```

# stream

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-stream

# `prefect.server.events.stream`

## Functions

### `subscribed`

```python theme={null}
subscribed(filter: EventFilter) -> AsyncGenerator['Queue[ReceivedEvent]', None]
```

### `events`

```python theme={null}
events(filter: EventFilter) -> AsyncGenerator[AsyncIterable[Optional[ReceivedEvent]], None]
```

### `distributor`

```python theme={null}
distributor() -> AsyncGenerator[messaging.MessageHandler, None]
```

### `start_distributor`

```python theme={null}
start_distributor() -> None
```

Starts the distributor consumer as a global background task

### `stop_distributor`

```python theme={null}
stop_distributor() -> None
```

Stops the distributor consumer global background task

### `run_distributor`

```python theme={null}
run_distributor(started: asyncio.Event) -> NoReturn
```

Runs the distributor consumer forever until it is cancelled

## Classes

### `Distributor`

**Methods:**

#### `all_services`

```python theme={null}
all_services(cls) -> Sequence[type[Self]]
```

Get list of all service classes

#### `enabled`

```python theme={null}
enabled(cls) -> bool
```

Whether the service is enabled

#### `enabled_services`

```python theme={null}
enabled_services(cls) -> list[type[Self]]
```

Get list of enabled service classes

#### `environment_variable_name`

```python theme={null}
environment_variable_name(cls) -> dict[str, str]
```

#### `environment_variable_name`

```python theme={null}
environment_variable_name(cls) -> str
```

#### `run_services`

```python theme={null}
run_services(cls) -> NoReturn
```

Run enabled services until cancelled.

#### `running`

```python theme={null}
running(cls) -> AsyncGenerator[None, None]
```

A context manager that runs enabled services on entry and stops them on exit.

#### `service_settings`

```python theme={null}
service_settings(cls) -> ServicesBaseSetting
```

The Prefect setting that controls whether the service is enabled

#### `start`

```python theme={null}
start(self) -> None
```

#### `start`

```python theme={null}
start(self) -> NoReturn
```

Start running the service, which may run indefinitely

#### `stop`

```python theme={null}
stop(self) -> None
```

Stop the service

# triggers

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-events-triggers

# `prefect.server.events.triggers`

The triggers consumer watches events streaming in from the event bus and decides whether to act on them based on the automations that users have set up.

## Functions

### `evaluate`

```python theme={null}
evaluate(session: AsyncSession, trigger: EventTrigger, bucket: 'ORMAutomationBucket', now: prefect.types._datetime.DateTime, triggering_event: Optional[ReceivedEvent]) -> 'ORMAutomationBucket | None'
```

Evaluates an Automation, either triggered by a specific event or proactively on a time interval. Evaluating an Automation updates the associated counters for each automation, and will fire the associated action if it has met the threshold.
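As an illustration of the counter-and-threshold pattern described for `evaluate` — accumulate matching events in a bucket and fire once the count meets the trigger's threshold — here is a minimal sketch. It is not Prefect's implementation; the `Bucket` class and its `record` method are invented for the example.

```python theme={null}
from dataclasses import dataclass

@dataclass
class Bucket:
    """A counter for one automation trigger's bucketing key."""
    threshold: int
    count: int = 0

    def record(self) -> bool:
        """Count one matching event; return True once the threshold is met."""
        self.count += 1
        return self.count >= self.threshold

bucket = Bucket(threshold=3)
fired = [bucket.record() for _ in range(3)]
assert fired == [False, False, True]  # fires on the third matching event
```

In the real service the bucket is persisted (`ORMAutomationBucket`), so counting survives restarts and is shared across evaluations.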
### `fire`

```python theme={null}
fire(session: AsyncSession, firing: Firing) -> None
```

### `evaluate_composite_trigger`

```python theme={null}
evaluate_composite_trigger(session: AsyncSession, firing: Firing) -> None
```

### `act`

```python theme={null}
act(firing: Firing) -> None
```

Given an Automation that has been triggered, the triggering labels and event (if there was one), publish an action for the `actions` service to process.

### `update_events_clock`

```python theme={null}
update_events_clock(event: ReceivedEvent) -> None
```

### `get_events_clock`

```python theme={null}
get_events_clock() -> Optional[float]
```

### `get_events_clock_offset`

```python theme={null}
get_events_clock_offset() -> float
```

Calculate the current clock offset. This takes into account both the `occurred` of the last event, as well as the time we *saw* the last event. This helps to ensure that in low volume environments, we don't end up getting huge offsets.

### `reset_events_clock`

```python theme={null}
reset_events_clock() -> None
```

### `reactive_evaluation`

```python theme={null}
reactive_evaluation(event: ReceivedEvent, depth: int = 0) -> None
```

Evaluate all automations that may apply to this event.

**Args:**

* `event`: The event to evaluate. This object contains all the necessary information about the event, including its type, associated resources, and metadata.
* `depth`: The current recursion depth. This is used to prevent infinite recursion due to cyclic event dependencies. Defaults to 0 and is incremented with each recursive call.
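One way to realize the idea described for `get_events_clock_offset` — credit the time elapsed since we last *saw* an event, so that low-volume environments don't accumulate huge offsets — is sketched below. The class and method names are illustrative, not the module's internals.

```python theme={null}
import time

class EventsClock:
    """Tracks the occurred-time of the most recent event and when we saw it."""

    def __init__(self) -> None:
        self.last_occurred: float | None = None  # event's own `occurred` (epoch seconds)
        self.last_seen: float | None = None      # monotonic time when we received it

    def update(self, occurred: float) -> None:
        self.last_occurred = occurred
        self.last_seen = time.monotonic()

    def offset(self) -> float:
        """How far behind wall-clock the event stream appears to be.
        Subtracting the time elapsed since we saw the last event keeps
        the offset from growing just because events are infrequent."""
        if self.last_occurred is None or self.last_seen is None:
            return 0.0
        elapsed_since_seen = time.monotonic() - self.last_seen
        return (time.time() - self.last_occurred) - elapsed_since_seen

clock = EventsClock()
clock.update(time.time() - 10)  # an event that occurred ~10s ago, seen just now
```

Immediately after `update`, `offset()` reports roughly the event's age at arrival; as quiet seconds pass, the elapsed-since-seen term cancels further growth.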
### `get_lost_followers`

```python theme={null}
get_lost_followers() -> List[ReceivedEvent]
```

Get followers that have been sitting around longer than our lookback

### `periodic_evaluation`

```python theme={null}
periodic_evaluation(now: prefect.types._datetime.DateTime) -> None
```

Periodic tasks that should be run regularly, but not as often as every event

### `evaluate_periodically`

```python theme={null}
evaluate_periodically(periodic_granularity: timedelta) -> None
```

Runs periodic evaluation on the given interval

### `find_interested_triggers`

```python theme={null}
find_interested_triggers(event: ReceivedEvent) -> Collection[EventTrigger]
```

### `clear_loaded_automations`

```python theme={null}
clear_loaded_automations() -> None
```

### `load_automation`

```python theme={null}
load_automation(automation: Optional[Automation]) -> None
```

Loads the given automation into memory so that it is available for evaluations

### `forget_automation`

```python theme={null}
forget_automation(automation_id: UUID) -> None
```

Unloads the given automation from memory

### `automation_changed`

```python theme={null}
automation_changed(automation_id: UUID, event: Literal['automation__created', 'automation__updated', 'automation__deleted']) -> None
```

### `load_automations`

```python theme={null}
load_automations(db: PrefectDBInterface, session: AsyncSession)
```

Loads all automations for the given set of accounts

### `read_automation_state_snapshot`

```python theme={null}
read_automation_state_snapshot(db: PrefectDBInterface, session: AsyncSession) -> AutomationStateSnapshot
```

### `reconcile_automations`

```python theme={null}
reconcile_automations(force: bool = False) -> bool
```

### `remove_buckets_exceeding_threshold`

```python theme={null}
remove_buckets_exceeding_threshold(db: PrefectDBInterface, session: AsyncSession, trigger: EventTrigger)
```

Deletes buckets where the count has already exceeded the threshold

### `read_buckets_for_automation`

```python theme={null}
read_buckets_for_automation(db: PrefectDBInterface, session: AsyncSession, trigger: Trigger, batch_size: int = AUTOMATION_BUCKET_BATCH_SIZE) -> AsyncGenerator['ORMAutomationBucket', None] ``` Yields buckets for the given automation and trigger in batches. ### `read_bucket` ```python theme={null} read_bucket(db: PrefectDBInterface, session: AsyncSession, trigger: Trigger, bucketing_key: Tuple[str, ...]) -> Optional['ORMAutomationBucket'] ``` Gets the bucket this event would fall into for the given Automation, if there is one currently ### `read_bucket_by_trigger_id` ```python theme={null} read_bucket_by_trigger_id(db: PrefectDBInterface, session: AsyncSession, automation_id: UUID, trigger_id: UUID, bucketing_key: Tuple[str, ...]) -> 'ORMAutomationBucket | None' ``` Gets the bucket this event would fall into for the given Automation, if there is one currently ### `increment_bucket` ```python theme={null} increment_bucket(db: PrefectDBInterface, session: AsyncSession, bucket: 'ORMAutomationBucket', count: int, last_event: Optional[ReceivedEvent]) -> 'ORMAutomationBucket' ``` Adds the given count to the bucket, returning the new bucket ### `start_new_bucket` ```python theme={null} start_new_bucket(db: PrefectDBInterface, session: AsyncSession, trigger: EventTrigger, bucketing_key: Tuple[str, ...], start: prefect.types._datetime.DateTime, end: prefect.types._datetime.DateTime, count: int, triggered_at: Optional[prefect.types._datetime.DateTime] = None, last_event: Optional[ReceivedEvent] = None) -> 'ORMAutomationBucket' ``` Ensures that a bucket with the given start and end exists with the given count, returning the new bucket ### `ensure_bucket` ```python theme={null} ensure_bucket(db: PrefectDBInterface, session: AsyncSession, trigger: EventTrigger, bucketing_key: Tuple[str, ...], start: prefect.types._datetime.DateTime, end: prefect.types._datetime.DateTime, last_event: Optional[ReceivedEvent], initial_count: int = 0) -> 'ORMAutomationBucket' ``` Ensures that a 
bucket has been started for the given automation and key, returning the current bucket. Will not modify the existing bucket. ### `remove_bucket` ```python theme={null} remove_bucket(db: PrefectDBInterface, session: AsyncSession, bucket: 'ORMAutomationBucket') ``` Removes the given bucket from the database ### `sweep_closed_buckets` ```python theme={null} sweep_closed_buckets(db: PrefectDBInterface, session: AsyncSession, older_than: prefect.types._datetime.DateTime) -> None ``` ### `reset` ```python theme={null} reset() -> None ``` Resets the in-memory state of the service ### `listen_for_automation_changes` ```python theme={null} listen_for_automation_changes() -> None ``` Listens for any changes to automations via PostgreSQL NOTIFY/LISTEN, and applies those changes to the set of loaded automations. ### `consumer` ```python theme={null} consumer(periodic_granularity: timedelta = timedelta(seconds=5)) -> AsyncGenerator[MessageHandler, None] ``` The `triggers.consumer` processes all Events arriving on the event bus to determine if they meet the automation criteria, queuing up a corresponding `TriggeredAction` for the `actions` service if the automation criteria is met. ### `proactive_evaluation` ```python theme={null} proactive_evaluation(trigger: EventTrigger, as_of: prefect.types._datetime.DateTime) -> prefect.types._datetime.DateTime ``` The core proactive evaluation operation for a single Automation ### `evaluate_proactive_triggers` ```python theme={null} evaluate_proactive_triggers() -> None ``` # exceptions Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-exceptions # `prefect.server.exceptions` ## Classes ### `ObjectNotFoundError` Error raised by the Prefect REST API when a requested object is not found. If thrown during a request, this exception will be caught and a 404 response will be returned. 
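The behavior described for `ObjectNotFoundError` — an exception raised during a request is caught and translated into a 404 response — follows the standard exception-to-status-code pattern. This is a generic sketch with invented names, not Prefect's actual handler:

```python theme={null}
class ObjectNotFoundError(Exception):
    """Stand-in for prefect.server.exceptions.ObjectNotFoundError."""

def handle_request(lookup, object_id):
    """Run a lookup and translate a missing object into a 404 response."""
    try:
        return 200, lookup(object_id)
    except ObjectNotFoundError:
        return 404, {"detail": "Object not found"}

# A toy object store and lookup for the example.
store = {"abc": {"name": "my-flow"}}

def lookup(object_id):
    try:
        return store[object_id]
    except KeyError:
        raise ObjectNotFoundError(object_id)

assert handle_request(lookup, "abc") == (200, {"name": "my-flow"})
assert handle_request(lookup, "missing")[0] == 404
```

Centralizing the translation this way lets model-layer code raise one exception type everywhere, while the API layer decides how it maps onto HTTP.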
### `OrchestrationError`

An error raised while orchestrating a state transition

### `MissingVariableError`

An error raised by the Prefect REST API when attempting to create or update a deployment with missing required variables.

### `FlowRunGraphTooLarge`

Raised to indicate that a flow run's graph has more nodes than the configured maximum

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-logs-__init__

# `prefect.server.logs`

*This module is empty or contains only private/internal implementations.*

# messaging

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-logs-messaging

# `prefect.server.logs.messaging`

Log messaging for streaming logs through the messaging system.

## Functions

### `create_log_publisher`

```python theme={null}
create_log_publisher() -> AsyncGenerator[messaging.Publisher, None]
```

Creates a publisher for sending logs to the messaging system.

**Returns:**

* A messaging publisher configured for the "logs" topic

### `publish_logs`

```python theme={null}
publish_logs(logs: list[Log]) -> None
```

Publishes logs to the messaging system.

**Args:**

* `logs`: The logs to publish

# stream

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-logs-stream

# `prefect.server.logs.stream`

Log streaming for live log distribution via websockets.

## Functions

### `subscribed`

```python theme={null}
subscribed(filter: LogFilter) -> AsyncGenerator['Queue[Log]', None]
```

Subscribe to a stream of logs matching the given filter.

**Args:**

* `filter`: The log filter to apply

### `logs`

```python theme={null}
logs(filter: LogFilter) -> AsyncGenerator[AsyncIterable[Log | None], None]
```

Create a stream of logs matching the given filter.

**Args:**

* `filter`: The log filter to apply

### `log_matches_filter`

```python theme={null}
log_matches_filter(log: Log, filter: LogFilter) -> bool
```

Check if a log matches the given filter criteria.
**Args:**

* `log`: The log to check
* `filter`: The filter to apply

**Returns:**

* True if the log matches the filter, False otherwise

### `distributor`

```python theme={null}
distributor() -> AsyncGenerator[messaging.MessageHandler, None]
```

Create a message handler that distributes logs to subscribed clients.

### `start_distributor`

```python theme={null}
start_distributor() -> None
```

Starts the distributor consumer as a global background task

### `stop_distributor`

```python theme={null}
stop_distributor() -> None
```

Stops the distributor consumer global background task

### `run_distributor`

```python theme={null}
run_distributor(started: asyncio.Event) -> NoReturn
```

Runs the distributor consumer forever until it is cancelled

## Classes

### `LogDistributor`

Service for distributing logs to websocket subscribers

**Methods:**

#### `all_services`

```python theme={null}
all_services(cls) -> Sequence[type[Self]]
```

Get list of all service classes

#### `enabled`

```python theme={null}
enabled(cls) -> bool
```

Whether the service is enabled

#### `enabled_services`

```python theme={null}
enabled_services(cls) -> list[type[Self]]
```

Get list of enabled service classes

#### `environment_variable_name`

```python theme={null}
environment_variable_name(cls) -> str
```

#### `run_services`

```python theme={null}
run_services(cls) -> NoReturn
```

Run enabled services until cancelled.

#### `running`

```python theme={null}
running(cls) -> AsyncGenerator[None, None]
```

A context manager that runs enabled services on entry and stops them on exit.
#### `service_settings`

```python theme={null}
service_settings(cls) -> ServicesBaseSetting
```

The Prefect setting that controls whether the service is enabled

#### `start`

```python theme={null}
start(self) -> NoReturn
```

Start running the service, which may run indefinitely

#### `stop`

```python theme={null}
stop(self) -> None
```

Stop the service

# __init__

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-__init__

# `prefect.server.models`

*This module is empty or contains only private/internal implementations.*

# artifacts

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-artifacts

# `prefect.server.models.artifacts`

## Functions

### `create_artifact`

```python theme={null}
create_artifact(session: AsyncSession, artifact: Artifact) -> orm_models.Artifact
```

### `read_latest_artifact`

```python theme={null}
read_latest_artifact(db: PrefectDBInterface, session: AsyncSession, key: str) -> Union[orm_models.ArtifactCollection, None]
```

Reads the latest artifact by key.

**Args:**

* `session`: A database session
* `key`: The artifact key

**Returns:**

* The latest artifact

### `read_artifact`

```python theme={null}
read_artifact(db: PrefectDBInterface, session: AsyncSession, artifact_id: UUID) -> Union[orm_models.Artifact, None]
```

Reads an artifact by id.
### `read_artifacts`

```python theme={null}
read_artifacts(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, artifact_filter: Optional[filters.ArtifactFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None, sort: sorting.ArtifactSort = sorting.ArtifactSort.ID_DESC) -> Sequence[orm_models.Artifact]
```

Reads artifacts.

**Args:**

* `session`: A database session
* `offset`: Query offset
* `limit`: Query limit
* `artifact_filter`: Only select artifacts matching this filter
* `flow_run_filter`: Only select artifacts whose flow runs match this filter
* `task_run_filter`: Only select artifacts whose task runs match this filter
* `deployment_filter`: Only select artifacts whose flow runs belong to deployments matching this filter
* `flow_filter`: Only select artifacts whose flow runs belong to flows matching this filter

### `read_latest_artifacts`

```python theme={null}
read_latest_artifacts(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, artifact_filter: Optional[filters.ArtifactCollectionFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None, sort: sorting.ArtifactCollectionSort = sorting.ArtifactCollectionSort.ID_DESC) -> Sequence[orm_models.ArtifactCollection]
```

Reads the latest artifacts.
**Args:**

* `session`: A database session
* `offset`: Query offset
* `limit`: Query limit
* `artifact_filter`: Only select artifacts matching this filter
* `flow_run_filter`: Only select artifacts whose flow runs match this filter
* `task_run_filter`: Only select artifacts whose task runs match this filter
* `deployment_filter`: Only select artifacts whose flow runs belong to deployments matching this filter
* `flow_filter`: Only select artifacts whose flow runs belong to flows matching this filter

### `count_artifacts`

```python theme={null}
count_artifacts(db: PrefectDBInterface, session: AsyncSession, artifact_filter: Optional[filters.ArtifactFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None) -> int
```

Counts artifacts.

**Args:**

* `session`: A database session
* `artifact_filter`: Only select artifacts matching this filter
* `flow_run_filter`: Only select artifacts whose flow runs match this filter
* `task_run_filter`: Only select artifacts whose task runs match this filter

### `count_latest_artifacts`

```python theme={null}
count_latest_artifacts(db: PrefectDBInterface, session: AsyncSession, artifact_filter: Optional[filters.ArtifactCollectionFilter] = None, flow_run_filter: Optional[filters.FlowRunFilter] = None, task_run_filter: Optional[filters.TaskRunFilter] = None, deployment_filter: Optional[filters.DeploymentFilter] = None, flow_filter: Optional[filters.FlowFilter] = None) -> int
```

Counts the latest artifacts.
**Args:**

* `session`: A database session
* `artifact_filter`: Only select artifacts matching this filter
* `flow_run_filter`: Only select artifacts whose flow runs match this filter
* `task_run_filter`: Only select artifacts whose task runs match this filter

### `update_artifact`

```python theme={null}
update_artifact(db: PrefectDBInterface, session: AsyncSession, artifact_id: UUID, artifact: actions.ArtifactUpdate) -> bool
```

Updates an artifact by id.

**Args:**

* `session`: A database session
* `artifact_id`: The artifact id to update
* `artifact`: An artifact model

**Returns:**

* True if the update was successful, False otherwise

### `delete_artifact`

```python theme={null}
delete_artifact(db: PrefectDBInterface, session: AsyncSession, artifact_id: UUID) -> bool
```

Deletes an artifact by id.

The ArtifactCollection table is used to track the latest version of an artifact by key. If we are deleting the latest version of an artifact from the Artifact table, we need to first update the latest version referenced in ArtifactCollection so that it points to the next latest version of the artifact.

Example: If we have the following artifacts in Artifact:

* key: "foo", id: 1, created: 2020-01-01
* key: "foo", id: 2, created: 2020-01-02
* key: "foo", id: 3, created: 2020-01-03

the ArtifactCollection table has the following entry:

* key: "foo", latest_id: 3

If we delete the artifact with id 3, we need to update the latest version of the artifact with key "foo" to be the artifact with id 2.

**Args:**

* `session`: A database session
* `artifact_id`: The artifact id to delete

**Returns:**

* True if the delete was successful, False otherwise

# block_documents

Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-block_documents

# `prefect.server.models.block_documents`

Functions for interacting with block document ORM objects. Intended for internal use by the Prefect REST API.
## Functions ### `create_block_document` ```python theme={null} create_block_document(db: PrefectDBInterface, session: AsyncSession, block_document: schemas.actions.BlockDocumentCreate) -> BlockDocument ``` ### `block_document_with_unique_values_exists` ```python theme={null} block_document_with_unique_values_exists(db: PrefectDBInterface, session: AsyncSession, block_type_id: UUID, name: str) -> bool ``` ### `read_block_document_by_id` ```python theme={null} read_block_document_by_id(session: AsyncSession, block_document_id: UUID, include_secrets: bool = False) -> Union[BlockDocument, None] ``` ### `read_block_document_by_name` ```python theme={null} read_block_document_by_name(session: AsyncSession, name: str, block_type_slug: str, include_secrets: bool = False) -> Union[BlockDocument, None] ``` Read a block document with the given name and block type slug. ### `read_block_documents` ```python theme={null} read_block_documents(db: PrefectDBInterface, session: AsyncSession, block_document_filter: Optional[schemas.filters.BlockDocumentFilter] = None, block_type_filter: Optional[schemas.filters.BlockTypeFilter] = None, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None, include_secrets: bool = False, sort: schemas.sorting.BlockDocumentSort = schemas.sorting.BlockDocumentSort.NAME_ASC, offset: Optional[int] = None, limit: Optional[int] = None) -> List[BlockDocument] ``` Read block documents with an optional limit and offset ### `count_block_documents` ```python theme={null} count_block_documents(db: PrefectDBInterface, session: AsyncSession, block_document_filter: Optional[schemas.filters.BlockDocumentFilter] = None, block_type_filter: Optional[schemas.filters.BlockTypeFilter] = None, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None) -> int ``` Count block documents that match the filters. 
### `delete_block_document` ```python theme={null} delete_block_document(db: PrefectDBInterface, session: AsyncSession, block_document_id: UUID) -> bool ``` ### `update_block_document` ```python theme={null} update_block_document(db: PrefectDBInterface, session: AsyncSession, block_document_id: UUID, block_document: schemas.actions.BlockDocumentUpdate) -> bool ``` ### `create_block_document_reference` ```python theme={null} create_block_document_reference(db: PrefectDBInterface, session: AsyncSession, block_document_reference: schemas.actions.BlockDocumentReferenceCreate) -> Union[orm_models.BlockDocumentReference, None] ``` ### `delete_block_document_reference` ```python theme={null} delete_block_document_reference(db: PrefectDBInterface, session: AsyncSession, block_document_reference_id: UUID) -> bool ``` # block_registration Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-block_registration # `prefect.server.models.block_registration` ## Functions ### `register_block_schema` ```python theme={null} register_block_schema(session: AsyncSession, block_schema: Union[schemas.core.BlockSchema, 'ClientBlockSchema']) -> UUID ``` Stores the provided block schema in the Prefect REST API database. If a block schema with a matching checksum and version is already saved, then the ID of the existing block schema will be returned. **Args:** * `session`: A database session. * `block_schema`: A block schema object. **Returns:** * The ID of the registered block schema. ### `register_block_type` ```python theme={null} register_block_type(session: AsyncSession, block_type: Union[schemas.core.BlockType, 'ClientBlockType']) -> UUID ``` Stores the provided block type in the Prefect REST API database. If a block type with a matching slug is already saved, then the block type will be updated to match the passed in block type. **Args:** * `session`: A database session. * `block_type`: A block type object. **Returns:** * The ID of the registered block type. 
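The idempotent semantics described above — `register_block_schema` returns the existing ID on a checksum/version match, while `register_block_type` updates in place on a slug match — can be sketched with in-memory stand-ins for the database tables (illustrative only, not the real ORM logic):

```python
# In-memory stand-ins for the block schema and block type tables.
schemas_by_key = {}   # (checksum, version) -> schema id
types_by_slug = {}    # slug -> (type id, fields dict)

def register_block_schema(checksum, version):
    """Return the existing ID when checksum/version already match, else create."""
    key = (checksum, version)
    if key not in schemas_by_key:
        schemas_by_key[key] = f"schema-{len(schemas_by_key) + 1}"
    return schemas_by_key[key]

def register_block_type(slug, **fields):
    """Create the type for a new slug; update it in place for a known slug."""
    if slug in types_by_slug:
        type_id, existing = types_by_slug[slug]
        existing.update(fields)
        return type_id
    type_id = f"type-{len(types_by_slug) + 1}"
    types_by_slug[slug] = (type_id, dict(fields))
    return type_id

first_schema = register_block_schema("abc123", "1.0")
assert register_block_schema("abc123", "1.0") == first_schema  # same ID back
first_type = register_block_type("my-block", description="v1")
assert register_block_type("my-block", description="v2") == first_type
assert types_by_slug["my-block"][1]["description"] == "v2"  # updated in place
```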
### `run_block_auto_registration` ```python theme={null} run_block_auto_registration(session: AsyncSession) -> None ``` Registers all blocks in the client block registry and any blocks from Prefect Collections that are configured for auto-registration. **Args:** * `session`: A database session. # block_schemas Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-block_schemas # `prefect.server.models.block_schemas` Functions for interacting with block schema ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_block_schema` ```python theme={null} create_block_schema(db: PrefectDBInterface, session: AsyncSession, block_schema: Union[schemas.actions.BlockSchemaCreate, schemas.core.BlockSchema, 'ClientBlockSchemaCreate', 'ClientBlockSchema'], override: bool = False, definitions: Optional[dict[str, Any]] = None) -> Union[BlockSchema, orm_models.BlockSchema] ``` Create a new block schema. **Args:** * `session`: A database session * `block_schema`: a block schema object * `definitions`: Definitions of fields from block schema fields attribute. Used when recursively creating nested block schemas **Returns:** * an ORM block schema model ### `delete_block_schema` ```python theme={null} delete_block_schema(db: PrefectDBInterface, session: AsyncSession, block_schema_id: UUID) -> bool ``` Delete a block schema by id. **Args:** * `session`: A database session * `block_schema_id`: a block schema id **Returns:** * whether or not the block schema was deleted ### `read_block_schema` ```python theme={null} read_block_schema(db: PrefectDBInterface, session: AsyncSession, block_schema_id: UUID) -> Union[BlockSchema, None] ``` Reads a block schema by id. Will reconstruct the block schema's fields attribute to include block schema references. 
**Args:** * `session`: A database session * `block_schema_id`: a block\_schema id **Returns:** * orm\_models.BlockSchema: the block\_schema ### `read_block_schemas` ```python theme={null} read_block_schemas(db: PrefectDBInterface, session: AsyncSession, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> List[BlockSchema] ``` Reads block schemas, optionally filtered by type or name. **Args:** * `session`: A database session * `block_schema_filter`: a block schema filter object * `limit`: query limit * `offset`: query offset **Returns:** * List\[orm\_models.BlockSchema]: the block\_schemas ### `read_block_schema_by_checksum` ```python theme={null} read_block_schema_by_checksum(db: PrefectDBInterface, session: AsyncSession, checksum: str, version: Optional[str] = None) -> Optional[BlockSchema] ``` Reads a block\_schema by checksum. Will reconstruct the block schema's fields attribute to include block schema references. **Args:** * `session`: A database session * `checksum`: a block\_schema checksum * `version`: A block\_schema version **Returns:** * orm\_models.BlockSchema: the block\_schema ### `read_available_block_capabilities` ```python theme={null} read_available_block_capabilities(db: PrefectDBInterface, session: AsyncSession) -> List[str] ``` Retrieves a list of all available block capabilities. **Args:** * `session`: A database session. **Returns:** * List\[str]: List of all available block capabilities. ### `create_block_schema_reference` ```python theme={null} create_block_schema_reference(db: PrefectDBInterface, session: AsyncSession, block_schema_reference: schemas.core.BlockSchemaReference) -> Union[orm_models.BlockSchemaReference, None] ``` Creates a block schema reference. **Args:** * `session`: A database session. * `block_schema_reference`: A block schema reference object.
**Returns:** * orm\_models.BlockSchemaReference: The created BlockSchemaReference ## Classes ### `MissingBlockTypeException` Raised when the block type corresponding to a block schema cannot be found # block_types Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-block_types # `prefect.server.models.block_types` Functions for interacting with block type ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_block_type` ```python theme={null} create_block_type(db: PrefectDBInterface, session: AsyncSession, block_type: Union[schemas.core.BlockType, 'ClientBlockType'], override: bool = False) -> Union[BlockType, None] ``` Create a new block type. **Args:** * `session`: A database session * `block_type`: a block type object **Returns:** * an ORM block type model ### `read_block_type` ```python theme={null} read_block_type(db: PrefectDBInterface, session: AsyncSession, block_type_id: UUID) -> Union[BlockType, None] ``` Reads a block type by id. **Args:** * `session`: A database session * `block_type_id`: a block\_type id **Returns:** * an ORM block type model ### `read_block_type_by_slug` ```python theme={null} read_block_type_by_slug(db: PrefectDBInterface, session: AsyncSession, block_type_slug: str) -> Union[BlockType, None] ``` Reads a block type by slug. 
**Args:** * `session`: A database session * `block_type_slug`: a block type slug **Returns:** * an ORM block type model ### `read_block_types` ```python theme={null} read_block_types(db: PrefectDBInterface, session: AsyncSession, block_type_filter: Optional[schemas.filters.BlockTypeFilter] = None, block_schema_filter: Optional[schemas.filters.BlockSchemaFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence[BlockType] ``` Reads block types with an optional limit and offset **Returns:** * List\[BlockType]: List of block types ### `update_block_type` ```python theme={null} update_block_type(db: PrefectDBInterface, session: AsyncSession, block_type_id: Union[str, UUID], block_type: Union[schemas.actions.BlockTypeUpdate, schemas.core.BlockType, 'ClientBlockTypeUpdate', 'ClientBlockType']) -> bool ``` Update a block type by id. **Args:** * `session`: A database session * `block_type_id`: A block type id * `block_type`: Data to update the block type with **Returns:** * True if the block type was updated ### `delete_block_type` ```python theme={null} delete_block_type(db: PrefectDBInterface, session: AsyncSession, block_type_id: str) -> bool ``` Delete a block type by id. **Args:** * `session`: A database session * `block_type_id`: A block type id **Returns:** * True if the block type was deleted # concurrency_limits Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-concurrency_limits # `prefect.server.models.concurrency_limits` Functions for interacting with concurrency limit ORM objects. Intended for internal use by the Prefect REST API.
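Conceptually, a tag-based concurrency limit caps how many task runs with a given tag may hold slots at once. An illustrative in-memory sketch of that idea (not the ORM-backed implementation these functions operate on):

```python
class TagConcurrencyLimit:
    """Cap how many task runs holding a given tag may occupy slots at once."""

    def __init__(self, tag, limit):
        self.tag, self.limit = tag, limit
        self.active = set()  # task run ids currently holding a slot

    def try_acquire(self, task_run_id):
        """Take a slot if one is free; return whether the slot was granted."""
        if len(self.active) >= self.limit:
            return False
        self.active.add(task_run_id)
        return True

    def release(self, task_run_id):
        self.active.discard(task_run_id)

db_limit = TagConcurrencyLimit("database", limit=2)
assert db_limit.try_acquire("run-1") and db_limit.try_acquire("run-2")
assert not db_limit.try_acquire("run-3")  # limit of 2 reached
db_limit.release("run-1")
assert db_limit.try_acquire("run-3")      # slot freed, acquisition succeeds
```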
## Functions ### `create_concurrency_limit` ```python theme={null} create_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit: schemas.core.ConcurrencyLimit) -> orm_models.ConcurrencyLimit ``` ### `read_concurrency_limit` ```python theme={null} read_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: UUID) -> Union[orm_models.ConcurrencyLimit, None] ``` Reads a concurrency limit by id. If used for orchestration, simultaneous read race conditions might allow the concurrency limit to be temporarily exceeded. ### `read_concurrency_limit_by_tag` ```python theme={null} read_concurrency_limit_by_tag(db: PrefectDBInterface, session: AsyncSession, tag: str) -> Union[orm_models.ConcurrencyLimit, None] ``` Reads a concurrency limit by tag. If used for orchestration, simultaneous read race conditions might allow the concurrency limit to be temporarily exceeded. ### `reset_concurrency_limit_by_tag` ```python theme={null} reset_concurrency_limit_by_tag(db: PrefectDBInterface, session: AsyncSession, tag: str, slot_override: Optional[List[UUID]] = None) -> Union[orm_models.ConcurrencyLimit, None] ``` Resets a concurrency limit by tag. ### `filter_concurrency_limits_for_orchestration` ```python theme={null} filter_concurrency_limits_for_orchestration(db: PrefectDBInterface, session: AsyncSession, tags: List[str]) -> Sequence[orm_models.ConcurrencyLimit] ``` Filters concurrency limits by tag. This will apply a "select for update" lock on these rows to prevent simultaneous read race conditions from allowing the concurrency limit on these tags to be temporarily exceeded.
### `delete_concurrency_limit` ```python theme={null} delete_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: UUID) -> bool ``` ### `delete_concurrency_limit_by_tag` ```python theme={null} delete_concurrency_limit_by_tag(db: PrefectDBInterface, session: AsyncSession, tag: str) -> bool ``` ### `read_concurrency_limits` ```python theme={null} read_concurrency_limits(db: PrefectDBInterface, session: AsyncSession, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence[orm_models.ConcurrencyLimit] ``` Reads concurrency limits. If used for orchestration, simultaneous read race conditions might allow the concurrency limit to be temporarily exceeded. **Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.ConcurrencyLimit]: concurrency limits # concurrency_limits_v2 Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-concurrency_limits_v2 # `prefect.server.models.concurrency_limits_v2` ## Functions ### `active_slots_after_decay` ```python theme={null} active_slots_after_decay(db: PrefectDBInterface) -> ColumnElement[float] ``` ### `denied_slots_after_decay` ```python theme={null} denied_slots_after_decay(db: PrefectDBInterface) -> ColumnElement[float] ``` Calculate denied\_slots after applying decay. Denied slots decay at a rate of `slot_decay_per_second` per second if it's greater than 0 (rate limits), otherwise for concurrency limits it decays at a rate based on clamped `avg_slot_occupancy_seconds`. The clamping matches the retry-after calculation to prevent denied\_slots from accumulating when clients retry faster than the unclamped decay rate.
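The decay rule described above can be sketched in plain Python (illustrative only — the real implementation is a SQL column expression, and the clamp floor below is a made-up constant, not the actual value):

```python
MIN_OCCUPANCY_SECONDS = 1.0  # hypothetical clamp floor, not the real constant

def decayed_denied_slots(denied, elapsed_seconds, slot_decay_per_second,
                         avg_slot_occupancy_seconds):
    """Apply decay to a denied-slots count after `elapsed_seconds`."""
    if slot_decay_per_second > 0:
        # Rate limits decay at the configured fixed rate.
        rate = slot_decay_per_second
    else:
        # Concurrency limits decay based on clamped average slot occupancy.
        rate = 1.0 / max(avg_slot_occupancy_seconds, MIN_OCCUPANCY_SECONDS)
    return max(denied - rate * elapsed_seconds, 0.0)

# Rate limit: 2 slots/second decay over 2 seconds takes 10 denied slots to 6.
assert decayed_denied_slots(10, 2, slot_decay_per_second=2.0,
                            avg_slot_occupancy_seconds=0.0) == 6.0
# Concurrency limit: rate = 1 / max(4.0, 1.0) = 0.25, so 10 - 0.25 * 8 = 8.
assert decayed_denied_slots(10, 8, slot_decay_per_second=0.0,
                            avg_slot_occupancy_seconds=4.0) == 8.0
```

The clamp is what keeps the decay rate from exploding when `avg_slot_occupancy_seconds` is very small, matching the accumulation concern described above.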
### `create_concurrency_limit` ```python theme={null} create_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit: Union[schemas.actions.ConcurrencyLimitV2Create, schemas.core.ConcurrencyLimitV2]) -> orm_models.ConcurrencyLimitV2 ``` ### `read_concurrency_limit` ```python theme={null} read_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: Optional[UUID] = None, name: Optional[str] = None) -> Union[orm_models.ConcurrencyLimitV2, None] ``` ### `read_all_concurrency_limits` ```python theme={null} read_all_concurrency_limits(db: PrefectDBInterface, session: AsyncSession, limit: int, offset: int) -> Sequence[orm_models.ConcurrencyLimitV2] ``` ### `update_concurrency_limit` ```python theme={null} update_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit: schemas.actions.ConcurrencyLimitV2Update, concurrency_limit_id: Optional[UUID] = None, name: Optional[str] = None) -> bool ``` ### `delete_concurrency_limit` ```python theme={null} delete_concurrency_limit(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_id: Optional[UUID] = None, name: Optional[str] = None) -> bool ``` ### `bulk_read_concurrency_limits` ```python theme={null} bulk_read_concurrency_limits(db: PrefectDBInterface, session: AsyncSession, names: List[str]) -> List[orm_models.ConcurrencyLimitV2] ``` ### `bulk_increment_active_slots` ```python theme={null} bulk_increment_active_slots(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_ids: List[UUID], slots: int) -> bool ``` ### `bulk_decrement_active_slots` ```python theme={null} bulk_decrement_active_slots(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_ids: List[UUID], slots: int, occupancy_seconds: Optional[float] = None) -> bool ``` ### `bulk_update_denied_slots` ```python theme={null} bulk_update_denied_slots(db: PrefectDBInterface, session: AsyncSession, concurrency_limit_ids: List[UUID], slots: int) -> bool 
``` # configuration Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-configuration # `prefect.server.models.configuration` ## Functions ### `write_configuration` ```python theme={null} write_configuration(db: PrefectDBInterface, session: AsyncSession, configuration: schemas.core.Configuration) -> orm_models.Configuration ``` ### `read_configuration` ```python theme={null} read_configuration(db: PrefectDBInterface, session: AsyncSession, key: str) -> Optional[schemas.core.Configuration] ``` # csrf_token Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-csrf_token # `prefect.server.models.csrf_token` ## Functions ### `create_or_update_csrf_token` ```python theme={null} create_or_update_csrf_token(db: PrefectDBInterface, session: AsyncSession, client: str) -> core.CsrfToken ``` Create or update a CSRF token for a client. If the client already has a token, it will be updated. **Args:** * `session`: The database session * `client`: The client identifier **Returns:** * core.CsrfToken: The CSRF token ### `read_token_for_client` ```python theme={null} read_token_for_client(db: PrefectDBInterface, session: AsyncSession, client: str) -> Optional[core.CsrfToken] ``` Read a CSRF token for a client. **Args:** * `session`: The database session * `client`: The client identifier **Returns:** * Optional\[core.CsrfToken]: The CSRF token, if it exists and is not expired. ### `delete_expired_tokens` ```python theme={null} delete_expired_tokens(db: PrefectDBInterface, session: AsyncSession) -> int ``` Delete expired CSRF tokens. **Args:** * `session`: The database session **Returns:** * The number of tokens deleted # deployments Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-deployments # `prefect.server.models.deployments` Functions for interacting with deployment ORM objects. Intended for internal use by the Prefect REST API. 
## Functions ### `create_deployment` ```python theme={null} create_deployment(db: PrefectDBInterface, session: AsyncSession, deployment: schemas.core.Deployment | schemas.actions.DeploymentCreate) -> Optional[orm_models.Deployment] ``` Upserts a deployment. **Args:** * `session`: a database session * `deployment`: a deployment model **Returns:** * orm\_models.Deployment: the newly-created or updated deployment ### `update_deployment` ```python theme={null} update_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, deployment: schemas.actions.DeploymentUpdate) -> bool ``` Updates a deployment. **Args:** * `session`: a database session * `deployment_id`: the ID of the deployment to modify * `deployment`: changes to a deployment model **Returns:** * whether the deployment was updated ### `read_deployment` ```python theme={null} read_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> Optional[orm_models.Deployment] ``` Reads a deployment by id. **Args:** * `session`: A database session * `deployment_id`: a deployment id **Returns:** * orm\_models.Deployment: the deployment ### `read_deployment_by_name` ```python theme={null} read_deployment_by_name(db: PrefectDBInterface, session: AsyncSession, name: str, flow_name: str) -> Optional[orm_models.Deployment] ``` Reads a deployment by name. 
**Args:** * `session`: A database session * `name`: a deployment name * `flow_name`: the name of the flow the deployment belongs to **Returns:** * orm\_models.Deployment: the deployment ### `read_deployments` ```python theme={null} read_deployments(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None, sort: schemas.sorting.DeploymentSort = schemas.sorting.DeploymentSort.NAME_ASC) -> Sequence[orm_models.Deployment] ``` Read deployments. **Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit * `flow_filter`: only select deployments whose flows match these criteria * `flow_run_filter`: only select deployments whose flow runs match these criteria * `task_run_filter`: only select deployments whose task runs match these criteria * `deployment_filter`: only select deployments that match these filters * `work_pool_filter`: only select deployments whose work pools match these criteria * `work_queue_filter`: only select deployments whose work pool queues match these criteria * `sort`: the sort criteria for selected deployments. Defaults to `name` ASC.
**Returns:** * list\[orm\_models.Deployment]: deployments ### `count_deployments` ```python theme={null} count_deployments(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> int ``` Count deployments. **Args:** * `session`: A database session * `flow_filter`: only count deployments whose flows match these criteria * `flow_run_filter`: only count deployments whose flow runs match these criteria * `task_run_filter`: only count deployments whose task runs match these criteria * `deployment_filter`: only count deployments that match these filters * `work_pool_filter`: only count deployments that match these work pool filters * `work_queue_filter`: only count deployments that match these work pool queue filters **Returns:** * the number of deployments matching filters ### `delete_deployment` ```python theme={null} delete_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> bool ``` Delete a deployment by id. **Args:** * `session`: A database session * `deployment_id`: a deployment id **Returns:** * whether or not the deployment was deleted ### `delete_deployments` ```python theme={null} delete_deployments(db: PrefectDBInterface, session: AsyncSession, deployment_ids: list[UUID]) -> list[UUID] ``` Delete multiple deployments by their IDs.
**Args:** * `session`: A database session * `deployment_ids`: a list of deployment ids to delete **Returns:** * List\[UUID]: the IDs of the deployments that were deleted ### `schedule_runs` ```python theme={null} schedule_runs(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, min_time: Optional[datetime.timedelta] = None, min_runs: Optional[int] = None, max_runs: Optional[int] = None, auto_scheduled: bool = True) -> Sequence[UUID] ``` Schedule flow runs for a deployment. **Args:** * `session`: a database session * `deployment_id`: the id of the deployment to schedule * `start_time`: the time from which to start scheduling runs * `end_time`: runs will be scheduled until at most this time * `min_time`: runs will be scheduled until at least this far in the future * `min_runs`: a minimum number of runs to schedule * `max_runs`: a maximum number of runs to schedule This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected. * Runs will be generated starting on or after the `start_time` * No more than `max_runs` runs will be generated * No runs will be generated after `end_time` is reached * At least `min_runs` runs will be generated * Runs will be generated until at least `start_time` + `min_time` is reached **Returns:** * a list of flow run ids scheduled for the deployment ### `check_work_queues_for_deployment` ```python theme={null} check_work_queues_for_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> Sequence[orm_models.WorkQueue] ``` Get work queues that can pick up the specified deployment. Work queues will pick up a deployment when all of the following are met. * The deployment has ALL tags that the work queue has (i.e. the work queue's tags must be a subset of the deployment's tags).
* The work queue's specified deployment IDs match the deployment's ID, or the work queue does NOT have specified deployment IDs. * The work queue's specified flow runners match the deployment's flow runner or the work queue does NOT have a specified flow runner. Notes on the query: * Our database currently allows either "null" or empty lists as null values in filters, so we need to catch both cases with "or". * `A.contains(B)` should be interpreted as "True if A contains B". **Returns:** * List\[orm\_models.WorkQueue]: WorkQueues ### `create_deployment_schedules` ```python theme={null} create_deployment_schedules(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, schedules: list[schemas.actions.DeploymentScheduleCreate]) -> list[schemas.core.DeploymentSchedule] ``` Creates a deployment's schedules. **Args:** * `session`: A database session * `deployment_id`: a deployment id * `schedules`: a list of deployment schedule create actions ### `read_deployment_schedules` ```python theme={null} read_deployment_schedules(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, deployment_schedule_filter: Optional[schemas.filters.DeploymentScheduleFilter] = None) -> list[schemas.core.DeploymentSchedule] ``` Reads a deployment's schedules. **Args:** * `session`: A database session * `deployment_id`: a deployment id **Returns:** * list\[schemas.core.DeploymentSchedule]: the deployment's schedules ### `update_deployment_schedule` ```python theme={null} update_deployment_schedule(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, schedule: schemas.actions.DeploymentScheduleUpdate, deployment_schedule_id: UUID | None = None, deployment_schedule_slug: str | None = None) -> bool ``` Updates a deployment schedule.
**Args:** * `session`: A database session * `deployment_schedule_id`: a deployment schedule id * `schedule`: a deployment schedule update action ### `delete_schedules_for_deployment` ```python theme={null} delete_schedules_for_deployment(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID) -> bool ``` Deletes all schedules for a deployment. **Args:** * `session`: A database session * `deployment_id`: a deployment id ### `delete_deployment_schedule` ```python theme={null} delete_deployment_schedule(db: PrefectDBInterface, session: AsyncSession, deployment_id: UUID, deployment_schedule_id: UUID) -> bool ``` Deletes a deployment schedule. **Args:** * `session`: A database session * `deployment_schedule_id`: a deployment schedule id ### `mark_deployments_ready` ```python theme={null} mark_deployments_ready() -> None ``` ### `mark_deployments_not_ready` ```python theme={null} mark_deployments_not_ready(db: PrefectDBInterface, deployment_ids: Optional[Iterable[UUID]] = None, work_queue_ids: Optional[Iterable[UUID]] = None) -> None ``` ### `with_system_labels_for_deployment` ```python theme={null} with_system_labels_for_deployment(session: AsyncSession, deployment: schemas.core.Deployment) -> schemas.core.KeyValueLabels ``` Augment user supplied labels with system default labels for a deployment. ### `with_system_labels_for_deployment_flow_run` ```python theme={null} with_system_labels_for_deployment_flow_run(session: AsyncSession, deployment: orm_models.Deployment, user_supplied_labels: Optional[schemas.core.KeyValueLabels] = None) -> schemas.core.KeyValueLabels ``` Generate system labels for a flow run created from a deployment.
**Args:** * `session`: Database session * `deployment`: The deployment the flow run is created from * `user_supplied_labels`: Optional user-supplied labels to include **Returns:** * Complete set of labels for the flow run ### `emit_deployment_created_event` ```python theme={null} emit_deployment_created_event(session: AsyncSession, deployment: orm_models.Deployment) -> None ``` Emit an event when a deployment is created. ### `emit_deployment_updated_event` ```python theme={null} emit_deployment_updated_event(session: AsyncSession, deployment: orm_models.Deployment, changed_fields: dict[str, dict[str, Any]]) -> None ``` Emit an event when a deployment is updated. ### `emit_deployment_deleted_event` ```python theme={null} emit_deployment_deleted_event(session: AsyncSession, deployment: orm_models.Deployment) -> None ``` Emit an event when a deployment is deleted. # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-events # `prefect.server.models.events` ## Functions ### `flow_run_state_change_event` ```python theme={null} flow_run_state_change_event(session: AsyncSession, occurred: datetime, flow_run: ORMFlowRun, initial_state_id: Optional[UUID], initial_state: Optional[schemas.states.State], validated_state_id: Optional[UUID], validated_state: schemas.states.State) -> Event ``` ### `state_payload` ```python theme={null} state_payload(state: Optional[schemas.states.State]) -> Optional[Dict[str, str]] ``` Given a State, return the essential string parts of it for use in an event payload ### `deployment_status_event` ```python theme={null} deployment_status_event(session: AsyncSession, deployment_id: UUID, status: DeploymentStatus, occurred: DateTime) -> Event ``` ### `deployment_created_event` ```python theme={null} deployment_created_event(session: AsyncSession, deployment: ORMDeployment, occurred: DateTime) -> Event ``` Create an event for deployment creation. 
### `deployment_updated_event` ```python theme={null} deployment_updated_event(session: AsyncSession, deployment: ORMDeployment, changed_fields: Dict[str, Dict[str, Any]], occurred: DateTime) -> Event ``` Create an event for deployment field updates. ### `deployment_deleted_event` ```python theme={null} deployment_deleted_event(session: AsyncSession, deployment: ORMDeployment, occurred: DateTime) -> Event ``` Create an event for deployment deletion. ### `work_queue_status_event` ```python theme={null} work_queue_status_event(session: AsyncSession, work_queue: 'ORMWorkQueue', occurred: DateTime) -> Event ``` ### `work_pool_status_event` ```python theme={null} work_pool_status_event(event_id: UUID, occurred: DateTime, pre_update_work_pool: Optional['ORMWorkPool'], work_pool: 'ORMWorkPool') -> Event ``` ### `work_pool_updated_event` ```python theme={null} work_pool_updated_event(session: AsyncSession, work_pool: 'ORMWorkPool', changed_fields: Dict[str, Dict[str, Any]], occurred: DateTime) -> Event ``` Create an event for work pool field updates (non-status). ### `work_queue_updated_event` ```python theme={null} work_queue_updated_event(session: AsyncSession, work_queue: 'ORMWorkQueue', changed_fields: Dict[str, Dict[str, Any]], occurred: DateTime) -> Event ``` Create an event for work queue field updates (non-status). 
# flow_run_input Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-flow_run_input # `prefect.server.models.flow_run_input` ## Functions ### `create_flow_run_input` ```python theme={null} create_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_input: schemas.core.FlowRunInput) -> schemas.core.FlowRunInput ``` ### `filter_flow_run_input` ```python theme={null} filter_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_id: uuid.UUID, prefix: str, limit: int, exclude_keys: List[str]) -> List[schemas.core.FlowRunInput] ``` ### `read_flow_run_input` ```python theme={null} read_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_id: uuid.UUID, key: str) -> Optional[schemas.core.FlowRunInput] ``` ### `delete_flow_run_input` ```python theme={null} delete_flow_run_input(db: PrefectDBInterface, session: AsyncSession, flow_run_id: uuid.UUID, key: str) -> bool ``` # flow_run_states Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-flow_run_states # `prefect.server.models.flow_run_states` Functions for interacting with flow run state ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `read_flow_run_state` ```python theme={null} read_flow_run_state(db: PrefectDBInterface, session: AsyncSession, flow_run_state_id: UUID) -> Union[orm_models.FlowRunState, None] ``` Reads a flow run state by id. **Args:** * `session`: A database session * `flow_run_state_id`: a flow run state id **Returns:** * orm\_models.FlowRunState: the flow state ### `read_flow_run_states` ```python theme={null} read_flow_run_states(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID) -> Sequence[orm_models.FlowRunState] ``` Reads flow run states for a flow run.
**Args:** * `session`: A database session * `flow_run_id`: the flow run id **Returns:** * List\[orm\_models.FlowRunState]: the flow run states ### `delete_flow_run_state` ```python theme={null} delete_flow_run_state(db: PrefectDBInterface, session: AsyncSession, flow_run_state_id: UUID) -> bool ``` Delete a flow run state by id. **Args:** * `session`: A database session * `flow_run_state_id`: a flow run state id **Returns:** * whether or not the flow run state was deleted # flow_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-flow_runs # `prefect.server.models.flow_runs` Functions for interacting with flow run ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_flow_run` ```python theme={null} create_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run: schemas.core.FlowRun, orchestration_parameters: Optional[dict[str, Any]] = None) -> orm_models.FlowRun ``` Creates a new flow run. If the provided flow run has a state attached, it will also be created. **Args:** * `session`: a database session * `flow_run`: a flow run model **Returns:** * orm\_models.FlowRun: the newly-created flow run ### `update_flow_run` ```python theme={null} update_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, flow_run: schemas.actions.FlowRunUpdate) -> bool ``` Updates a flow run. **Args:** * `session`: a database session * `flow_run_id`: the flow run id to update * `flow_run`: a flow run model **Returns:** * whether or not matching rows were found to update ### `read_flow_run` ```python theme={null} read_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, for_update: bool = False) -> Optional[orm_models.FlowRun] ``` Reads a flow run by id. 
**Args:** * `session`: A database session * `flow_run_id`: a flow run id **Returns:** * orm\_models.FlowRun: the flow run ### `read_flow_runs` ```python theme={null} read_flow_runs(db: PrefectDBInterface, session: AsyncSession, columns: Optional[list[str]] = None, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None, sort: schemas.sorting.FlowRunSort = schemas.sorting.FlowRunSort.ID_DESC) -> Sequence[orm_models.FlowRun] ``` Read flow runs. **Args:** * `session`: a database session * `columns`: a list of the flow run ORM columns to load, for performance * `flow_filter`: only select flow runs whose flows match these filters * `flow_run_filter`: only select flow runs that match these filters * `task_run_filter`: only select flow runs whose task runs match these filters * `deployment_filter`: only select flow runs whose deployments match these filters * `work_pool_filter`: only select flow runs whose work pools match these filters * `work_queue_filter`: only select flow runs whose work queues match these filters * `offset`: Query offset * `limit`: Query limit * `sort`: Query sort **Returns:** * List\[orm\_models.FlowRun]: flow runs ### `cleanup_flow_run_concurrency_slots` ```python theme={null} cleanup_flow_run_concurrency_slots(session: AsyncSession, flow_run: orm_models.FlowRun) -> None ``` Cleanup flow run related resources, such as releasing concurrency slots. All operations should be idempotent and safe to call multiple times. IMPORTANT: This run may no longer exist in the database when this operation occurs. ### `read_task_run_dependencies` ```python theme={null} read_task_run_dependencies(session: AsyncSession, flow_run_id: UUID) -> List[DependencyResult] ``` Get a task run dependency map for a given flow run.
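The read functions above share a common shape: optional filters narrow the result set, then `sort`, `offset`, and `limit` are applied. A minimal in-memory sketch of that composition, assuming hypothetical dict records in place of ORM rows (this is an illustration of the query shape, not Prefect's implementation):

```python
from typing import Any, Callable, Optional

# Hypothetical stand-in records; the real functions operate on ORM rows.
RUNS = [
    {"id": 3, "name": "gamma"},
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": "beta"},
]

def read_runs(
    flow_run_filter: Optional[Callable[[dict[str, Any]], bool]] = None,
    offset: Optional[int] = None,
    limit: Optional[int] = None,
    sort_key: str = "id",
    descending: bool = True,  # mirrors the ID_DESC default sort
) -> list[dict[str, Any]]:
    # Filter, then sort, then page — the same order the SQL query applies.
    rows = [r for r in RUNS if flow_run_filter is None or flow_run_filter(r)]
    rows.sort(key=lambda r: r[sort_key], reverse=descending)
    if offset is not None:
        rows = rows[offset:]
    if limit is not None:
        rows = rows[:limit]
    return rows
```

For example, `read_runs(limit=2)` returns the two most recent ids under the default descending sort.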
### `count_flow_runs` ```python theme={null} count_flow_runs(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> int ``` Count flow runs. **Args:** * `session`: a database session * `flow_filter`: only count flow runs whose flows match these filters * `flow_run_filter`: only count flow runs that match these filters * `task_run_filter`: only count flow runs whose task runs match these filters * `deployment_filter`: only count flow runs whose deployments match these filters **Returns:** * count of flow runs ### `delete_flow_run` ```python theme={null} delete_flow_run(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID) -> bool ``` Delete a flow run by flow\_run\_id, handling concurrency limits if applicable. **Args:** * `session`: A database session * `flow_run_id`: a flow run id **Returns:** * whether or not the flow run was deleted ### `delete_flow_runs` ```python theme={null} delete_flow_runs(db: PrefectDBInterface, session: AsyncSession, flow_run_ids: List[UUID]) -> List[UUID] ``` Delete multiple flow runs by their IDs, handling concurrency limits. 
**Args:** * `session`: A database session * `flow_run_ids`: a list of flow run ids to delete **Returns:** * List\[UUID]: the IDs of the flow runs that were deleted ### `set_flow_run_state` ```python theme={null} set_flow_run_state(session: AsyncSession, flow_run_id: UUID, state: schemas.states.State, force: bool = False, flow_policy: Optional[Type[FlowRunOrchestrationPolicy]] = None, orchestration_parameters: Optional[Dict[str, Any]] = None, client_version: Optional[str] = None) -> OrchestrationResult ``` Creates a new orchestrated flow run state. Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead trigger orchestration rules to govern the proposed `state` input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A `force` flag is supplied to bypass a subset of orchestration logic. **Args:** * `session`: a database session * `flow_run_id`: the flow run id * `state`: a flow run state model * `force`: if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied. **Returns:** * OrchestrationResult object ### `read_flow_run_graph` ```python theme={null} read_flow_run_graph(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, since: datetime.datetime = earliest_possible_datetime()) -> Graph ``` Given a flow run, return the graph of its task and subflow runs. If a `since` datetime is provided, only return items that may have changed since that time. ### `with_system_labels_for_flow_run` ```python theme={null} with_system_labels_for_flow_run(session: AsyncSession, flow_run: Union[schemas.core.FlowRun, schemas.actions.FlowRunCreate]) -> schemas.core.KeyValueLabels ``` Augment user supplied labels with system default labels for a flow run.
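`set_flow_run_state` never writes the proposed state unconditionally: orchestration rules may accept, reject, or rewrite it, and `force=True` bypasses part of that gating. A toy sketch of the accept/reject shape — the rule, states, and result type here are illustrative inventions, not Prefect's actual orchestration classes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToyOrchestrationResult:
    status: str            # "ACCEPT" or "REJECT"
    state: Optional[str]   # the state actually written, if any

def propose_state(current: str, proposed: str, force: bool = False) -> ToyOrchestrationResult:
    """One illustrative rule: a COMPLETED run cannot move back to RUNNING."""
    if not force and current == "COMPLETED" and proposed == "RUNNING":
        # The proposed state is rejected; nothing is written.
        return ToyOrchestrationResult(status="REJECT", state=None)
    # The proposed state is accepted and would be written to the database.
    return ToyOrchestrationResult(status="ACCEPT", state=proposed)
```

This mirrors why callers must inspect the returned `OrchestrationResult` rather than assume the proposed state was created.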
### `update_flow_run_labels` ```python theme={null} update_flow_run_labels(db: PrefectDBInterface, session: AsyncSession, flow_run_id: UUID, labels: KeyValueLabels) -> bool ``` Update flow run labels by patching existing labels with new values. **Args:** * `session`: A database session * `flow_run_id`: the flow run id to update * `labels`: the new labels to patch into existing labels **Returns:** * whether the update was successful ## Classes ### `DependencyResult` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # flows Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-flows # `prefect.server.models.flows` Functions for interacting with flow ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_flow` ```python theme={null} create_flow(db: PrefectDBInterface, session: AsyncSession, flow: schemas.core.Flow) -> orm_models.Flow ``` Creates a new flow. If a flow with the same name already exists, the existing flow is returned. **Args:** * `session`: a database session * `flow`: a flow model **Returns:** * orm\_models.Flow: the newly-created or existing flow ### `update_flow` ```python theme={null} update_flow(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID, flow: schemas.actions.FlowUpdate) -> bool ``` Updates a flow. **Args:** * `session`: a database session * `flow_id`: the flow id to update * `flow`: a flow update model **Returns:** * whether or not matching rows were found to update ### `read_flow` ```python theme={null} read_flow(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID) -> Optional[orm_models.Flow] ``` Reads a flow by id.
**Args:** * `session`: A database session * `flow_id`: a flow id **Returns:** * orm\_models.Flow: the flow ### `read_flow_by_name` ```python theme={null} read_flow_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Optional[orm_models.Flow] ``` Reads a flow by name. **Args:** * `session`: A database session * `name`: a flow name **Returns:** * orm\_models.Flow: the flow ### `read_flows` ```python theme={null} read_flows(db: PrefectDBInterface, session: AsyncSession, flow_filter: Union[schemas.filters.FlowFilter, None] = None, flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None, task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None, deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None, work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None, sort: schemas.sorting.FlowSort = schemas.sorting.FlowSort.NAME_ASC, offset: Union[int, None] = None, limit: Union[int, None] = None) -> Sequence[orm_models.Flow] ``` Read multiple flows. **Args:** * `session`: A database session * `flow_filter`: only select flows that match these filters * `flow_run_filter`: only select flows whose flow runs match these filters * `task_run_filter`: only select flows whose task runs match these filters * `deployment_filter`: only select flows whose deployments match these filters * `work_pool_filter`: only select flows whose work pools match these filters * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.Flow]: flows ### `count_flows` ```python theme={null} count_flows(db: PrefectDBInterface, session: AsyncSession, flow_filter: Union[schemas.filters.FlowFilter, None] = None, flow_run_filter: Union[schemas.filters.FlowRunFilter, None] = None, task_run_filter: Union[schemas.filters.TaskRunFilter, None] = None, deployment_filter: Union[schemas.filters.DeploymentFilter, None] = None, work_pool_filter: Union[schemas.filters.WorkPoolFilter, None] = None) -> int ``` Count flows. 
**Args:** * `session`: A database session * `flow_filter`: only count flows that match these filters * `flow_run_filter`: only count flows whose flow runs match these filters * `task_run_filter`: only count flows whose task runs match these filters * `deployment_filter`: only count flows whose deployments match these filters * `work_pool_filter`: only count flows whose work pools match these filters **Returns:** * count of flows ### `delete_flow` ```python theme={null} delete_flow(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID) -> bool ``` Delete a flow by id. **Args:** * `session`: A database session * `flow_id`: a flow id **Returns:** * whether or not the flow was deleted ### `delete_flows` ```python theme={null} delete_flows(db: PrefectDBInterface, session: AsyncSession, flow_ids: List[UUID]) -> List[UUID] ``` Delete multiple flows by their IDs. This also deletes all associated deployments (hard delete). **Args:** * `session`: A database session * `flow_ids`: a list of flow ids to delete **Returns:** * List\[UUID]: the IDs of the flows that were deleted ### `read_flow_labels` ```python theme={null} read_flow_labels(db: PrefectDBInterface, session: AsyncSession, flow_id: UUID) -> Union[schemas.core.KeyValueLabels, None] ``` # logs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-logs # `prefect.server.models.logs` Functions for interacting with log ORM objects. Intended for internal use by the Prefect REST API. 
## Functions ### `split_logs_into_batches` ```python theme={null} split_logs_into_batches(logs: Sequence[schemas.actions.LogCreate]) -> Generator[Tuple[LogCreate, ...], None, None] ``` ### `create_logs` ```python theme={null} create_logs(db: PrefectDBInterface, session: AsyncSession, logs: Sequence[LogCreate]) -> None ``` Creates new logs **Args:** * `session`: a database session * `logs`: a list of log schemas **Returns:** * None ### `read_logs` ```python theme={null} read_logs(db: PrefectDBInterface, session: AsyncSession, log_filter: Optional[schemas.filters.LogFilter], offset: Optional[int] = None, limit: Optional[int] = None, sort: schemas.sorting.LogSort = schemas.sorting.LogSort.TIMESTAMP_ASC) -> Sequence[orm_models.Log] ``` Read logs. **Args:** * `session`: a database session * `db`: the database interface * `log_filter`: only select logs that match these filters * `offset`: Query offset * `limit`: Query limit * `sort`: Query sort **Returns:** * List\[orm\_models.Log]: the matching logs ### `delete_logs` ```python theme={null} delete_logs(db: PrefectDBInterface, session: AsyncSession, log_filter: schemas.filters.LogFilter) -> int ``` # saved_searches Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-saved_searches # `prefect.server.models.saved_searches` Functions for interacting with saved search ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_saved_search` ```python theme={null} create_saved_search(db: PrefectDBInterface, session: AsyncSession, saved_search: schemas.core.SavedSearch) -> orm_models.SavedSearch ``` Upserts a SavedSearch. If a SavedSearch with the same name exists, all properties will be updated. 
**Args:** * `session`: a database session * `saved_search`: a SavedSearch model **Returns:** * orm\_models.SavedSearch: the newly-created or updated SavedSearch ### `read_saved_search` ```python theme={null} read_saved_search(db: PrefectDBInterface, session: AsyncSession, saved_search_id: UUID) -> Union[orm_models.SavedSearch, None] ``` Reads a SavedSearch by id. **Args:** * `session`: A database session * `saved_search_id`: a SavedSearch id **Returns:** * orm\_models.SavedSearch: the SavedSearch ### `read_saved_search_by_name` ```python theme={null} read_saved_search_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Union[orm_models.SavedSearch, None] ``` Reads a SavedSearch by name. **Args:** * `session`: A database session * `name`: a SavedSearch name **Returns:** * orm\_models.SavedSearch: the SavedSearch ### `read_saved_searches` ```python theme={null} read_saved_searches(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.SavedSearch] ``` Read SavedSearches. **Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.SavedSearch]: SavedSearches ### `delete_saved_search` ```python theme={null} delete_saved_search(db: PrefectDBInterface, session: AsyncSession, saved_search_id: UUID) -> bool ``` Delete a SavedSearch by id. **Args:** * `session`: A database session * `saved_search_id`: a SavedSearch id **Returns:** * whether or not the SavedSearch was deleted # task_run_states Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-task_run_states # `prefect.server.models.task_run_states` Functions for interacting with task run state ORM objects. Intended for internal use by the Prefect REST API. 
## Functions ### `read_task_run_state` ```python theme={null} read_task_run_state(db: PrefectDBInterface, session: AsyncSession, task_run_state_id: UUID) -> Union[orm_models.TaskRunState, None] ``` Reads a task run state by id. **Args:** * `session`: A database session * `task_run_state_id`: a task run state id **Returns:** * orm\_models.TaskRunState: the task run state ### `read_task_run_states` ```python theme={null} read_task_run_states(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> Sequence[orm_models.TaskRunState] ``` Reads task run states for a task run. **Args:** * `session`: A database session * `task_run_id`: the task run id **Returns:** * List\[orm\_models.TaskRunState]: the task run states ### `delete_task_run_state` ```python theme={null} delete_task_run_state(db: PrefectDBInterface, session: AsyncSession, task_run_state_id: UUID) -> bool ``` Delete a task run state by id. **Args:** * `session`: A database session * `task_run_state_id`: a task run state id **Returns:** * whether or not the task run state was deleted # task_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-task_runs # `prefect.server.models.task_runs` Functions for interacting with task run ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_task_run` ```python theme={null} create_task_run(db: PrefectDBInterface, session: AsyncSession, task_run: schemas.core.TaskRun, orchestration_parameters: Optional[Dict[str, Any]] = None) -> orm_models.TaskRun ``` Creates a new task run. If a task run with the same flow\_run\_id, task\_key, and dynamic\_key already exists, the existing task run will be returned. If the provided task run has a state attached, it will also be created.
**Args:** * `session`: a database session * `task_run`: a task run model **Returns:** * orm\_models.TaskRun: the newly-created or existing task run ### `update_task_run` ```python theme={null} update_task_run(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID, task_run: schemas.actions.TaskRunUpdate) -> bool ``` Updates a task run. **Args:** * `session`: a database session * `task_run_id`: the task run id to update * `task_run`: a task run model **Returns:** * whether or not matching rows were found to update ### `read_task_run` ```python theme={null} read_task_run(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> Union[orm_models.TaskRun, None] ``` Read a task run by id. **Args:** * `session`: a database session * `task_run_id`: the task run id **Returns:** * orm\_models.TaskRun: the task run ### `read_task_run_with_flow_run_name` ```python theme={null} read_task_run_with_flow_run_name(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> Union[orm_models.TaskRun, None] ``` Read a task run by id. **Args:** * `session`: a database session * `task_run_id`: the task run id **Returns:** * orm\_models.TaskRun: the task run with the flow run name ### `read_task_runs` ```python theme={null} read_task_runs(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None, sort: schemas.sorting.TaskRunSort = schemas.sorting.TaskRunSort.ID_DESC) -> Sequence[orm_models.TaskRun] ``` Read task runs. 
**Args:** * `session`: a database session * `flow_filter`: only select task runs whose flows match these filters * `flow_run_filter`: only select task runs whose flow runs match these filters * `task_run_filter`: only select task runs that match these filters * `deployment_filter`: only select task runs whose deployments match these filters * `offset`: Query offset * `limit`: Query limit * `sort`: Query sort **Returns:** * List\[orm\_models.TaskRun]: the task runs ### `count_task_runs` ```python theme={null} count_task_runs(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None) -> int ``` Count task runs. **Args:** * `session`: a database session * `flow_filter`: only count task runs whose flows match these filters * `flow_run_filter`: only count task runs whose flow runs match these filters * `task_run_filter`: only count task runs that match these filters * `deployment_filter`: only count task runs whose deployments match these filters **Returns:** * count of task runs ### `count_task_runs_by_state` ```python theme={null} count_task_runs_by_state(db: PrefectDBInterface, session: AsyncSession, flow_filter: Optional[schemas.filters.FlowFilter] = None, flow_run_filter: Optional[schemas.filters.FlowRunFilter] = None, task_run_filter: Optional[schemas.filters.TaskRunFilter] = None, deployment_filter: Optional[schemas.filters.DeploymentFilter] = None) -> schemas.states.CountByState ``` Count task runs by state.
**Args:** * `session`: a database session * `flow_filter`: only count task runs whose flows match these filters * `flow_run_filter`: only count task runs whose flow runs match these filters * `task_run_filter`: only count task runs that match these filters * `deployment_filter`: only count task runs whose deployments match these filters **Returns:** * schemas.states.CountByState: count of task runs by state ### `delete_task_run` ```python theme={null} delete_task_run(db: PrefectDBInterface, session: AsyncSession, task_run_id: UUID) -> bool ``` Delete a task run by id. **Args:** * `session`: a database session * `task_run_id`: the task run id to delete **Returns:** * whether or not the task run was deleted ### `set_task_run_state` ```python theme={null} set_task_run_state(session: AsyncSession, task_run_id: UUID, state: schemas.states.State, force: bool = False, task_policy: Optional[Type[TaskRunOrchestrationPolicy]] = None, orchestration_parameters: Optional[Dict[str, Any]] = None) -> OrchestrationResult ``` Creates a new orchestrated task run state. Setting a new state on a run is one of the principal actions governed by Prefect's orchestration logic. Setting a new run state will not guarantee creation, but instead trigger orchestration rules to govern the proposed `state` input. If the state is considered valid, it will be written to the database. Otherwise, it's possible that a different state, or no state, will be created. A `force` flag is supplied to bypass a subset of orchestration logic. **Args:** * `session`: a database session * `task_run_id`: the task run id * `state`: a task run state model * `force`: if False, orchestration rules will be applied that may alter or prevent the state transition. If True, orchestration rules are not applied.
**Returns:** * OrchestrationResult object ### `with_system_labels_for_task_run` ```python theme={null} with_system_labels_for_task_run(session: AsyncSession, task_run: schemas.core.TaskRun) -> schemas.core.KeyValueLabels ``` Augment user supplied labels with system default labels for a task run. # task_workers Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-task_workers # `prefect.server.models.task_workers` ## Functions ### `observe_worker` ```python theme={null} observe_worker(task_keys: List[TaskKey], worker_id: WorkerId) -> None ``` ### `forget_worker` ```python theme={null} forget_worker(worker_id: WorkerId) -> None ``` ### `get_workers_for_task_keys` ```python theme={null} get_workers_for_task_keys(task_keys: List[TaskKey]) -> List[TaskWorkerResponse] ``` ### `get_all_workers` ```python theme={null} get_all_workers() -> List[TaskWorkerResponse] ``` ## Classes ### `TaskWorkerResponse` ### `InMemoryTaskWorkerTracker` **Methods:** #### `forget_worker` ```python theme={null} forget_worker(self, worker_id: WorkerId) -> None ``` #### `get_all_workers` ```python theme={null} get_all_workers(self) -> List[TaskWorkerResponse] ``` #### `get_workers_for_task_keys` ```python theme={null} get_workers_for_task_keys(self, task_keys: List[TaskKey]) -> List[TaskWorkerResponse] ``` #### `observe_worker` ```python theme={null} observe_worker(self, task_keys: List[TaskKey], worker_id: WorkerId) -> None ``` #### `reset` ```python theme={null} reset(self) -> None ``` Testing utility to reset the state of the task worker tracker # variables Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-variables # `prefect.server.models.variables` ## Functions ### `create_variable` ```python theme={null} create_variable(db: PrefectDBInterface, session: AsyncSession, variable: VariableCreate) -> orm_models.Variable ``` Create a variable **Args:** * `session`: async database session * `variable`: variable to create **Returns:** * orm\_models.Variable 
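The variables module is a small CRUD set, addressable either by id or by name, where the update and delete functions return a bool indicating whether a matching row was found. A minimal in-memory analogue of the by-name semantics — a sketch only; the real functions go through a database session and ORM models:

```python
from typing import Optional
from uuid import UUID, uuid4

class ToyVariableStore:
    """In-memory stand-in illustrating the by-name CRUD semantics."""

    def __init__(self) -> None:
        self._by_id: dict[UUID, dict] = {}

    def create_variable(self, name: str, value: str) -> dict:
        var = {"id": uuid4(), "name": name, "value": value}
        self._by_id[var["id"]] = var
        return var

    def read_variable_by_name(self, name: str) -> Optional[dict]:
        return next((v for v in self._by_id.values() if v["name"] == name), None)

    def update_variable_by_name(self, name: str, value: str) -> bool:
        var = self.read_variable_by_name(name)
        if var is None:
            return False  # mirrors the "no matching rows" bool return
        var["value"] = value
        return True

    def delete_variable_by_name(self, name: str) -> bool:
        var = self.read_variable_by_name(name)
        if var is None:
            return False
        del self._by_id[var["id"]]
        return True
```

The bool returns let callers distinguish "updated/deleted" from "variable does not exist" without a separate read.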
### `read_variable` ```python theme={null} read_variable(db: PrefectDBInterface, session: AsyncSession, variable_id: UUID) -> Optional[orm_models.Variable] ``` Reads a variable by id. ### `read_variable_by_name` ```python theme={null} read_variable_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Optional[orm_models.Variable] ``` Reads a variable by name. ### `read_variables` ```python theme={null} read_variables(db: PrefectDBInterface, session: AsyncSession, variable_filter: Optional[filters.VariableFilter] = None, sort: sorting.VariableSort = sorting.VariableSort.NAME_ASC, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.Variable] ``` Read variables, applying filters. ### `count_variables` ```python theme={null} count_variables(db: PrefectDBInterface, session: AsyncSession, variable_filter: Optional[filters.VariableFilter] = None) -> int ``` Count variables, applying filters. ### `update_variable` ```python theme={null} update_variable(db: PrefectDBInterface, session: AsyncSession, variable_id: UUID, variable: VariableUpdate) -> bool ``` Updates a variable by id. ### `update_variable_by_name` ```python theme={null} update_variable_by_name(db: PrefectDBInterface, session: AsyncSession, name: str, variable: VariableUpdate) -> bool ``` Updates a variable by name. ### `delete_variable` ```python theme={null} delete_variable(db: PrefectDBInterface, session: AsyncSession, variable_id: UUID) -> bool ``` Delete a variable by id. ### `delete_variable_by_name` ```python theme={null} delete_variable_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> bool ``` Delete a variable by name. # work_queues Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-work_queues # `prefect.server.models.work_queues` Functions for interacting with work queue ORM objects. Intended for internal use by the Prefect REST API.
## Functions ### `create_work_queue` ```python theme={null} create_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue: Union[schemas.core.WorkQueue, schemas.actions.WorkQueueCreate]) -> orm_models.WorkQueue ``` Inserts a WorkQueue. If a WorkQueue with the same name exists, an error will be thrown. **Args:** * `session`: a database session * `work_queue`: a WorkQueue model **Returns:** * orm\_models.WorkQueue: the newly-created WorkQueue ### `read_work_queue` ```python theme={null} read_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: Union[UUID, PrefectUUID]) -> Optional[orm_models.WorkQueue] ``` Reads a WorkQueue by id. **Args:** * `session`: A database session * `work_queue_id`: a WorkQueue id **Returns:** * orm\_models.WorkQueue: the WorkQueue ### `count_work_queue_active_slots` ```python theme={null} count_work_queue_active_slots(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> int ``` Count flow runs occupying concurrency slots for a given work queue. For standard queues (including pool-backed and default-agent queues), counts Pending/Running/Cancelling flow runs by work\_queue\_id FK. For legacy tag-based queues, counts Pending/Running flow runs matching the queue's tag/deployment filter (matching \_legacy\_get\_runs\_in\_work\_queue). ### `count_work_queue_active_slots_bulk` ```python theme={null} count_work_queue_active_slots_bulk(db: PrefectDBInterface, session: AsyncSession, work_queue_ids: Sequence[UUID]) -> dict[UUID, int] ``` Count active slots for multiple work queues. Standard queues are counted in a single bulk GROUP BY query; legacy tag-based queues fall back to per-queue counting since each has its own filter criteria. ### `read_work_queue_by_name` ```python theme={null} read_work_queue_by_name(db: PrefectDBInterface, session: AsyncSession, name: str) -> Optional[orm_models.WorkQueue] ``` Reads a WorkQueue by name.
**Args:** * `session`: A database session * `name`: a WorkQueue name **Returns:** * orm\_models.WorkQueue: the WorkQueue ### `read_work_queues` ```python theme={null} read_work_queues(db: PrefectDBInterface, session: AsyncSession, offset: Optional[int] = None, limit: Optional[int] = None, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> Sequence[orm_models.WorkQueue] ``` Read WorkQueues. **Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit * `work_queue_filter`: only select work queues matching these filters **Returns:** * Sequence\[orm\_models.WorkQueue]: WorkQueues ### `is_last_polled_recent` ```python theme={null} is_last_polled_recent(last_polled: Optional[DateTime]) -> bool ``` ### `update_work_queue` ```python theme={null} update_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, work_queue: schemas.actions.WorkQueueUpdate, emit_status_change: Optional[Callable[[orm_models.WorkQueue], Awaitable[None]]] = None) -> bool ``` Update a WorkQueue by id. **Args:** * `session`: A database session * `work_queue`: the work queue data * `work_queue_id`: a WorkQueue id **Returns:** * whether or not the WorkQueue was updated ### `delete_work_queue` ```python theme={null} delete_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> bool ``` Delete a WorkQueue by id. **Args:** * `session`: A database session * `work_queue_id`: a WorkQueue id **Returns:** * whether or not the WorkQueue was deleted ### `get_runs_in_work_queue` ```python theme={null} get_runs_in_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, limit: Optional[int] = None, scheduled_before: Optional[datetime.datetime] = None) -> Tuple[orm_models.WorkQueue, Sequence[orm_models.FlowRun]] ``` Get runs from a work queue. **Args:** * `session`: A database session * `work_queue_id`: The work queue id
* `scheduled_before`: Only return runs scheduled to start before this time. * `limit`: An optional limit for the number of runs to return from the queue. This limit applies to the request only. It does not affect the work queue's concurrency limit. If `limit` exceeds the work queue's concurrency limit, it will be ignored. ### `ensure_work_queue_exists` ```python theme={null} ensure_work_queue_exists(session: AsyncSession, name: str) -> orm_models.WorkQueue ``` Checks if a work queue exists and creates it if it does not. Useful when working with deployments, agents, and flow runs that automatically create work queues. Will also create a work pool queue in the default agent pool to facilitate migration to work pools. ### `read_work_queue_status` ```python theme={null} read_work_queue_status(session: AsyncSession, work_queue_id: UUID) -> schemas.core.WorkQueueStatusDetail ``` Get work queue status by id. **Args:** * `session`: A database session * `work_queue_id`: a WorkQueue id **Returns:** * Information about the status of the work queue. ### `record_work_queue_polls` ```python theme={null} record_work_queue_polls(db: PrefectDBInterface, session: AsyncSession, polled_work_queue_ids: Sequence[UUID], ready_work_queue_ids: Sequence[UUID]) -> None ``` Record that the given work queues were polled, and also update the given ready\_work\_queue\_ids to READY. ### `mark_work_queues_ready` ```python theme={null} mark_work_queues_ready() -> None ``` ### `mark_work_queues_not_ready` ```python theme={null} mark_work_queues_not_ready(db: PrefectDBInterface, work_queue_ids: Iterable[UUID]) -> None ``` ### `emit_work_queue_status_event` ```python theme={null} emit_work_queue_status_event(db: PrefectDBInterface, work_queue: orm_models.WorkQueue) -> None ``` Emit an event when work queue fields are updated. 
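`record_work_queue_polls` and the `mark_work_queues_ready` / `mark_work_queues_not_ready` helpers hinge on whether a queue was polled recently, which is what `is_last_polled_recent` decides. A hedged sketch of that recency check — the 60-second window here is an assumption chosen for illustration, not Prefect's configured threshold:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed recency window for this sketch; the real value comes from settings.
POLL_RECENCY_WINDOW = timedelta(seconds=60)

def is_last_polled_recent(last_polled: Optional[datetime]) -> bool:
    """A queue that was never polled, or whose last poll is stale, is not recent."""
    if last_polled is None:
        return False
    return datetime.now(timezone.utc) - last_polled <= POLL_RECENCY_WINDOW
```

A queue whose last poll falls outside the window is a candidate for being marked NOT_READY, while a fresh poll keeps it READY.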
### `emit_work_queue_updated_event` ```python theme={null} emit_work_queue_updated_event(session: AsyncSession, work_queue: orm_models.WorkQueue, changed_fields: Dict[str, Dict[str, Any]]) -> None ``` # workers Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-models-workers # `prefect.server.models.workers` Functions for interacting with worker ORM objects. Intended for internal use by the Prefect REST API. ## Functions ### `create_work_pool` ```python theme={null} create_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool: Union[schemas.core.WorkPool, schemas.actions.WorkPoolCreate]) -> orm_models.WorkPool ``` Creates a work pool. If a WorkPool with the same name exists, an error will be thrown. **Args:** * `session`: a database session * `work_pool`: a WorkPool model **Returns:** * orm\_models.WorkPool: the newly-created WorkPool ### `read_work_pool` ```python theme={null} read_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> Optional[orm_models.WorkPool] ``` Reads a WorkPool by id. **Args:** * `session`: A database session * `work_pool_id`: a WorkPool id **Returns:** * orm\_models.WorkPool: the WorkPool ### `read_work_pool_by_name` ```python theme={null} read_work_pool_by_name(db: PrefectDBInterface, session: AsyncSession, work_pool_name: str) -> Optional[orm_models.WorkPool] ``` Reads a WorkPool by name. **Args:** * `session`: A database session * `work_pool_name`: a WorkPool name **Returns:** * orm\_models.WorkPool: the WorkPool ### `read_work_pools` ```python theme={null} read_work_pools(db: PrefectDBInterface, session: AsyncSession, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.WorkPool] ``` Read work pools.
**Args:** * `session`: A database session * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.WorkPool]: the matching work pools ### `count_work_pools` ```python theme={null} count_work_pools(db: PrefectDBInterface, session: AsyncSession, work_pool_filter: Optional[schemas.filters.WorkPoolFilter] = None) -> int ``` Count work pools. **Args:** * `session`: A database session * `work_pool_filter`: filter criteria to apply to the count **Returns:** * int: the count of work pools matching the criteria ### `count_work_pool_active_slots` ```python theme={null} count_work_pool_active_slots(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> int ``` Count flow runs in slot-occupying states (Pending, Running) for a given work pool. Does not filter on queue pause status: paused queues may still have running/pending runs consuming resources. This matches the behavior of count\_work\_pool\_slot\_holders / get\_work\_pool\_slot\_holders. ### `count_work_pool_active_slots_bulk` ```python theme={null} count_work_pool_active_slots_bulk(db: PrefectDBInterface, session: AsyncSession, work_pool_ids: Sequence[UUID]) -> dict[UUID, int] ``` Count active slots for multiple work pools in a single query. Returns a mapping of work\_pool\_id -> active slot count. Does not filter on queue pause status (see count\_work\_pool\_active\_slots). ### `count_work_queue_active_slots` ```python theme={null} count_work_queue_active_slots(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> int ``` Count flow runs in slot-occupying states (Pending, Running) for a given work queue under a work pool. Counts by work\_queue\_id FK only. ### `count_work_queue_active_slots_bulk` ```python theme={null} count_work_queue_active_slots_bulk(db: PrefectDBInterface, session: AsyncSession, work_queue_ids: Sequence[UUID]) -> dict[UUID, int] ``` Count active slots for multiple work queues in a single query. Returns a mapping of work\_queue\_id -> active slot count.
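A minimal in-memory sketch of the bulk active-slot counting described above. The real functions issue a single grouped SQL query against flow run rows; the dict-shaped runs and string ids here are illustrative assumptions:

```python
from collections import Counter
from typing import Iterable

# States that occupy a concurrency slot, per the descriptions above.
SLOT_OCCUPYING_STATES = {"PENDING", "RUNNING"}

def count_active_slots_bulk(
    flow_runs: Iterable[dict], work_pool_ids: list[str]
) -> dict[str, int]:
    """Return {work_pool_id: active slot count} for the requested pools.

    Pools with no slot-occupying runs appear with a count of 0 in this
    sketch, so callers always get one entry per requested pool.
    """
    wanted = set(work_pool_ids)
    counts = Counter(
        run["work_pool_id"]
        for run in flow_runs
        if run["state_type"] in SLOT_OCCUPYING_STATES
        and run["work_pool_id"] in wanted
    )
    return {pool_id: counts.get(pool_id, 0) for pool_id in work_pool_ids}
```

Note that, like the real functions, this does not consider queue pause status: a paused queue's pending and running runs still count against the pool.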
### `update_work_pool` ```python theme={null} update_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_pool: schemas.actions.WorkPoolUpdate, emit_status_change: Optional[Callable[[UUID, DateTime, orm_models.WorkPool, orm_models.WorkPool], Awaitable[None]]] = None) -> bool ``` Update a WorkPool by id. **Args:** * `session`: A database session * `work_pool_id`: a WorkPool id * `work_pool`: the updated work pool data * `emit_status_change`: function to call when work pool status is changed **Returns:** * whether or not the WorkPool was updated ### `delete_work_pool` ```python theme={null} delete_work_pool(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> bool ``` Delete a WorkPool by id. **Args:** * `session`: A database session * `work_pool_id`: a work pool id **Returns:** * whether or not the WorkPool was deleted ### `get_scheduled_flow_runs` ```python theme={null} get_scheduled_flow_runs(db: PrefectDBInterface, session: AsyncSession, work_pool_ids: Optional[List[UUID]] = None, work_queue_ids: Optional[List[UUID]] = None, scheduled_before: Optional[datetime.datetime] = None, scheduled_after: Optional[datetime.datetime] = None, limit: Optional[int] = None, respect_queue_priorities: Optional[bool] = None) -> Sequence[schemas.responses.WorkerFlowRunResponse] ``` Get runs from queues in a specific work pool.
**Args:** * `session`: a database session * `work_pool_ids`: a list of work pool ids * `work_queue_ids`: a list of work pool queue ids * `scheduled_before`: a datetime to filter runs scheduled before * `scheduled_after`: a datetime to filter runs scheduled after * `respect_queue_priorities`: whether or not to respect queue priorities * `limit`: the maximum number of runs to return * `db`: a database interface **Returns:** * List\[WorkerFlowRunResponse]: the runs, as well as related work pool details ### `create_work_queue` ```python theme={null} create_work_queue(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_queue: schemas.actions.WorkQueueCreate) -> orm_models.WorkQueue ``` Creates a work pool queue. **Args:** * `session`: a database session * `work_pool_id`: a work pool id * `work_queue`: a WorkQueue action model **Returns:** * orm\_models.WorkQueue: the newly-created WorkQueue ### `bulk_update_work_queue_priorities` ```python theme={null} bulk_update_work_queue_priorities(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, new_priorities: Dict[UUID, int]) -> None ``` This is a brute force update of all work pool queue priorities for a given work pool. It loads all queues fully into memory, sorts them, and flushes the updates to the database. The algorithm ensures that priorities are unique integers > 0, and makes the minimum number of changes required to satisfy the provided `new_priorities`. For example, if no queues currently have the provided `new_priorities`, then they are assigned without affecting other queues. If they are held by other queues, then those queues' priorities are incremented as necessary. Updating queue priorities is not a common operation (it happens on the same scale as queue modification, which is significantly less frequent than reading from queues), so while this implementation is slow, its extreme simplicity makes up for it.
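The priority-collision behavior described for `bulk_update_work_queue_priorities` can be sketched in plain Python. This is a hypothetical in-memory version; the real function operates on ORM rows inside a session:

```python
def apply_new_priorities(
    priorities: dict[str, int], new_priorities: dict[str, int]
) -> dict[str, int]:
    """Reassign queue priorities, keeping them unique integers > 0.

    Queues named in ``new_priorities`` receive their requested
    priority; any other queue whose priority collides is bumped to
    the next free integer, cascading as needed.
    """
    updated = {**priorities, **new_priorities}
    taken = set(new_priorities.values())
    # Walk the remaining queues in priority order so bumps cascade correctly.
    for queue_id in sorted(updated, key=updated.get):
        if queue_id in new_priorities:
            continue
        while updated[queue_id] in taken:
            updated[queue_id] += 1
        taken.add(updated[queue_id])
    return updated
```

Assigning an unused priority leaves the other queues untouched, while claiming an occupied priority shifts the displaced queues down by one each.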
### `read_work_queues` ```python theme={null} read_work_queues(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[orm_models.WorkQueue] ``` Read all work pool queues for a work pool. Results are ordered by ascending priority. **Args:** * `session`: a database session * `work_pool_id`: a work pool id * `work_queue_filter`: Filter criteria for work pool queues * `offset`: Query offset * `limit`: Query limit **Returns:** * List\[orm\_models.WorkQueue]: the WorkQueues ### `count_work_queues` ```python theme={null} count_work_queues(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_queue_filter: Optional[schemas.filters.WorkQueueFilter] = None) -> int ``` Count work pool queues for a work pool. ### `read_work_queue` ```python theme={null} read_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: Union[UUID, PrefectUUID]) -> Optional[orm_models.WorkQueue] ``` Read a specific work pool queue. **Args:** * `session`: a database session * `work_queue_id`: a work pool queue id **Returns:** * orm\_models.WorkQueue: the WorkQueue ### `read_work_queue_by_name` ```python theme={null} read_work_queue_by_name(db: PrefectDBInterface, session: AsyncSession, work_pool_name: str, work_queue_name: str) -> Optional[orm_models.WorkQueue] ``` Reads a WorkQueue by name. **Args:** * `session`: A database session * `work_pool_name`: a WorkPool name * `work_queue_name`: a WorkQueue name **Returns:** * orm\_models.WorkQueue: the WorkQueue ### `update_work_queue` ```python theme={null} update_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, work_queue: schemas.actions.WorkQueueUpdate, emit_status_change: Optional[Callable[[orm_models.WorkQueue], Awaitable[None]]] = None, default_status: WorkQueueStatus = WorkQueueStatus.NOT_READY) -> bool ``` Update a work pool queue. 
**Args:** * `session`: a database session * `work_queue_id`: a work pool queue ID * `work_queue`: a WorkQueue model * `emit_status_change`: function to call when work queue status is changed **Returns:** * whether or not the WorkQueue was updated ### `delete_work_queue` ```python theme={null} delete_work_queue(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> bool ``` Delete a work pool queue. **Args:** * `session`: a database session * `work_queue_id`: a work pool queue ID **Returns:** * whether or not the WorkQueue was deleted ### `read_workers` ```python theme={null} read_workers(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, worker_filter: Optional[schemas.filters.WorkerFilter] = None, limit: Optional[int] = None, offset: Optional[int] = None) -> Sequence[orm_models.Worker] ``` ### `worker_heartbeat` ```python theme={null} worker_heartbeat(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, worker_name: str, heartbeat_interval_seconds: Optional[int] = None) -> bool ``` Record a worker process heartbeat. **Args:** * `session`: a database session * `work_pool_id`: a work pool ID * `worker_name`: a worker name **Returns:** * whether or not the worker was updated ### `delete_worker` ```python theme={null} delete_worker(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, worker_name: str) -> bool ``` Delete a work pool's worker. **Args:** * `session`: a database session * `work_pool_id`: a work pool ID * `worker_name`: a worker name **Returns:** * whether or not the Worker was deleted ### `count_work_pool_slot_holders` ```python theme={null} count_work_pool_slot_holders(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> int ``` Counts flow runs in slot-occupying states for a work pool. 
### `get_work_pool_slot_holders` ```python theme={null} get_work_pool_slot_holders(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID, work_queue_ids: Optional[List[UUID]] = None, flow_run_limit: Optional[int] = None) -> Sequence[tuple[orm_models.FlowRun, Optional[DateTime]]] ``` Returns flow runs in slot-occupying states for a work pool. Each result is a tuple of (FlowRun, slot\_acquired\_at) where slot\_acquired\_at is when the current slot-occupying sequence began. **Args:** * `work_pool_id`: The work pool to query. * `work_queue_ids`: If provided, only return runs for these queues. * `flow_run_limit`: If provided, cap results per work\_queue\_id. ### `count_work_pool_slot_holders_by_queue` ```python theme={null} count_work_pool_slot_holders_by_queue(db: PrefectDBInterface, session: AsyncSession, work_pool_id: UUID) -> dict[UUID, int] ``` Returns `{work_queue_id: count}` for slot-holding runs in a pool. ### `count_work_queue_slot_holders` ```python theme={null} count_work_queue_slot_holders(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID) -> int ``` Counts flow runs in slot-occupying states for a single work queue. ### `get_work_queue_slot_holders` ```python theme={null} get_work_queue_slot_holders(db: PrefectDBInterface, session: AsyncSession, work_queue_id: UUID, offset: Optional[int] = None, limit: Optional[int] = None) -> Sequence[tuple[orm_models.FlowRun, Optional[DateTime]]] ``` Returns flow runs in slot-occupying states for a single work queue. Each result is a tuple of (FlowRun, slot\_acquired\_at) where slot\_acquired\_at is when the current slot-occupying sequence began. ### `emit_work_pool_updated_event` ```python theme={null} emit_work_pool_updated_event(session: AsyncSession, work_pool: orm_models.WorkPool, changed_fields: Dict[str, Dict[str, Any]]) -> None ``` Emit an event when work pool fields are updated. 
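The `flow_run_limit` cap described for `get_work_pool_slot_holders` can be sketched as a post-filter over an ordered result set. This is an illustrative assumption; the actual implementation applies the cap as part of the database query:

```python
from collections import Counter
from typing import Optional

def cap_per_queue(runs: list[dict], flow_run_limit: Optional[int]) -> list[dict]:
    """Keep at most ``flow_run_limit`` runs per work_queue_id.

    In-memory sketch; assumes ``runs`` is already in the order the
    query would return them, so the first N per queue are kept.
    """
    if flow_run_limit is None:
        return list(runs)
    kept_per_queue: Counter = Counter()
    capped = []
    for run in runs:
        queue_id = run["work_queue_id"]
        if kept_per_queue[queue_id] < flow_run_limit:
            capped.append(run)
            kept_per_queue[queue_id] += 1
    return capped
```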
### `emit_work_pool_status_event` ```python theme={null} emit_work_pool_status_event(event_id: UUID, occurred: DateTime, pre_update_work_pool: Optional[orm_models.WorkPool], work_pool: orm_models.WorkPool) -> None ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-__init__ # `prefect.server.orchestration` *This module is empty or contains only private/internal implementations.* # core_policy Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-core_policy # `prefect.server.orchestration.core_policy` Orchestration logic that fires on state transitions. `CoreFlowPolicy` and `CoreTaskPolicy` contain all default orchestration rules that Prefect enforces on a state transition. ## Classes ### `CoreFlowPolicy` Orchestration rules that run against flow-run-state transitions in priority order. **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `CoreTaskPolicy` Orchestration rules that run against task-run-state transitions in priority order. **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `ClientSideTaskOrchestrationPolicy` Orchestration rules that run against task-run-state transitions in priority order, specifically for clients doing client-side orchestration. **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `BackgroundTaskPolicy` Orchestration rules that run against task-run-state transitions in priority order. 
**Methods:** #### `priority` ```python theme={null} priority() -> list[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]] | type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]] ``` ### `MinimalFlowPolicy` **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `MarkLateRunsPolicy` **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `MinimalTaskPolicy` **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `SecureTaskConcurrencySlots` Checks relevant concurrency slots are available before entering a Running state. This rule checks if concurrency limits have been set on the tags associated with a TaskRun. If so, a concurrency slot will be secured against each concurrency limit before being allowed to transition into a running state. If a concurrency limit has been reached, the client will be instructed to delay the transition for the duration specified by the "PREFECT\_TASK\_RUN\_TAG\_CONCURRENCY\_SLOT\_WAIT\_SECONDS" setting before trying again. If the concurrency limit set on a tag is 0, the transition will be aborted to prevent deadlocks. 
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` #### `cleanup` ```python theme={null} cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `ReleaseTaskConcurrencySlots` Releases any concurrency slots held by a run upon exiting a Running or Cancelling state. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `SecureFlowConcurrencySlots` Enforce deployment concurrency limits. This rule enforces concurrency limits on deployments. If a deployment has a concurrency limit, this rule will prevent more than that number of flow runs from being submitted concurrently based on the concurrency limit behavior configured for the deployment. We use the PENDING state as the target transition because this allows workers to secure a slot before provisioning dynamic infrastructure to run a flow. If a slot isn't available, the worker won't provision infrastructure. A lease is created for the concurrency limit. The client will be responsible for maintaining the lease. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: FlowOrchestrationContext) -> None ``` #### `cleanup` ```python theme={null} cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: FlowOrchestrationContext) -> None ``` ### `ValidateDeploymentConcurrencyAtRunning` Validates and renews deployment concurrency leases at the PENDING→RUNNING transition. 
This prevents concurrency violations that occur when the lease reaper reclaims slots from PENDING flows. Without this validation, a flow can lose its slot while provisioning infrastructure and still transition to RUNNING, violating the concurrency limit. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `RemoveDeploymentConcurrencyLeaseForOldClientVersions` Removes a deployment concurrency lease if the client version is less than the minimum version for leasing. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `ReleaseFlowConcurrencySlots` Releases deployment concurrency slots held by a flow run. This rule releases a concurrency slot for a deployment when a flow run transitions out of the Running or Cancelling state. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `CacheInsertion` Caches completed states with cache keys after they are validated. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, db: PrefectDBInterface, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `CacheRetrieval` Rejects running states if a completed state has been cached. 
This rule rejects transitions into a running state with a cache key if the key has already been associated with a completed state in the cache table. The client will be instructed to transition into the cached completed state instead. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, db: PrefectDBInterface, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `RetryFailedFlows` Rejects failed states and schedules a retry if the retry limit has not been reached. This rule rejects transitions into a failed state if `retries` has been set and the run count has not reached the specified limit. The client will be instructed to transition into a scheduled state to retry flow execution. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `RetryFailedTasks` Rejects failed states and schedules a retry if the retry limit has not been reached. This rule rejects transitions into a failed state if `retries` has been set, the run count has not reached the specified limit, and the client asserts it is a retriable task run. The client will be instructed to transition into a scheduled state to retry task execution. 
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `EnqueueScheduledTasks` Enqueues background task runs when they are scheduled. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `RenameReruns` Renames states when a run has executed more than once. In the special case where the initial state is an "AwaitingRetry" scheduled state, the proposed state will be renamed to "Retrying" instead. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, core.TaskRunPolicy | core.FlowRunPolicy]) -> None ``` ### `CopyScheduledTime` Ensures scheduled time is copied from scheduled states to pending states. If a new scheduled time has been proposed on the pending state, the scheduled time on the scheduled state will be ignored. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, core.TaskRunPolicy | core.FlowRunPolicy]) -> None ``` ### `WaitForScheduledTime` Prevents transitions to running states from happening too early. This rule enforces that scheduled states only start according to the machine clock used by the Prefect REST API instance. It identifies transitions from scheduled states that are too early and nullifies them.
Instead, no state will be written to the database and the client will be sent an instruction to wait for `delay_seconds` before attempting the transition again. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, core.TaskRunPolicy | core.FlowRunPolicy]) -> None ``` ### `CopyTaskParametersID` Ensures a task's parameters ID is copied from Scheduled to Pending and from Pending to Running states. If a parameters ID has been included on the proposed state, the parameters ID on the initial state will be ignored. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `HandlePausingFlows` Governs runs attempting to enter a Paused/Suspended state **Methods:** #### `after_transition` ```python theme={null} after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `HandleResumingPausedFlows` Governs runs attempting to leave a Paused state **Methods:** #### `after_transition` ```python theme={null} after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: 
OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `UpdateFlowRunTrackerOnTasks` Tracks the flow run attempt a task run state is associated with. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `HandleTaskTerminalStateTransitions` We do not allow tasks to leave terminal states if: * The task is completed and has a persisted result * The task is going to CANCELLING / PAUSED / CRASHED We reset the run count when a task leaves a terminal state for a non-terminal state which resets task run retries; this is particularly relevant for flow run retries. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` #### `cleanup` ```python theme={null} cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `HandleFlowTerminalStateTransitions` We do not allow flows to leave terminal states if: * The flow is completed and has a persisted result * The flow is going to CANCELLING / PAUSED / CRASHED * The flow is going to scheduled and has no deployment We reset the pause metadata when a flow leaves a terminal state for a non-terminal state. This resets pause behavior during manual flow run retries. 
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` #### `cleanup` ```python theme={null} cleanup(self, initial_state: states.State[Any] | None, validated_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `PreventPendingTransitions` Prevents transitions to PENDING. This rule is only used for flow runs. This is intended to prevent race conditions during duplicate submissions of runs. Before a run is submitted to its execution environment, it should be placed in a PENDING state. If two workers attempt to submit the same run, one of them should encounter a PENDING -> PENDING transition and abort orchestration of the run. Similarly, if the execution environment starts quickly the run may be in a RUNNING state when the second worker attempts the PENDING transition. We deny these state changes as well to prevent duplicate submission. If a run has transitioned to a RUNNING state a worker should not attempt to submit it again unless it has moved into a terminal state. CANCELLING and CANCELLED runs should not be allowed to transition to PENDING. For re-runs of deployed runs, they should transition to SCHEDULED first. For re-runs of ad-hoc runs, they should transition directly to RUNNING. 
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.Run, Union[core.FlowRunPolicy, core.TaskRunPolicy]]) -> None ``` ### `EnsureOnlyScheduledFlowsMarkedLate` **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `EnforceDeploymentConcurrencyOnLate` Enforce the CANCEL\_NEW deployment concurrency strategy when marking runs late. When a flow run would be marked Late and its deployment uses the CANCEL\_NEW collision strategy with a fully occupied concurrency limit, this rule rejects the Late transition and replaces it with a Cancelled state. This closes the gap where CANCEL\_NEW is normally enforced at the \* -> PENDING transition (by SecureFlowConcurrencySlots), but runs that never reach PENDING because they go late would accumulate in a Late state instead of being cancelled. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `PreventRunningTasksFromStoppedFlows` Prevents running tasks from stopped flows. A running state implies execution, but also the converse. This rule ensures that a flow's tasks cannot be run unless the flow is also running. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `EnforceCancellingToCancelledTransition` Rejects transitions from Cancelling to any terminal state except for Cancelled. 
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `BypassCancellingFlowRunsWithNoInfra` Rejects transitions from Scheduled to Cancelling, and instead sets the state to Cancelled, if the flow run has no associated infrastructure process ID. It also rejects transitions from Paused to Cancelling if the Paused state's details indicate that the flow run has been suspended, exiting the flow and tearing down infrastructure. The `Cancelling` state is used to clean up infrastructure. If there is no infrastructure to clean up, we can transition directly to `Cancelled`. Runs that are `Resuming` are in a `Scheduled` state, were previously `Suspended`, and do not yet have infrastructure. Runs that are `AwaitingRetry` are in a `Scheduled` state and may have associated infrastructure. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `PreserveDeploymentConcurrencyLeaseId` Preserves the deployment concurrency lease ID across state transitions. Workers send deployment\_concurrency\_lease\_id: null in the proposed state JSON body (e.g., for PENDING→PENDING(Submitting)). Pydantic v2 treats null JSON fields as explicitly set, so the lease ID would otherwise be silently dropped. This transform copies the lease ID forward whenever the initial state has one and the proposed state does not. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `PreventDuplicateTransitions` Prevent duplicate transitions from being made right after one another.
This rule allows clients to set an optional transition\_id on a state. If the run's next transition has the same transition\_id, the transition will be rejected and the existing state will be returned. This lets clients make state transition requests without worrying about the following case: * A client makes a state transition request * The server accepts and commits the transition * The client never receives the response and retries the request **Methods:** #### `before_transition` ```python theme={null} before_transition(self, initial_state: states.State[Any] | None, proposed_state: states.State[Any] | None, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` # dependencies Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-dependencies # `prefect.server.orchestration.dependencies` Injected orchestration dependencies. ## Functions ### `provide_task_policy` ```python theme={null} provide_task_policy() -> type[TaskRunOrchestrationPolicy] ``` ### `provide_flow_policy` ```python theme={null} provide_flow_policy() -> type[FlowRunOrchestrationPolicy] ``` ### `provide_task_orchestration_parameters` ```python theme={null} provide_task_orchestration_parameters() -> dict[str, Any] ``` ### `provide_flow_orchestration_parameters` ```python theme={null} provide_flow_orchestration_parameters() -> dict[str, Any] ``` ### `temporary_task_policy` ```python theme={null} temporary_task_policy(tmp_task_policy: type[TaskRunOrchestrationPolicy]) ``` ### `temporary_flow_policy` ```python theme={null} temporary_flow_policy(tmp_flow_policy: type[FlowRunOrchestrationPolicy]) ``` ### `temporary_task_orchestration_parameters` ```python theme={null} temporary_task_orchestration_parameters(tmp_orchestration_parameters: dict[str, Any]) ``` ### `temporary_flow_orchestration_parameters` ```python theme={null} temporary_flow_orchestration_parameters(tmp_orchestration_parameters: dict[str, Any]) ``` ## Classes
### `OrchestrationDependencies` # global_policy Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-global_policy # `prefect.server.orchestration.global_policy` Bookkeeping logic that fires on every state transition. For clarity, `GlobalFlowPolicy` and `GlobalTaskPolicy` contain all transition logic implemented using `BaseUniversalTransform`. None of these operations modify state, and regardless of what orchestration the Prefect REST API might enforce on a transition, the global policies contain Prefect's necessary bookkeeping. Because these transforms record information about the validated state committed to the state database, they should be the most deeply nested contexts in the orchestration loop. ## Functions ### `COMMON_GLOBAL_TRANSFORMS` ```python theme={null} COMMON_GLOBAL_TRANSFORMS() -> list[type[BaseUniversalTransform[orm_models.Run, Union[core.FlowRunPolicy, core.TaskRunPolicy]]]] ``` ## Classes ### `GlobalFlowPolicy` Global transforms that run against flow-run-state transitions in priority order. These transforms are intended to run immediately before and after a state transition is validated. **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.FlowRun, core.FlowRunPolicy]], type[BaseOrchestrationRule[orm_models.FlowRun, core.FlowRunPolicy]]]] ``` ### `GlobalTaskPolicy` Global transforms that run against task-run-state transitions in priority order. These transforms are intended to run immediately before and after a state transition is validated. **Methods:** #### `priority` ```python theme={null} priority() -> list[Union[type[BaseUniversalTransform[orm_models.TaskRun, core.TaskRunPolicy]], type[BaseOrchestrationRule[orm_models.TaskRun, core.TaskRunPolicy]]]] ``` ### `SetRunStateType` Updates the state type of a run on a state transition.
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetRunStateName` Updates the state name of a run on a state transition. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetStartTime` Records the time a run enters a running state for the first time. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetRunStateTimestamp` Records the time a run changes states. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetEndTime` Records the time a run enters a terminal state. With normal client usage, a run will not transition out of a terminal state. However, it's possible to force these transitions manually via the API. When a run leaves a terminal state, its end time is unset. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `IncrementRunTime` Records the amount of time a run spends in the running state. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `IncrementFlowRunCount` Records the number of times a run enters a running state. For use with retries. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `RemoveResumingIndicator` Removes the indicator on a flow run that marks it as resuming.
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `IncrementTaskRunCount` Records the number of times a run enters a running state. For use with retries. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: OrchestrationContext[orm_models.TaskRun, core.TaskRunPolicy]) -> None ``` ### `SetExpectedStartTime` Estimates the time a state is expected to start running if not set. For scheduled states, this estimate is simply the scheduled time. For other states, this is set to the time the proposed state was created by Prefect. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `SetNextScheduledStartTime` Records the scheduled time on a run. When a run enters a scheduled state, `run.next_scheduled_start_time` is set to the state's scheduled time. When leaving a scheduled state, `run.next_scheduled_start_time` is unset. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext[orm_models.Run, Any]) -> None ``` ### `UpdateSubflowParentTask` Whenever a subflow changes state, it must update its parent task run's state. **Methods:** #### `after_transition` ```python theme={null} after_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `UpdateSubflowStateDetails` Updates a child subflow state's references to the corresponding tracking task run id in the parent flow run. **Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` ### `UpdateStateDetails` Updates a state's references to the corresponding flow- or task- run.
**Methods:** #### `before_transition` ```python theme={null} before_transition(self, context: GenericOrchestrationContext) -> None ``` # instrumentation_policies Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-instrumentation_policies # `prefect.server.orchestration.instrumentation_policies` Orchestration rules related to instrumenting the orchestration engine for Prefect Observability ## Classes ### `InstrumentFlowRunStateTransitions` When a Flow Run changes states, fire a Prefect Event for the state change **Methods:** #### `after_transition` ```python theme={null} after_transition(self, context: OrchestrationContext[orm_models.FlowRun, core.FlowRunPolicy]) -> None ``` # policies Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-policies # `prefect.server.orchestration.policies` Policies are collections of orchestration rules and transforms. Prefect implements (most) orchestration with logic that governs a Prefect flow or task changing state. Policies organize orchestration logic both to provide an ordering mechanism and to provide observability into the orchestration process. While Prefect's orchestration rules can gracefully run independently of one another, ordering can still have an impact on the observed behavior of the system. For example, it makes no sense to secure a concurrency slot for a run if a cached state exists. Furthermore, policies provide a mechanism to configure and observe exactly what logic will fire against a transition. ## Classes ### `BaseOrchestrationPolicy` An abstract base class used to organize orchestration rules in priority order. Different collections of orchestration rules might be used to govern various kinds of transitions. For example, flow-run states and task-run states might require different orchestration logic.
**Methods:** #### `compile_transition_rules` ```python theme={null} compile_transition_rules(cls, from_state: states.StateType | None = None, to_state: states.StateType | None = None) -> list[type[BaseUniversalTransform[T, RP] | BaseOrchestrationRule[T, RP]]] ``` Returns rules in policy that are valid for the specified state transition. #### `priority` ```python theme={null} priority() -> list[type[BaseUniversalTransform[T, RP] | BaseOrchestrationRule[T, RP]]] ``` A list of orchestration rules in priority order. ### `TaskRunOrchestrationPolicy` ### `FlowRunOrchestrationPolicy` ### `GenericOrchestrationPolicy` # rules Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-orchestration-rules # `prefect.server.orchestration.rules` Prefect's flow and task-run orchestration machinery. This module contains all the core concepts necessary to implement Prefect's state orchestration engine. These states correspond to intuitive descriptions of all the points that a Prefect flow or task can observe executing user code and intervene, if necessary. A detailed description of states can be found in our concept [documentation](https://docs.prefect.io/v3/concepts/states). Prefect's orchestration engine operates under the assumption that no governed user code will execute without first requesting that the Prefect REST API validate a change in state and record metadata about the run. With all attempts to run user code being checked against a Prefect instance, the Prefect REST API database becomes the unambiguous source of truth for managing the execution of complex interacting workflows. Orchestration rules can be implemented as discrete units of logic that operate against each state transition and can be fully observable, extensible, and customizable -- all without needing to store or parse a single line of user code. ## Classes ### `OrchestrationContext` A container for a state transition, governed by orchestration rules.
When a flow- or task- run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an `OrchestrationContext`, which is subsequently governed by nested orchestration rules implemented using the `BaseOrchestrationRule` ABC. `OrchestrationContext` introduces the concept of a state being `None` in the context of an intended state transition. An initial state can be `None` if a run is attempting to set a state for the first time. The proposed state might be `None` if a rule governing the transition determines that no state change should occur at all and nothing is written to the database. **Attributes:** * `session`: a SQLAlchemy database session * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into * `validated_state`: a proposed state that has been committed to the database * `rule_signature`: a record of rules that have fired on entry into a managed context, currently only used for debugging purposes * `finalization_signature`: a record of rules that have fired on exit from a managed context, currently only used for debugging purposes * `response_status`: a SetStateStatus object used to build the API response * `response_details`: a StateResponseDetails object used to build the API response **Args:** * `session`: a SQLAlchemy database session * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into **Methods:** #### `entry_context` ```python theme={null} entry_context(self) -> tuple[Optional[states.State], Optional[states.State], Self] ``` A convenience method that generates input parameters for orchestration rules. An `OrchestrationContext` defines a state transition that is managed by orchestration rules which can fire hooks before a transition has been committed to the database.
These hooks have a consistent interface which can be generated with this method. #### `exit_context` ```python theme={null} exit_context(self) -> tuple[Optional[states.State], Optional[states.State], Self] ``` A convenience method that generates input parameters for orchestration rules. An `OrchestrationContext` defines a state transition that is managed by orchestration rules which can fire hooks after a transition has been committed to the database. These hooks have a consistent interface which can be generated with this method. #### `flow_run` ```python theme={null} flow_run(self) -> orm_models.FlowRun | None ``` #### `initial_state_type` ```python theme={null} initial_state_type(self) -> Optional[states.StateType] ``` The state type of `self.initial_state` if it exists. #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `proposed_state_type` ```python theme={null} proposed_state_type(self) -> Optional[states.StateType] ``` The state type of `self.proposed_state` if it exists. #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `run_settings` ```python theme={null} run_settings(self) -> RP ``` Run-level settings used to orchestrate the state transition. #### `safe_copy` ```python theme={null} safe_copy(self) -> Self ``` Creates a mostly-mutation-safe copy for use in orchestration rules. Orchestration rules govern state transitions using information stored in an `OrchestrationContext`. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, `self.safe_copy` can be used to pass information to orchestration rules without risking mutation. 
**Returns:** * A mutation-safe copy of the `OrchestrationContext` #### `validated_state_type` ```python theme={null} validated_state_type(self) -> Optional[states.StateType] ``` The state type of `self.validated_state` if it exists. ### `FlowOrchestrationContext` A container for a flow run state transition, governed by orchestration rules. When a flow run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an `OrchestrationContext`, which is subsequently governed by nested orchestration rules implemented using the `BaseOrchestrationRule` ABC. `FlowOrchestrationContext` introduces the concept of a state being `None` in the context of an intended state transition. An initial state can be `None` if a run is attempting to set a state for the first time. The proposed state might be `None` if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
**Attributes:** * `session`: a SQLAlchemy database session * `run`: the flow run attempting to change state * `initial_state`: the initial state of the run * `proposed_state`: the proposed state the run is transitioning into * `validated_state`: a proposed state that has been committed to the database * `rule_signature`: a record of rules that have fired on entry into a managed context, currently only used for debugging purposes * `finalization_signature`: a record of rules that have fired on exit from a managed context, currently only used for debugging purposes * `response_status`: a SetStateStatus object used to build the API response * `response_details`: a StateResponseDetails object used to build the API response **Args:** * `session`: a SQLAlchemy database session * `run`: the flow run attempting to change state * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into **Methods:** #### `flow_run` ```python theme={null} flow_run(self) -> orm_models.FlowRun ``` #### `run_settings` ```python theme={null} run_settings(self) -> core.FlowRunPolicy ``` Run-level settings used to orchestrate the state transition. #### `safe_copy` ```python theme={null} safe_copy(self) -> Self ``` Creates a mostly-mutation-safe copy for use in orchestration rules. Orchestration rules govern state transitions using information stored in an `OrchestrationContext`. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, `self.safe_copy` can be used to pass information to orchestration rules without risking mutation. **Returns:** * A mutation-safe copy of `FlowOrchestrationContext` #### `task_run` ```python theme={null} task_run(self) -> None ``` #### `validate_proposed_state` ```python theme={null} validate_proposed_state(self, db: PrefectDBInterface) ``` Validates a proposed state by committing it to the database.
After the `FlowOrchestrationContext` is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. `self.validated_state` is set to the flushed state. The state on the run is set to the validated state as well. If the proposed state is `None` when this method is called, no state will be written and `self.validated_state` will be set to the run's current state. **Returns:** * None ### `TaskOrchestrationContext` A container for a task run state transition, governed by orchestration rules. When a task run attempts to change state, Prefect REST API has an opportunity to decide whether this transition can proceed. All the relevant information associated with the state transition is stored in an `OrchestrationContext`, which is subsequently governed by nested orchestration rules implemented using the `BaseOrchestrationRule` ABC. `TaskOrchestrationContext` introduces the concept of a state being `None` in the context of an intended state transition. An initial state can be `None` if a run is attempting to set a state for the first time. The proposed state might be `None` if a rule governing the transition determines that no state change should occur at all and nothing is written to the database.
**Attributes:** * `session`: a SQLAlchemy database session * `run`: the task run attempting to change state * `initial_state`: the initial state of the run * `proposed_state`: the proposed state the run is transitioning into * `validated_state`: a proposed state that has been committed to the database * `rule_signature`: a record of rules that have fired on entry into a managed context, currently only used for debugging purposes * `finalization_signature`: a record of rules that have fired on exit from a managed context, currently only used for debugging purposes * `response_status`: a SetStateStatus object used to build the API response * `response_details`: a StateResponseDetails object used to build the API response **Args:** * `session`: a SQLAlchemy database session * `run`: the task run attempting to change state * `initial_state`: the initial state of a run * `proposed_state`: the proposed state a run is transitioning into **Methods:** #### `flow_run` ```python theme={null} flow_run(self) -> orm_models.FlowRun | None ``` #### `run_settings` ```python theme={null} run_settings(self) -> core.TaskRunPolicy ``` Run-level settings used to orchestrate the state transition. #### `safe_copy` ```python theme={null} safe_copy(self) -> Self ``` Creates a mostly-mutation-safe copy for use in orchestration rules. Orchestration rules govern state transitions using information stored in an `OrchestrationContext`. However, mutating objects stored on the context directly can have unintended side-effects. To guard against this, `self.safe_copy` can be used to pass information to orchestration rules without risking mutation. **Returns:** * A mutation-safe copy of `TaskOrchestrationContext` #### `task_run` ```python theme={null} task_run(self) -> orm_models.TaskRun ``` #### `validate_proposed_state` ```python theme={null} validate_proposed_state(self, db: PrefectDBInterface) ``` Validates a proposed state by committing it to the database.
After the `TaskOrchestrationContext` is governed by orchestration rules, the proposed state can be validated: the proposed state is added to the current SQLAlchemy session and is flushed. `self.validated_state` is set to the flushed state. The state on the run is set to the validated state as well. If the proposed state is `None` when this method is called, no state will be written and `self.validated_state` will be set to the run's current state. **Returns:** * None ### `BaseOrchestrationRule` An abstract base class used to implement a discrete piece of orchestration logic. An `OrchestrationRule` is a stateful context manager that directly governs a state transition. Complex orchestration is achieved by nesting multiple rules. Each rule runs against an `OrchestrationContext` that contains the transition details; this context is then passed to subsequent rules. The context can be modified by hooks that fire before and after a new state is validated and committed to the database. These hooks will fire as long as the state transition is considered "valid" and govern a transition by either modifying the proposed state before it is validated or by producing a side-effect. A state transition occurs whenever a flow- or task- run changes state, prompting Prefect REST API to decide whether or not this transition can proceed. The current state of the run is referred to as the "initial state", and the state a run is attempting to transition into is the "proposed state". Together, the initial state transitioning into the proposed state is the intended transition that is governed by these orchestration rules. After using rules to enter a runtime context, the `OrchestrationContext` will contain a proposed state that has been governed by each rule, and at that point can validate the proposed state and commit it to the database.
The validated state will be set on the context as `context.validated_state`, and rules will call the `self.after_transition` hook upon exiting the managed context. Examples: Create a rule:

```python theme={null}
class BasicRule(BaseOrchestrationRule):
    # allowed initial state types
    FROM_STATES = [StateType.RUNNING]
    # allowed proposed state types
    TO_STATES = [StateType.COMPLETED, StateType.FAILED]

    async def before_transition(self, initial_state, proposed_state, ctx):
        # side effects and proposed state mutation can happen here
        ...

    async def after_transition(self, initial_state, validated_state, ctx):
        # operations on states that have been validated can happen here
        ...

    async def cleanup(self, initial_state, validated_state, ctx):
        # reverts side effects generated by `before_transition` if necessary
        ...
```

Use a rule:

```python theme={null}
intended_transition = (StateType.RUNNING, StateType.COMPLETED)
async with BasicRule(context, *intended_transition):
    # context.proposed_state has been governed by BasicRule
    ...
```

Use multiple rules:

```python theme={null}
rules = [BasicRule, BasicRule]
intended_transition = (StateType.RUNNING, StateType.COMPLETED)
async with contextlib.AsyncExitStack() as stack:
    for rule in rules:
        await stack.enter_async_context(rule(context, *intended_transition))

    # context.proposed_state has been governed by all rules
    ...
``` **Attributes:** * `FROM_STATES`: list of valid initial state types this rule governs * `TO_STATES`: list of valid proposed state types this rule governs * `context`: the orchestration context * `from_state_type`: the state type a run is currently in * `to_state_type`: the intended proposed state type prior to any orchestration **Args:** * `context`: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is passed between rules * `from_state_type`: The state type of the initial state of a run, if this state type is not contained in `FROM_STATES`, no hooks will fire * `to_state_type`: The state type of the proposed state before orchestration, if this state type is not contained in `TO_STATES`, no hooks will fire **Methods:** #### `abort_transition` ```python theme={null} abort_transition(self, reason: str) -> None ``` Aborts a proposed transition before the transition is validated. This method will abort a proposed transition, expecting no further action to occur for this run. The proposed state is set to `None`, signaling to the `OrchestrationContext` that no state should be written to the database. A reason for aborting the transition is also provided. Rules that abort the transition will not fizzle, despite the proposed state type changing. **Args:** * `reason`: The reason for aborting the transition #### `after_transition` ```python theme={null} after_transition(self, initial_state: Optional[states.State], validated_state: Optional[states.State], context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire after a state is committed to the database. **Args:** * `initial_state`: The initial state of a transition * `validated_state`: The governed state that has been committed to the database * `context`: A safe copy of the `OrchestrationContext`, with the exception of `context.run`, mutating this context will have no effect on the broader orchestration environment. 
**Returns:** * None #### `before_transition` ```python theme={null} before_transition(self, initial_state: Optional[states.State], proposed_state: Optional[states.State], context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire before a state is committed to the database. This hook may produce side-effects or mutate the proposed state of a transition using one of four methods: `self.reject_transition`, `self.delay_transition`, `self.abort_transition`, and `self.rename_state`. **Args:** * `initial_state`: The initial state of a transition * `proposed_state`: The proposed state of a transition * `context`: A safe copy of the `OrchestrationContext`, with the exception of `context.run`, mutating this context will have no effect on the broader orchestration environment. **Returns:** * None #### `cleanup` ```python theme={null} cleanup(self, initial_state: Optional[states.State], validated_state: Optional[states.State], context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire after a state is committed to the database. The intended use of this method is to revert side-effects produced by `self.before_transition` when the transition is found to be invalid on exit. This allows multiple rules to be gracefully run in sequence, without logic that keeps track of all other rules that might govern a transition. **Args:** * `initial_state`: The initial state of a transition * `validated_state`: The governed state that has been committed to the database * `context`: A safe copy of the `OrchestrationContext`, with the exception of `context.run`, mutating this context will have no effect on the broader orchestration environment. **Returns:** * None #### `delay_transition` ```python theme={null} delay_transition(self, delay_seconds: int, reason: str) -> None ``` Delays a proposed transition before the transition is validated. 
This method will delay a proposed transition, setting the proposed state to `None`, signaling to the `OrchestrationContext` that no state should be written to the database. The number of seconds a transition should be delayed is passed to the `OrchestrationContext`. A reason for delaying the transition is also provided. Rules that delay the transition will not fizzle, despite the proposed state type changing. **Args:** * `delay_seconds`: The number of seconds the transition should be delayed * `reason`: The reason for delaying the transition #### `fizzled` ```python theme={null} fizzled(self) -> bool ``` Determines if a rule is fizzled and side-effects need to be reverted. Rules are fizzled if the transition was valid on entry (thus firing `self.before_transition`) but is invalid upon exiting the governed context, most likely caused by another rule mutating the transition. **Returns:** * True if the rule is fizzled, False otherwise. #### `invalid` ```python theme={null} invalid(self) -> bool ``` Determines if a rule is invalid. Invalid rules do nothing and no hooks fire upon entering or exiting a governed context. Rules are invalid if the transition's state types are not contained in `self.FROM_STATES` and `self.TO_STATES`, or if the context is proposing a transition that differs from the transition the rule was instantiated with. **Returns:** * True if the rule is invalid, False otherwise. #### `invalid_transition` ```python theme={null} invalid_transition(self) -> bool ``` Determines if the transition proposed by the `OrchestrationContext` is invalid. If the `OrchestrationContext` is attempting to manage a transition with this rule that differs from the transition the rule was instantiated with, the transition is considered to be invalid. Depending on the context, a rule with an invalid transition is either "invalid" or "fizzled". **Returns:** * True if the transition is invalid, False otherwise.
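The distinction between "invalid" and "fizzled" can be sketched with a toy model. The `MiniRule` class and dict-based context below are hypothetical simplifications for illustration only, not Prefect's actual implementation:

```python theme={null}
from enum import Enum


class StateType(Enum):
    RUNNING = "RUNNING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"


class MiniRule:
    """Hypothetical stand-in for BaseOrchestrationRule's bookkeeping."""

    FROM_STATES = {StateType.RUNNING}
    TO_STATES = {StateType.COMPLETED}

    def __init__(self, context: dict, from_type: StateType, to_type: StateType):
        # record the transition this rule was instantiated with
        self.context = context
        self.from_type = from_type
        self.to_type = to_type

    def invalid(self) -> bool:
        # invalid rules never fire: the intended transition is out of scope
        return (
            self.from_type not in self.FROM_STATES
            or self.to_type not in self.TO_STATES
        )

    def fizzled(self) -> bool:
        # fizzled rules were valid on entry, but another rule has since
        # mutated the proposed state, so side effects must be reverted
        if self.invalid():
            return False
        return self.context["proposed_type"] != self.to_type


context = {"proposed_type": StateType.COMPLETED}
rule = MiniRule(context, StateType.RUNNING, StateType.COMPLETED)
assert not rule.invalid() and not rule.fizzled()

# a later rule rejects the transition, mutating the proposed state
context["proposed_type"] = StateType.FAILED
assert rule.fizzled()  # cleanup() would now revert any side effects

# a rule instantiated with an out-of-scope transition is invalid, never fizzled
noop = MiniRule(context, StateType.FAILED, StateType.COMPLETED)
assert noop.invalid() and not noop.fizzled()
```

In the real engine this distinction decides what happens on exit from the governed context: fizzled rules get their `cleanup` hook, while invalid rules fire no hooks at all.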
#### `reject_transition` ```python theme={null} reject_transition(self, state: Optional[states.State], reason: str) -> None ``` Rejects a proposed transition before the transition is validated. This method will reject a proposed transition, mutating the proposed state to the provided `state`. A reason for rejecting the transition is also passed on to the `OrchestrationContext`. Rules that reject the transition will not fizzle, despite the proposed state type changing. **Args:** * `state`: The new proposed state. If `None`, the current run state will be returned in the result instead. * `reason`: The reason for rejecting the transition #### `rename_state` ```python theme={null} rename_state(self, state_name: str) -> None ``` Sets the "name" attribute on a proposed state. The name of a state is an annotation intended to provide rich, human-readable context for how a run is progressing. This method only updates the name and not the canonical state TYPE, and will not fizzle or invalidate any other rules that might govern this state transition. #### `update_context_parameters` ```python theme={null} update_context_parameters(self, key: str, value: Any) -> None ``` Updates the "parameters" dictionary attribute with the specified key-value pair. This mechanism streamlines the process of passing messages and information between orchestration rules if necessary and is simpler and more ephemeral than message-passing via the database or some other side-effect. This mechanism can be used to break up large rules for ease of testing or comprehension, but note that any rules coupled this way (or any other way) are no longer independent and the order in which they appear in the orchestration policy priority will matter. ### `FlowRunOrchestrationRule` ### `TaskRunOrchestrationRule` ### `GenericOrchestrationRule` ### `BaseUniversalTransform` An abstract base class used to implement privileged bookkeeping logic. 
Unlike the orchestration rules implemented with the `BaseOrchestrationRule` ABC, universal transforms are not stateful, and fire their before- and after- transition hooks on every state transition unless the proposed state is `None`, indicating that no state should be written to the database. Because there are no guardrails in place to prevent directly mutating state or other parts of the orchestration context, universal transforms should be used with care. **Attributes:** * `FROM_STATES`: for compatibility with `BaseOrchestrationPolicy` * `TO_STATES`: for compatibility with `BaseOrchestrationPolicy` * `context`: the orchestration context * `from_state_type`: the state type a run is currently in * `to_state_type`: the intended proposed state type prior to any orchestration **Args:** * `context`: A `FlowOrchestrationContext` or `TaskOrchestrationContext` that is passed between transforms **Methods:** #### `after_transition` ```python theme={null} after_transition(self, context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that can fire after a state is committed to the database. **Args:** * `context`: the `OrchestrationContext` that contains transition details **Returns:** * None #### `before_transition` ```python theme={null} before_transition(self, context: OrchestrationContext[T, RP]) -> None ``` Implements a hook that fires before a state is committed to the database. **Args:** * `context`: the `OrchestrationContext` that contains transition details **Returns:** * None #### `exception_in_transition` ```python theme={null} exception_in_transition(self) -> bool ``` Determines if the transition has encountered an exception. **Returns:** * True if the transition encountered an exception, False otherwise. #### `nullified_transition` ```python theme={null} nullified_transition(self) -> bool ``` Determines if the transition has been nullified.
Transitions are nullified if the proposed state is `None`, indicating that nothing should be written to the database. **Returns:** * True if the transition is nullified, False otherwise. ### `TaskRunUniversalTransform` ### `FlowRunUniversalTransform` ### `GenericUniversalTransform` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-__init__ # `prefect.server.schemas` *This module is empty or contains only private/internal implementations.* # actions Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-actions # `prefect.server.schemas.actions` Reduced schemas for accepting API actions. ## Functions ### `validate_base_job_template` ```python theme={null} validate_base_job_template(v: dict[str, Any]) -> dict[str, Any] ``` ## Classes ### `ActionBaseModel` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowCreate` Data used by the Prefect REST API to create a flow. ### `FlowUpdate` Data used by the Prefect REST API to update a flow. ### `DeploymentScheduleCreate` **Methods:** #### `validate_max_scheduled_runs` ```python theme={null} validate_max_scheduled_runs(cls, v: PositiveInteger | None) -> PositiveInteger | None ``` ### `DeploymentScheduleUpdate` **Methods:** #### `validate_max_scheduled_runs` ```python theme={null} validate_max_scheduled_runs(cls, v: PositiveInteger | None) -> PositiveInteger | None ``` ### `DeploymentCreate` Data used by the Prefect REST API to create a deployment. 
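Deployment job variables are checked against the defaults declared in a work pool's base job template. A simplified, hypothetical sketch of that kind of check (Prefect's real validation is done against a JSON schema; the function and template below are illustrative stand-ins):

```python
# Hypothetical stand-in for deployment job-variable validation: overlay
# the base job template's declared defaults with user-supplied
# job_variables, then verify that required keys are present.

from typing import Any


def check_job_variables(base_job_template: dict[str, Any], job_variables: dict[str, Any]) -> dict[str, Any]:
    variables = base_job_template.get("variables", {})
    defaults = {
        name: spec["default"]
        for name, spec in variables.get("properties", {}).items()
        if "default" in spec
    }
    merged = {**defaults, **job_variables}  # user-supplied values win over defaults
    missing = [req for req in variables.get("required", []) if req not in merged]
    if missing:
        raise ValueError(f"missing required job variables: {missing}")
    return merged


template = {
    "variables": {
        "properties": {"image": {"default": "python:3.12"}, "cpu": {}},
        "required": ["cpu"],
    }
}
assert check_job_variables(template, {"cpu": 2}) == {"image": "python:3.12", "cpu": 2}
```

Note that, as the docstrings below caution, the real check must also account for block references in default values, which this sketch does not attempt.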
**Methods:** #### `check_valid_configuration` ```python theme={null} check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the specified schema. NOTE: This method does not hydrate block references in default values within the base job template to validate them. Failing to do this can cause user-facing errors. Instead of this method, use `validate_job_variables_for_deployment` function from `prefect_cloud.orion.api.validation`. #### `remove_old_fields` ```python theme={null} remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `DeploymentUpdate` Data used by the Prefect REST API to update a deployment. **Methods:** #### `check_valid_configuration` ```python theme={null} check_valid_configuration(self, base_job_template: dict[str, Any]) -> None ``` Check that the combination of base\_job\_template defaults and job\_variables conforms to the schema specified in the base\_job\_template. NOTE: This method does not hydrate block references in default values within the base job template to validate them. Failing to do this can cause user-facing errors. Instead of this method, use `validate_job_variables_for_deployment` function from `prefect_cloud.orion.api.validation`. #### `remove_old_fields` ```python theme={null} remove_old_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `FlowRunUpdate` Data used by the Prefect REST API to update a flow run. **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` ### `StateCreate` Data used by the Prefect REST API to create a new state. 
**Methods:** #### `default_name_from_type` ```python theme={null} default_name_from_type(self) ``` If a name is not provided, use the type #### `default_scheduled_start_time` ```python theme={null} default_scheduled_start_time(self) ``` ### `TaskRunCreate` Data used by the Prefect REST API to create a task run **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` #### `validate_cache_key` ```python theme={null} validate_cache_key(cls, cache_key: str | None) -> str | None ``` ### `TaskRunUpdate` Data used by the Prefect REST API to update a task run **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` ### `FlowRunCreate` Data used by the Prefect REST API to create a flow run. **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` ### `DeploymentFlowRunCreate` Data used by the Prefect REST API to create a flow run from a deployment. **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` ### `SavedSearchCreate` Data used by the Prefect REST API to create a saved search. ### `ConcurrencyLimitCreate` Data used by the Prefect REST API to create a concurrency limit. ### `ConcurrencyLimitV2Create` Data used by the Prefect REST API to create a v2 concurrency limit. ### `ConcurrencyLimitV2Update` Data used by the Prefect REST API to update a v2 concurrency limit. ### `BlockTypeCreate` Data used by the Prefect REST API to create a block type. ### `BlockTypeUpdate` Data used by the Prefect REST API to update a block type. **Methods:** #### `updatable_fields` ```python theme={null} updatable_fields(cls) -> set[str] ``` ### `BlockSchemaCreate` Data used by the Prefect REST API to create a block schema. ### `BlockDocumentCreate` Data used by the Prefect REST API to create a block document. 
**Methods:** #### `validate_name_is_present_if_not_anonymous` ```python theme={null} validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `BlockDocumentUpdate` Data used by the Prefect REST API to update a block document. ### `BlockDocumentReferenceCreate` Data used to create a block document reference. **Methods:** #### `validate_parent_and_ref_are_different` ```python theme={null} validate_parent_and_ref_are_different(cls, values) ``` ### `LogCreate` Data used by the Prefect REST API to create a log. ### `WorkPoolCreate` Data used by the Prefect REST API to create a work pool. ### `WorkPoolUpdate` Data used by the Prefect REST API to update a work pool. ### `WorkQueueCreate` Data used by the Prefect REST API to create a work queue. ### `WorkQueueUpdate` Data used by the Prefect REST API to update a work queue. ### `ArtifactCreate` Data used by the Prefect REST API to create an artifact. **Methods:** #### `from_result` ```python theme={null} from_result(cls, data: Any | dict[str, Any]) -> 'ArtifactCreate' ``` ### `ArtifactUpdate` Data used by the Prefect REST API to update an artifact. ### `VariableCreate` Data used by the Prefect REST API to create a Variable. ### `VariableUpdate` Data used by the Prefect REST API to update a Variable. # core Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-core # `prefect.server.schemas.core` Full schemas of Prefect REST API objects. ## Classes ### `Flow` An ORM representation of flow data. ### `FlowRunPolicy` Defines how a flow run should retry. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python theme={null} populate_deprecated_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. ### `CreatedBy` ### `UpdatedBy` ### `ConcurrencyLimitStrategy` Enumeration of concurrency collision strategies. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ConcurrencyOptions` Class for storing the concurrency config in the database. ### `FlowRun` An ORM representation of flow run data. **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` ### `TaskRunPolicy` Defines how a task run should retry. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `populate_deprecated_fields` ```python theme={null} populate_deprecated_fields(cls, values: dict[str, Any]) -> dict[str, Any] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_configured_retry_delays` ```python theme={null} validate_configured_retry_delays(cls, v: int | float | list[int] | list[float] | None) -> int | float | list[int] | list[float] | None ``` #### `validate_jitter_factor` ```python theme={null} validate_jitter_factor(cls, v: float | None) -> float | None ``` ### `RunInput` Base class for classes that represent inputs to runs, which could include constants, parameters, task runs, or flow runs. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TaskRunResult` Represents a task run result input to another task run. 
### `FlowRunResult` ### `Parameter` Represents a parameter input to a task run. ### `Constant` Represents a constant input value to a task run. ### `TaskRun` An ORM representation of task run data. **Methods:** #### `set_name` ```python theme={null} set_name(cls, name: str) -> str ``` #### `validate_cache_key` ```python theme={null} validate_cache_key(cls, cache_key: str) -> str ``` ### `DeploymentSchedule` **Methods:** #### `validate_max_scheduled_runs` ```python theme={null} validate_max_scheduled_runs(cls, v: int) -> int ``` ### `VersionInfo` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Deployment` An ORM representation of deployment data. ### `ConcurrencyLimit` An ORM representation of a concurrency limit. ### `ConcurrencyLimitV2` An ORM representation of a v2 concurrency limit. ### `BlockType` An ORM representation of a block type. ### `BlockSchema` An ORM representation of a block schema. ### `BlockSchemaReference` An ORM representation of a block schema reference. ### `BlockDocument` An ORM representation of a block document. **Methods:** #### `from_orm_model` ```python theme={null} from_orm_model(cls: type[Self], session: AsyncSession, orm_block_document: 'orm_models.ORMBlockDocument', include_secrets: bool = False) -> Self ``` #### `validate_name_is_present_if_not_anonymous` ```python theme={null} validate_name_is_present_if_not_anonymous(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `BlockDocumentReference` An ORM representation of a block document reference. 
**Methods:** #### `validate_parent_and_ref_are_different` ```python theme={null} validate_parent_and_ref_are_different(cls, values: dict[str, Any]) -> dict[str, Any] ``` ### `Configuration` An ORM representation of account info. ### `SavedSearchFilter` A filter for a saved search model. Intended for use by the Prefect UI. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `SavedSearch` An ORM representation of saved search data. Represents a set of filter criteria. ### `Log` An ORM representation of log data. ### `QueueFilter` Filter criteria definition for a work queue. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueue` An ORM representation of a work queue. ### `WorkQueueHealthPolicy` **Methods:** #### `evaluate_health_status` ```python theme={null} evaluate_health_status(self, late_runs_count: int, last_polled: Optional[DateTime] = None) -> bool ``` Given empirical information about the state of the work queue, evaluate its health status. **Args:** * `late_runs_count`: the count of late runs for the work queue. * `last_polled`: the last time the work queue was polled, if available. **Returns:** * whether or not the work queue is healthy. #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
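A health check in the style of `evaluate_health_status` can be sketched with stand-in thresholds. The field names below (`maximum_late_runs`, `maximum_seconds_since_last_polled`) and the exact comparison logic are illustrative assumptions, not Prefect's definitive implementation:

```python
# Hypothetical sketch of a work-queue health policy: the queue is
# healthy only if late runs stay at or under a threshold and the
# queue was polled recently enough.

from datetime import datetime, timedelta, timezone
from typing import Optional


class DemoHealthPolicy:
    def __init__(self, maximum_late_runs: int = 0, maximum_seconds_since_last_polled: int = 60) -> None:
        self.maximum_late_runs = maximum_late_runs
        self.maximum_seconds_since_last_polled = maximum_seconds_since_last_polled

    def evaluate_health_status(self, late_runs_count: int, last_polled: Optional[datetime] = None) -> bool:
        if late_runs_count > self.maximum_late_runs:
            return False  # too many late runs
        if last_polled is None:
            return False  # never polled: treat as unhealthy
        age = datetime.now(timezone.utc) - last_polled
        return age <= timedelta(seconds=self.maximum_seconds_since_last_polled)


policy = DemoHealthPolicy()
now = datetime.now(timezone.utc)
assert policy.evaluate_health_status(late_runs_count=0, last_polled=now) is True
assert policy.evaluate_health_status(late_runs_count=3, last_polled=now) is False
assert policy.evaluate_health_status(late_runs_count=0, last_polled=None) is False
```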
**Returns:** * A new instance of the model with the reset fields. ### `WorkQueueStatusDetail` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Agent` An ORM representation of an agent ### `WorkPoolStorageConfiguration` A representation of a work pool's storage configuration **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPool` An ORM representation of a work pool **Methods:** #### `helpful_error_for_missing_default_queue_id` ```python theme={null} helpful_error_for_missing_default_queue_id(cls, v: UUID | None) -> UUID ``` #### `model_validate` ```python theme={null} model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `Worker` An ORM representation of a worker ### `Artifact` **Methods:** #### `from_result` ```python theme={null} from_result(cls, data: Any | dict[str, Any]) -> 'Artifact' ``` #### `validate_metadata_length` ```python theme={null} validate_metadata_length(cls, v: dict[str, str]) -> dict[str, str] ``` ### `ArtifactCollection` ### `Variable` ### `FlowRunInput` ### `CsrfToken` # filters Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-filters # `prefect.server.schemas.filters` Schemas that define Prefect REST API filtering operations. Each filter schema includes logic for transforming itself into a SQL `where` clause. ## Classes ### `Operator` Operators for combining filter criteria. 
**Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `PrefectFilterBaseModel` Base model for Prefect filters **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `PrefectOperatorFilterBaseModel` Base model for Prefect filters that combines criteria with a user-provided operator **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowFilterId` Filter by `Flow.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowFilterDeployment` Filter flows by deployment **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowFilterName` Filter by `Flow.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowFilterTags` Filter by `Flow.tags`. 
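Each filter schema compiles to a SQL `where` clause, falls back to an always-true filter when no criteria are set, and operator-based filters combine sub-criteria with AND/OR. The pattern can be sketched in plain Python, with predicate functions standing in for SQLAlchemy column expressions (all names here are illustrative, not Prefect's API):

```python
# Plain-Python stand-in for the filter pattern: each filter compiles to
# a predicate; with no criteria it degrades to an always-true (TRUE)
# filter, and an operator combines sub-criteria with AND/OR.

from typing import Callable, Optional

Predicate = Callable[[dict], bool]


def name_filter(any_: Optional[list[str]] = None) -> Predicate:
    if not any_:
        return lambda row: True  # no criteria set: TRUE filter
    return lambda row: row["name"] in any_


def combine(operator: str, *predicates: Predicate) -> Predicate:
    if operator == "and_":
        return lambda row: all(p(row) for p in predicates)
    return lambda row: any(p(row) for p in predicates)


rows = [{"name": "etl"}, {"name": "report"}]
selected = combine("and_", name_filter(["etl"]), name_filter(None))
assert [r["name"] for r in rows if selected(r)] == ["etl"]
```

The TRUE-filter fallback is what lets unset sub-filters be AND-ed together harmlessly: an empty criterion contributes nothing to the final clause.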
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowFilter` Filter for flows. Only flows matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterId` Filter by `FlowRun.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterName` Filter by `FlowRun.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterTags` Filter by `FlowRun.tags`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterDeploymentId` Filter by `FlowRun.deployment_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterWorkQueueName` Filter by `FlowRun.work_queue_name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterStateType` Filter by `FlowRun.state_type`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterStateName` Filter by `FlowRun.state_name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterState` Filter by `FlowRun.state_type` and `FlowRun.state_name`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterFlowVersion` Filter by `FlowRun.flow_version`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterStartTime` Filter by `FlowRun.start_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterEndTime` Filter by `FlowRun.end_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterExpectedStartTime` Filter by `FlowRun.expected_start_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterNextScheduledStartTime` Filter by `FlowRun.next_scheduled_start_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterParentFlowRunId` Filter for subflows of a given flow run **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterParentTaskRunId` Filter by `FlowRun.parent_task_run_id`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilterIdempotencyKey` Filter by `FlowRun.idempotency_key`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `FlowRunFilterCreatedBy` Filter by `FlowRun.created_by`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FlowRunFilter` Filter flow runs. Only flow runs matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` #### `only_filters_on_id` ```python theme={null} only_filters_on_id(self) -> bool ``` ### `TaskRunFilterFlowRunId` Filter by `TaskRun.flow_run_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `TaskRunFilterId` Filter by `TaskRun.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterName` Filter by `TaskRun.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterTags` Filter by `TaskRun.tags`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `TaskRunFilterStateType` Filter by `TaskRun.state_type`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. 
If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterStateName` Filter by `TaskRun.state_name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterState` Filter by `TaskRun.state_type` and `TaskRun.state_name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `TaskRunFilterSubFlowRuns` Filter by `TaskRun.subflow_run`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterStartTime` Filter by `TaskRun.start_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterEndTime` Filter by `TaskRun.end_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilterExpectedStartTime` Filter by `TaskRun.expected_start_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `TaskRunFilter` Filter task runs. Only task runs matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `DeploymentFilterId` Filter by `Deployment.id`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentFilterName` Filter by `Deployment.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentOrFlowNameFilter` Filter by `Deployment.name` or `Flow.name` with a single input string for ilike filtering. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentFilterPaused` Filter by `Deployment.paused`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentFilterWorkQueueName` Filter by `Deployment.work_queue_name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentFilterConcurrencyLimit` DEPRECATED: Prefer `Deployment.concurrency_limit_id` over `Deployment.concurrency_limit`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentFilterTags` Filter by `Deployment.tags`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `DeploymentFilter` Filter for deployments. 
Only deployments matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `DeploymentScheduleFilterActive` Filter by `DeploymentSchedule.active`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `DeploymentScheduleFilter` Filter for deployment schedules. Only deployment schedules matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `LogFilterName` Filter by `Log.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `LogFilterLevel` Filter by `Log.level`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `LogFilterTimestamp` Filter by `Log.timestamp`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `LogFilterFlowRunId` Filter by `Log.flow_run_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `LogFilterTaskRunId` Filter by `Log.task_run_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. 
If no filter parameters are available, return a TRUE filter. ### `LogFilterTextSearch` Filter by text search across log content. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. #### `includes` ```python theme={null} includes(self, log: 'Log') -> bool ``` Check if this text filter includes the given log. ### `LogFilter` Filter logs. Only logs matching all criteria will be returned. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `FilterSet` A collection of filters for common objects **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `BlockTypeFilterName` Filter by `BlockType.name` **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockTypeFilterSlug` Filter by `BlockType.slug` **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockTypeFilter` Filter BlockTypes **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilterBlockTypeId` Filter by `BlockSchema.block_type_id`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilterId` Filter by `BlockSchema.id` **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilterCapabilities` Filter by `BlockSchema.capabilities` **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilterVersion` Filter by `BlockSchema.version` **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockSchemaFilter` Filter BlockSchemas **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `BlockDocumentFilterIsAnonymous` Filter by `BlockDocument.is_anonymous`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilterBlockTypeId` Filter by `BlockDocument.block_type_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilterId` Filter by `BlockDocument.id`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilterName` Filter by `BlockDocument.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `BlockDocumentFilter` Filter BlockDocuments. Only BlockDocuments matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `WorkQueueFilterId` Filter by `WorkQueue.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkQueueFilterName` Filter by `WorkQueue.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkQueueFilter` Filter work queues. Only work queues matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `WorkPoolFilterId` Filter by `WorkPool.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkPoolFilterName` Filter by `WorkPool.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. 
If no filter parameters are available, return a TRUE filter. ### `WorkPoolFilterType` Filter by `WorkPool.type`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkPoolFilter` Filter work pools. Only work pools matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `WorkerFilterWorkPoolId` Filter by `Worker.worker_config_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkerFilterStatus` Filter by `Worker.status`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkerFilterLastHeartbeatTime` Filter by `Worker.last_heartbeat_time`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `WorkerFilter` Filter workers. Only workers matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `ArtifactFilterId` Filter by `Artifact.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterKey` Filter by `Artifact.key`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterFlowRunId` Filter by `Artifact.flow_run_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterTaskRunId` Filter by `Artifact.task_run_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilterType` Filter by `Artifact.type`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactFilter` Filter artifacts. Only artifacts matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `ArtifactCollectionFilterLatestId` Filter by `ArtifactCollection.latest_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterKey` Filter by `ArtifactCollection.key`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterFlowRunId` Filter by `ArtifactCollection.flow_run_id`. 
**Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterTaskRunId` Filter by `ArtifactCollection.task_run_id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactCollectionFilterType` Filter by `ArtifactCollection.type`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `ArtifactCollectionFilter` Filter artifact collections. Only artifact collections matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `VariableFilterId` Filter by `Variable.id`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `VariableFilterName` Filter by `Variable.name`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` Generate SQL filter from provided filter parameters. If no filter parameters are available, return a TRUE filter. ### `VariableFilterTags` Filter by `Variable.tags`. **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` ### `VariableFilter` Filter variables. 
Only variables matching all criteria will be returned **Methods:** #### `as_sql_filter` ```python theme={null} as_sql_filter(self) -> sa.ColumnElement[bool] ``` # graph Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-graph # `prefect.server.schemas.graph` ## Classes ### `GraphState` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `GraphArtifact` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Edge` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Node` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `Graph` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. # internal Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-internal # `prefect.server.schemas.internal` Schemas for *internal* use within the Prefect server, but that would not be appropriate for use on the API itself. ## Classes ### `InternalWorkPoolUpdate` # responses Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-responses # `prefect.server.schemas.responses` Schemas for special responses from the Prefect REST API. ## Classes ### `SetStateStatus` Enumerates return statuses for setting run states. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StateAcceptDetails` Details associated with an ACCEPT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateRejectDetails` Details associated with a REJECT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateAbortDetails` Details associated with an ABORT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
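The `model_validate_list` helper that appears throughout these schemas validates each element of a list payload into a model instance. A minimal stdlib sketch of that pattern (a hypothetical dataclass analogue, not Prefect's pydantic implementation; the `reason` field is illustrative):

```python
# Hypothetical analogue of the model_validate_list pattern used across
# these schemas: construct one validated model per element of a list.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class AbortDetails:
    type: str = "abort_details"
    reason: Optional[str] = None

    @classmethod
    def model_validate(cls, obj: dict[str, Any]) -> "AbortDetails":
        return cls(**obj)

    @classmethod
    def model_validate_list(cls, obj: list[dict[str, Any]]) -> list["AbortDetails"]:
        return [cls.model_validate(o) for o in obj]


details = AbortDetails.model_validate_list([{"reason": "shutdown"}, {}])
print([d.reason for d in details])  # ['shutdown', None]
```

The real method delegates to pydantic's `model_validate`, so each element gets full field validation rather than plain keyword construction.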
### `StateWaitDetails` Details associated with a WAIT state transition. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponseState` Represents a single state's history over an interval. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `HistoryResponse` Represents a history of aggregation states over an interval **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_timestamps` ```python theme={null} validate_timestamps(cls, values: dict) -> dict ``` ### `OrchestrationResult` A container for the output of state orchestration. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. 
### `WorkerFlowRunResponse` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunResponse` **Methods:** #### `model_validate` ```python theme={null} model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `TaskRunResponse` ### `DeploymentResponse` **Methods:** #### `model_validate` ```python theme={null} model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `WorkPoolResponse` ### `WorkQueueResponse` **Methods:** #### `model_validate` ```python theme={null} model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `WorkQueueWithStatus` Combines a work queue and its status details into a single object **Methods:** #### `model_validate` ```python theme={null} model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `WorkerResponse` **Methods:** #### `model_validate` ```python theme={null} model_validate(cls: Type[Self], obj: Any) -> Self ``` ### `GlobalConcurrencyLimitResponse` A response object for global concurrency limits. ### `FlowPaginationResponse` ### `FlowRunPaginationResponse` ### `TaskRunPaginationResponse` ### `DeploymentPaginationResponse` ### `SchemaValuePropertyError` ### `SchemaValueIndexError` ### `SchemaValuesValidationResponse` ### `FlowRunBulkDeleteResponse` Response from bulk flow run deletion. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `DeploymentBulkDeleteResponse` Response from bulk deployment deletion. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowBulkDeleteResponse` Response from bulk flow deletion. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunOrchestrationResult` Per-run result for bulk state operations. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunBulkSetStateResponse` Response from bulk set state operation. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunCreateResult` Per-run result for bulk create operations. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunBulkCreateResponse` Response from bulk flow run creation. 
**Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `FlowRunSlotSummary` Summary of a flow run occupying a concurrency slot. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueConcurrencyStatusDetail` Per-queue concurrency status with flow run details. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkPoolConcurrencyStatus` Paginated pool-level concurrency status with per-queue breakdown. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `WorkQueueConcurrencyStatus` Paginated queue-level concurrency status with flow run details. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. 
**Returns:** * A new instance of the model with the reset fields. # schedules Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-schedules # `prefect.server.schemas.schedules` Schedule schemas ## Classes ### `IntervalSchedule` A schedule formed by adding `interval` increments to an `anchor_date`. If no `anchor_date` is supplied, the current UTC time is used. If a timezone-naive datetime is provided for `anchor_date`, it is assumed to be in the schedule's timezone (or UTC). Even if supplied with an IANA timezone, anchor dates are always stored as UTC offsets, so a `timezone` can be provided to determine localization behaviors like DST boundary handling. If none is provided it will be inferred from the anchor date. NOTE: If the `IntervalSchedule` `anchor_date` or `timezone` is provided in a DST-observing timezone, then the schedule will adjust itself appropriately. Intervals greater than 24 hours will follow DST conventions, while intervals of less than 24 hours will follow UTC intervals. For example, an hourly schedule will fire every UTC hour, even across DST boundaries. When clocks are set back, this will result in two runs that *appear* to both be scheduled for 1am local time, even though they are an hour apart in UTC time. For longer intervals, like a daily schedule, the interval schedule will adjust for DST boundaries so that the clock-hour remains constant. This means that a daily schedule that always fires at 9am will observe DST and continue to fire at 9am in the local time zone. **Args:** * `interval`: an interval to schedule on. * `anchor_date`: an anchor date to schedule increments against; if not provided, the current timestamp will be used. * `timezone`: a valid timezone string. **Methods:** #### `get_dates` ```python theme={null} get_dates(self, n: Optional[int] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None) -> List[DateTime] ``` Retrieves dates from the schedule. 
Up to 1,000 candidate dates are checked following the start date. **Args:** * `n`: The number of dates to generate * `start`: The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. * `end`: The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. **Returns:** * List\[DateTime]: A list of dates #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `validate_timezone` ```python theme={null} validate_timezone(self) ``` ### `CronSchedule` Cron schedule NOTE: If the timezone is a DST-observing one, then the schedule will adjust itself appropriately. Cron's rules for DST are based on schedule times, not intervals. This means that an hourly cron schedule will fire on every new schedule hour, not every elapsed hour; for example, when clocks are set back this will result in a two-hour pause as the schedule will fire *the first time* 1am is reached and *the first time* 2am is reached, 120 minutes later. Longer schedules, such as one that fires at 9am every morning, will automatically adjust for DST. **Args:** * `cron`: a valid cron string * `timezone`: a valid timezone string in IANA tzdata format (for example, America/New\_York). * `day_or`: Control how croniter handles `day` and `day_of_week` entries. Defaults to True, matching cron, which connects those values using OR. If set to False, the values are connected using AND. This behaves like fcron and enables you to, for example, define a job that executes on the second Friday of each month by setting both the day of month and the weekday. 
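The clock-hour-preserving DST behavior described for both `IntervalSchedule` (intervals of 24 hours or more) and `CronSchedule` can be seen with the standard library alone. This sketch does not import Prefect; the timezone and dates are illustrative, chosen around the 2024 US spring-forward boundary (March 10):

```python
# Stdlib-only illustration of DST-aware scheduling: a daily "9am
# America/New_York" schedule keeps its local clock-hour across the
# spring-forward boundary, so its UTC offset shifts by one hour.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
before = datetime(2024, 3, 9, 9, 0, tzinfo=tz)   # EST, UTC-5
after = datetime(2024, 3, 11, 9, 0, tzinfo=tz)   # EDT, UTC-4

print(before.astimezone(timezone.utc).hour)  # 14
print(after.astimezone(timezone.utc).hour)   # 13
```

A sub-24-hour `IntervalSchedule`, by contrast, follows UTC intervals, so its local firing time would drift by an hour across this boundary instead.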
**Methods:** #### `get_dates` ```python theme={null} get_dates(self, n: Optional[int] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None) -> List[DateTime] ``` Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date. **Args:** * `n`: The number of dates to generate * `start`: The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. * `end`: The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. **Returns:** * List\[DateTime]: A list of dates #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `valid_cron_string` ```python theme={null} valid_cron_string(cls, v: str) -> str ``` #### `validate_timezone` ```python theme={null} validate_timezone(self) ``` ### `RRuleSchedule` RRule schedule, based on the iCalendar standard ([RFC 5545](https://datatracker.ietf.org/doc/html/rfc5545)) as implemented in `dateutil.rrule`. RRules are appropriate for any kind of calendar-date manipulation, including irregular intervals, repetition, exclusions, week day or day-of-month adjustments, and more. Note that as a calendar-oriented standard, `RRuleSchedules` are sensitive to the initial timezone provided. A 9am daily schedule with a daylight saving time-aware start date will maintain a local 9am time through DST boundaries; a 9am daily schedule with a UTC start date will maintain a 9am UTC time. 
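Because `RRuleSchedule` wraps `dateutil.rrule`, the recurrence semantics of an RRULE string can be previewed with `dateutil` directly (assumed available, since Prefect depends on it). The DTSTART and rule below are illustrative values, not from Prefect:

```python
# Preview occurrences for an RRULE string using dateutil.rrule directly,
# the same library that backs RRuleSchedule.
from dateutil import rrule

# Every Friday at 09:00, three occurrences, starting from 2024-01-01 (a Monday)
rule = rrule.rrulestr("DTSTART:20240101T090000\nRRULE:FREQ=WEEKLY;BYDAY=FR;COUNT=3")
dates = list(rule)
print([d.isoformat() for d in dates])
# ['2024-01-05T09:00:00', '2024-01-12T09:00:00', '2024-01-19T09:00:00']
```

Note that this preview uses naive datetimes; as described above, attaching a DST-aware timezone to the start date is what determines whether the 9am clock-time is preserved across DST boundaries.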
**Args:** * `rrule`: a valid RRule string * `timezone`: a valid timezone string **Methods:** #### `from_rrule` ```python theme={null} from_rrule(cls, rrule: dateutil.rrule.rrule | dateutil.rrule.rruleset) -> 'RRuleSchedule' ``` #### `get_dates` ```python theme={null} get_dates(self, n: Optional[int] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None) -> List[DateTime] ``` Retrieves dates from the schedule. Up to 1,000 candidate dates are checked following the start date. **Args:** * `n`: The number of dates to generate * `start`: The first returned date will be on or after this date. Defaults to None. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. * `end`: The maximum scheduled date to return. If a timezone-naive datetime is provided, it is assumed to be in the schedule's timezone. **Returns:** * List\[DateTime]: A list of dates #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. #### `to_rrule` ```python theme={null} to_rrule(self) -> dateutil.rrule.rrule ``` Since rrule doesn't properly serialize/deserialize timezones, we localize dates here #### `validate_rrule_str` ```python theme={null} validate_rrule_str(cls, v: str) -> str ``` # sorting Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-sorting # `prefect.server.schemas.sorting` Schemas for sorting Prefect REST API objects. ## Classes ### `FlowRunSort` Defines flow run sorting options. 
**Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort flow runs #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TaskRunSort` Defines task run sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort task runs #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `LogSort` Defines log sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort logs #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `FlowSort` Defines flow sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort flows #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentSort` Defines deployment sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort deployments #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactSort` Defines artifact sorting options. 
**Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort artifacts #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `ArtifactCollectionSort` Defines artifact collection sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort artifact collections #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `VariableSort` Defines variables sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort variables #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `BlockDocumentSort` Defines block document sorting options. **Methods:** #### `as_sql_sort` ```python theme={null} as_sql_sort(self) -> Iterable[sa.ColumnElement[Any]] ``` Return an expression used to sort block documents #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` # states Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-states # `prefect.server.schemas.states` State schemas. ## Functions ### `Scheduled` ```python theme={null} Scheduled(scheduled_time: Optional[DateTime] = None, cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Scheduled` states. **Returns:** * a Scheduled state ### `Completed` ```python theme={null} Completed(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Completed` states. 
**Returns:** * a Completed state ### `Running` ```python theme={null} Running(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Running` states. **Returns:** * a Running state ### `Failed` ```python theme={null} Failed(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Failed` states. **Returns:** * a Failed state ### `Crashed` ```python theme={null} Crashed(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Crashed` states. **Returns:** * a Crashed state ### `Cancelling` ```python theme={null} Cancelling(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Cancelling` states. **Returns:** * a Cancelling state ### `Cancelled` ```python theme={null} Cancelled(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Cancelled` states. **Returns:** * a Cancelled state ### `Pending` ```python theme={null} Pending(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Pending` states. **Returns:** * a Pending state ### `Paused` ```python theme={null} Paused(cls: type[_State] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[DateTime] = None, reschedule: bool = False, pause_key: Optional[str] = None, **kwargs: Any) -> _State ``` Convenience function for creating `Paused` states. **Returns:** * a Paused state ### `Suspended` ```python theme={null} Suspended(cls: type[_State] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[DateTime] = None, pause_key: Optional[str] = None, **kwargs: Any) -> _State ``` Convenience function for creating `Suspended` states. 
**Returns:** * a Suspended state ### `AwaitingRetry` ```python theme={null} AwaitingRetry(cls: type[_State] = State, scheduled_time: Optional[DateTime] = None, **kwargs: Any) -> _State ``` Convenience function for creating `AwaitingRetry` states. **Returns:** * an AwaitingRetry state ### `AwaitingConcurrencySlot` ```python theme={null} AwaitingConcurrencySlot(cls: type[_State] = State, scheduled_time: Optional[DateTime] = None, **kwargs: Any) -> _State ``` Convenience function for creating `AwaitingConcurrencySlot` states. **Returns:** * an AwaitingConcurrencySlot state ### `Submitting` ```python theme={null} Submitting(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Submitting` states. **Returns:** * a Submitting state ### `InfrastructurePending` ```python theme={null} InfrastructurePending(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `InfrastructurePending` states. **Returns:** * an InfrastructurePending state ### `Retrying` ```python theme={null} Retrying(cls: type[_State] = State, **kwargs: Any) -> _State ``` Convenience function for creating `Retrying` states. **Returns:** * a Retrying state ### `Late` ```python theme={null} Late(cls: type[_State] = State, scheduled_time: Optional[DateTime] = None, **kwargs: Any) -> _State ``` Convenience function for creating `Late` states. **Returns:** * a Late state ## Classes ### `StateType` Enumeration of state types. 
**Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `CountByState` **Methods:** #### `check_key` ```python theme={null} check_key(cls, value: Optional[Any], info: ValidationInfo) -> Optional[Any] ``` #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateDetails` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `StateBaseModel` **Methods:** #### `orm_dict` ```python theme={null} orm_dict(self, *args: Any, **kwargs: Any) -> dict[str, Any] ``` A convenience method for constructing fixtures by first building a `State` schema object and converting it into an ORM-compatible format. Because the `data` field is not writable on ORM states, this method omits the `data` field entirely for the purposes of constructing an ORM model. If state data is required, an artifact must be created separately. ### `State` Represents the state of a run. **Methods:** #### `default_name_from_type` ```python theme={null} default_name_from_type(self) -> Self ``` If a name is not provided, use the type #### `default_scheduled_start_time` ```python theme={null} default_scheduled_start_time(self) -> Self ``` #### `fresh_copy` ```python theme={null} fresh_copy(self, **kwargs: Any) -> Self ``` Return a fresh copy of the state with a new ID.
#### `from_orm_without_result` ```python theme={null} from_orm_without_result(cls, orm_state: Union['ORMFlowRunState', 'ORMTaskRunState'], with_data: Optional[Any] = None) -> Self ``` During orchestration, ORM states can be instantiated prior to inserting results into the artifact table and the `data` field will not be eagerly loaded. In these cases, SQLAlchemy will attempt to lazily load the relationship, which will fail when called within a synchronous Pydantic method. This method will construct a `State` object from an ORM model without a loaded artifact and attach data passed using the `with_data` argument to the `data` field. #### `is_cancelled` ```python theme={null} is_cancelled(self) -> bool ``` #### `is_cancelling` ```python theme={null} is_cancelling(self) -> bool ``` #### `is_completed` ```python theme={null} is_completed(self) -> bool ``` #### `is_crashed` ```python theme={null} is_crashed(self) -> bool ``` #### `is_failed` ```python theme={null} is_failed(self) -> bool ``` #### `is_final` ```python theme={null} is_final(self) -> bool ``` #### `is_paused` ```python theme={null} is_paused(self) -> bool ``` #### `is_pending` ```python theme={null} is_pending(self) -> bool ``` #### `is_running` ```python theme={null} is_running(self) -> bool ``` #### `is_scheduled` ```python theme={null} is_scheduled(self) -> bool ``` #### `orm_dict` ```python theme={null} orm_dict(self, *args: Any, **kwargs: Any) -> dict[str, Any] ``` A convenience method for constructing fixtures by first building a `State` schema object and converting it into an ORM-compatible format. Because the `data` field is not writable on ORM states, this method omits the `data` field entirely for the purposes of constructing an ORM model. If state data is required, an artifact must be created separately. #### `result` ```python theme={null} result(self, raise_on_failure: Literal[True] = ...)
-> Any ``` #### `result` ```python theme={null} result(self, raise_on_failure: Literal[False] = False) -> Union[Any, Exception] ``` #### `result` ```python theme={null} result(self, raise_on_failure: bool = ...) -> Union[Any, Exception] ``` #### `result` ```python theme={null} result(self, raise_on_failure: bool = True) -> Union[Any, Exception] ``` #### `to_state_create` ```python theme={null} to_state_create(self) -> 'StateCreate' ``` # statuses Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-statuses # `prefect.server.schemas.statuses` ## Classes ### `WorkPoolStatus` Enumeration of work pool statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `in_kebab_case` ```python theme={null} in_kebab_case(self) -> str ``` ### `WorkerStatus` Enumeration of worker statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `DeploymentStatus` Enumeration of deployment statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `in_kebab_case` ```python theme={null} in_kebab_case(self) -> str ``` ### `WorkQueueStatus` Enumeration of work queue statuses. **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` #### `in_kebab_case` ```python theme={null} in_kebab_case(self) -> str ``` # ui Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-schemas-ui # `prefect.server.schemas.ui` Schemas for UI endpoints. ## Classes ### `UITaskRun` A task run with additional details for display in the UI. ### `UISettings` Runtime UI configuration returned by /ui-settings endpoint. 
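The state convenience constructors listed above (`Completed`, `Failed`, `Running`, and the rest) all follow the same pattern: fill in the state `type` (and a default `name`) and forward everything else to the `State` constructor. A stdlib-only sketch of that pattern, for illustration only — the `StateType` values and the `default_name_from_type`/`is_final` behavior mirror the listing above, but the class body is an assumption, not Prefect's implementation:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any


class StateType(str, Enum):
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"
    CRASHED = "CRASHED"
    CANCELLED = "CANCELLED"
    RUNNING = "RUNNING"
    PENDING = "PENDING"


# Terminal state types: once a run reaches one of these, it is finished.
FINAL_TYPES = {StateType.COMPLETED, StateType.FAILED, StateType.CRASHED, StateType.CANCELLED}


@dataclass
class State:
    type: StateType
    name: str = ""
    data: Any = None

    def __post_init__(self) -> None:
        # Mirrors `default_name_from_type`: if a name is not provided, use the type.
        if not self.name:
            self.name = self.type.value.capitalize()

    def is_completed(self) -> bool:
        return self.type == StateType.COMPLETED

    def is_final(self) -> bool:
        return self.type in FINAL_TYPES


def Completed(**kwargs: Any) -> State:
    """Convenience constructor, in the spirit of the `Completed` helper above."""
    return State(type=StateType.COMPLETED, **kwargs)


def Failed(**kwargs: Any) -> State:
    return State(type=StateType.FAILED, **kwargs)


state = Completed(data=42)
print(state.name, state.is_completed(), state.is_final())  # Completed True True
```

The real helpers accept a `cls` argument so callers can substitute a `State` subclass; this sketch omits that for brevity.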
# __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-__init__ # `prefect.server.services` *This module is empty or contains only private/internal implementations.* # base Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-base # `prefect.server.services.base` ## Classes ### `Service` **Methods:** #### `all_services` ```python theme={null} all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python theme={null} enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python theme={null} enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python theme={null} environment_variable_name(cls) -> str ``` #### `run_services` ```python theme={null} run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python theme={null} running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. 
#### `service_settings` ```python theme={null} service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python theme={null} start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python theme={null} stop(self) -> None ``` Stop the service ### `RunInEphemeralServers` A marker class for services that should run even when running an ephemeral server **Methods:** #### `all_services` ```python theme={null} all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python theme={null} enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python theme={null} enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python theme={null} environment_variable_name(cls) -> str ``` #### `run_services` ```python theme={null} run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python theme={null} running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. 
#### `service_settings` ```python theme={null} service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python theme={null} start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python theme={null} stop(self) -> None ``` Stop the service ### `RunInWebservers` A marker class for services that should run when running a webserver **Methods:** #### `all_services` ```python theme={null} all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python theme={null} enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python theme={null} enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python theme={null} environment_variable_name(cls) -> str ``` #### `run_services` ```python theme={null} run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python theme={null} running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. #### `service_settings` ```python theme={null} service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python theme={null} start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `stop` ```python theme={null} stop(self) -> None ``` Stop the service # cancellation_cleanup Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-cancellation_cleanup # `prefect.server.services.cancellation_cleanup` The CancellationCleanup service. Responsible for cancelling tasks and subflows that haven't finished. 
## Functions ### `cancel_child_task_runs` ```python theme={null} cancel_child_task_runs(flow_run_id: Annotated[UUID, Logged]) -> None ``` Cancel child task runs of a cancelled flow run (docket task). ### `cancel_subflow_run` ```python theme={null} cancel_subflow_run(subflow_run_id: Annotated[UUID, Logged]) -> None ``` Cancel a subflow run whose parent flow run was cancelled (docket task). ### `monitor_cancelled_flow_runs` ```python theme={null} monitor_cancelled_flow_runs(docket: Docket = CurrentDocket(), db: PrefectDBInterface = Depends(provide_database_interface), perpetual: Perpetual = Perpetual(automatic=False, every=datetime.timedelta(seconds=get_current_settings().server.services.cancellation_cleanup.loop_seconds))) -> None ``` Monitor for cancelled flow runs and schedule child task cancellation. ### `monitor_subflow_runs` ```python theme={null} monitor_subflow_runs(docket: Docket = CurrentDocket(), db: PrefectDBInterface = Depends(provide_database_interface), perpetual: Perpetual = Perpetual(automatic=False, every=datetime.timedelta(seconds=get_current_settings().server.services.cancellation_cleanup.loop_seconds))) -> None ``` Monitor for subflow runs that need to be cancelled. # db_vacuum Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-db_vacuum # `prefect.server.services.db_vacuum` The database vacuum service. Two perpetual services schedule cleanup tasks independently, gated by the `enabled` set in `PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED` (default `["events"]`): 1. schedule\_vacuum\_tasks — Cleans up old flow runs and orphaned resources (logs, artifacts, artifact collections). Enabled when `"flow_runs"` is in the enabled set. 2. schedule\_event\_vacuum\_tasks — Cleans up old events, including any event types with per-type retention overrides. Enabled when `"events"` is in the enabled set **and** `event_persister.enabled` is true (the default), so that operators who disabled event processing are not surprised on upgrade. 
Runs in all server modes, including ephemeral. Per-event-type retention can be customised via `PREFECT_SERVER_SERVICES_DB_VACUUM_EVENT_RETENTION_OVERRIDES`. Event types not listed fall back to `server.events.retention_period`. Each task runs independently with its own error isolation and docket-managed retries. Deterministic keys prevent duplicate tasks from accumulating if a cycle overlaps with in-progress work. ## Functions ### `schedule_vacuum_tasks` ```python theme={null} schedule_vacuum_tasks(docket: Docket = CurrentDocket(), perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.db_vacuum.loop_seconds))) -> None ``` Schedule cleanup tasks for old flow runs and orphaned resources. Each task is enqueued with a deterministic key so that overlapping cycles (e.g. when cleanup takes longer than loop\_seconds) naturally deduplicate instead of piling up redundant work. Disabled by default because it permanently deletes flow runs. Enable via PREFECT\_SERVER\_SERVICES\_DB\_VACUUM\_ENABLED=true. ### `schedule_event_vacuum_tasks` ```python theme={null} schedule_event_vacuum_tasks(docket: Docket = CurrentDocket(), perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.db_vacuum.loop_seconds))) -> None ``` Schedule cleanup tasks for old events and heartbeat events. Enabled by default (`"events"` is in the default enabled set). Automatically disabled when the event persister service is disabled (PREFECT\_SERVER\_SERVICES\_EVENT\_PERSISTER\_ENABLED=false) so that operators who opted out of event processing are not surprised by trimming on upgrade. ### `vacuum_orphaned_logs` ```python theme={null} vacuum_orphaned_logs() -> None ``` Delete logs whose flow\_run\_id references a non-existent flow run. 
### `vacuum_orphaned_artifacts` ```python theme={null} vacuum_orphaned_artifacts() -> None ``` Delete artifacts whose flow\_run\_id references a non-existent flow run. ### `vacuum_stale_artifact_collections` ```python theme={null} vacuum_stale_artifact_collections() -> None ``` Reconcile artifact collections whose latest\_id points to a deleted artifact. Re-points to the next latest version if one exists, otherwise deletes the collection row. ### `vacuum_old_flow_runs` ```python theme={null} vacuum_old_flow_runs() -> None ``` Delete old top-level terminal flow runs past the retention period. ### `vacuum_events_with_retention_overrides` ```python theme={null} vacuum_events_with_retention_overrides() -> None ``` Delete events whose types have per-type retention overrides. Iterates over all entries in `event_retention_overrides` and deletes events (and their resources) that are older than the configured retention for that type, capped by the global events retention period. ### `vacuum_old_events` ```python theme={null} vacuum_old_events() -> None ``` Delete all events and event resources past the general events retention period. # foreman Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-foreman # `prefect.server.services.foreman` The Foreman service. Monitors workers and marks stale resources as offline/not ready. ## Functions ### `monitor_worker_health` ```python theme={null} monitor_worker_health(perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.foreman.loop_seconds))) -> None ``` Monitor workers and mark stale resources as offline/not ready. Iterates over workers currently marked as online. Marks workers as offline if they have an old last\_heartbeat\_time. Marks work pools as not ready if they do not have any online workers and are currently marked as ready. 
Marks deployments as not ready if they have a last\_polled time that is older than the configured deployment last polled timeout. # late_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-late_runs # `prefect.server.services.late_runs` The MarkLateRuns service. Responsible for putting flow runs in a Late state if they are not started on time. The threshold for a late run can be configured by changing `PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS`. ## Functions ### `mark_flow_run_late` ```python theme={null} mark_flow_run_late(flow_run_id: Annotated[UUID, Logged]) -> None ``` Mark a single flow run as late (docket task). ### `monitor_late_runs` ```python theme={null} monitor_late_runs(docket: Docket = CurrentDocket(), db: PrefectDBInterface = Depends(provide_database_interface), perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.late_runs.loop_seconds))) -> None ``` Monitor for late flow runs and schedule marking tasks. # pause_expirations Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-pause_expirations # `prefect.server.services.pause_expirations` The FailExpiredPauses service. Responsible for putting Paused flow runs in a Failed state if they are not resumed on time. ## Functions ### `fail_expired_pause` ```python theme={null} fail_expired_pause(flow_run_id: Annotated[UUID, Logged], pause_timeout: Annotated[str, Logged]) -> None ``` Mark a single expired paused flow run as failed (docket task). ### `monitor_expired_pauses` ```python theme={null} monitor_expired_pauses(docket: Docket = CurrentDocket(), db: PrefectDBInterface = Depends(provide_database_interface), perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.pause_expirations.loop_seconds))) -> None ``` Monitor for expired paused flow runs and schedule failure tasks. 
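The lateness threshold applied by `MarkLateRuns` comes down to a single comparison: a scheduled run becomes late once the current time passes its expected start time plus the configured grace period. A sketch of that check (the 15-second default is illustrative; the actual value comes from the setting named above):

```python
from datetime import datetime, timedelta, timezone

# Illustrative grace period; in practice this comes from the late-runs setting.
LATE_AFTER = timedelta(seconds=15)


def is_late(expected_start_time: datetime, now: datetime, after: timedelta = LATE_AFTER) -> bool:
    """A run that is still Scheduled is a candidate for the Late state once this returns True."""
    return now >= expected_start_time + after


scheduled = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
now = datetime(2024, 1, 1, 12, 0, 30, tzinfo=timezone.utc)
print(is_late(scheduled, now))  # True: 30s past the expected start, beyond the 15s grace period
```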
# perpetual_services Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-perpetual_services # `prefect.server.services.perpetual_services` Perpetual services are background services that run on a periodic schedule using docket. This module provides the registry and scheduling logic for perpetual services, using docket's Perpetual dependency for distributed, HA-aware task scheduling. ## Functions ### `perpetual_service` ```python theme={null} perpetual_service(enabled_getter: EnabledGetter, run_in_ephemeral: bool = False, run_in_webserver: bool = False) -> Callable[[F], F] ``` Decorator to register a perpetual service function. **Args:** * `enabled_getter`: A callable that returns whether the service is enabled. * `run_in_ephemeral`: If True, this service runs in ephemeral server mode. * `run_in_webserver`: If True, this service runs in webserver-only mode. ### `get_perpetual_services` ```python theme={null} get_perpetual_services(ephemeral: bool = False, webserver_only: bool = False) -> list[PerpetualServiceConfig] ``` Get perpetual services that should run in the current mode. **Args:** * `ephemeral`: If True, only return services marked with run\_in\_ephemeral. * `webserver_only`: If True, only return services marked with run\_in\_webserver. **Returns:** * List of perpetual service configurations to run. ### `get_enabled_perpetual_services` ```python theme={null} get_enabled_perpetual_services(ephemeral: bool = False, webserver_only: bool = False) -> list[PerpetualServiceConfig] ``` Get perpetual services that are enabled and should run in the current mode. **Args:** * `ephemeral`: If True, only return services marked with run\_in\_ephemeral. * `webserver_only`: If True, only return services marked with run\_in\_webserver. **Returns:** * List of enabled perpetual service configurations. 
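The registry pattern behind `perpetual_service` and `get_enabled_perpetual_services` can be sketched with a stdlib decorator that records each function alongside its mode flags. The `PerpetualServiceConfig` fields here are assumptions inferred from the signatures above, not the actual class definition:

```python
from dataclasses import dataclass
from typing import Callable, TypeVar

F = TypeVar("F", bound=Callable[..., object])


@dataclass
class PerpetualServiceConfig:
    func: Callable[..., object]
    enabled_getter: Callable[[], bool]
    run_in_ephemeral: bool
    run_in_webserver: bool


_REGISTRY: list[PerpetualServiceConfig] = []


def perpetual_service(enabled_getter: Callable[[], bool], run_in_ephemeral: bool = False,
                      run_in_webserver: bool = False) -> Callable[[F], F]:
    def decorator(func: F) -> F:
        # The function itself is unchanged; the decorator only records it.
        _REGISTRY.append(PerpetualServiceConfig(func, enabled_getter, run_in_ephemeral, run_in_webserver))
        return func
    return decorator


def get_enabled_perpetual_services(ephemeral: bool = False, webserver_only: bool = False):
    candidates = [
        c for c in _REGISTRY
        if (not ephemeral or c.run_in_ephemeral) and (not webserver_only or c.run_in_webserver)
    ]
    # Disabled services are filtered out entirely, so they are never scheduled.
    return [c for c in candidates if c.enabled_getter()]


@perpetual_service(enabled_getter=lambda: True, run_in_ephemeral=True)
def monitor_something() -> None: ...


@perpetual_service(enabled_getter=lambda: False)
def disabled_service() -> None: ...


print([c.func.__name__ for c in get_enabled_perpetual_services(ephemeral=True)])
# ['monitor_something']
```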
### `register_and_schedule_perpetual_services` ```python theme={null} register_and_schedule_perpetual_services(docket: Docket, ephemeral: bool = False, webserver_only: bool = False) -> None ``` Register enabled perpetual service functions with docket and schedule them. Disabled services are not registered at all, so they never run. **Args:** * `docket`: The docket instance to register functions with. * `ephemeral`: If True, only register services for ephemeral mode. * `webserver_only`: If True, only register services for webserver mode. ## Classes ### `PerpetualServiceConfig` Configuration for a perpetual service function. # repossessor Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-repossessor # `prefect.server.services.repossessor` The Repossessor service. Handles reconciliation of expired concurrency leases. ## Functions ### `revoke_expired_lease` ```python theme={null} revoke_expired_lease(lease_id: Annotated[UUID, Logged]) -> None ``` Revoke a single expired lease (docket task). ### `monitor_expired_leases` ```python theme={null} monitor_expired_leases(docket: Docket = CurrentDocket(), lease_storage: ConcurrencyLeaseStorage = Depends(get_concurrency_lease_storage), perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.repossessor.loop_seconds))) -> None ``` Monitor for expired leases and schedule revocation tasks. # scheduler Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-scheduler # `prefect.server.services.scheduler` The Scheduler service. This service schedules flow runs from deployments with active schedules. ## Functions ### `schedule_deployments` ```python theme={null} schedule_deployments(perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.scheduler.loop_seconds))) -> None ``` Main scheduler - schedules flow runs from deployments with active schedules. 
Schedule flow runs by: * Querying for deployments with active schedules * Generating the next set of flow runs based on each deployment's schedule * Inserting all scheduled flow runs into the database ### `schedule_recent_deployments` ```python theme={null} schedule_recent_deployments(perpetual: Perpetual = Perpetual(automatic=False, every=timedelta(seconds=get_current_settings().server.services.scheduler.recent_deployments_loop_seconds))) -> None ``` Recent deployments scheduler - schedules deployments that were updated very recently. This scheduler runs on a tight loop and ensures that runs from newly-created or updated deployments are rapidly scheduled without waiting for the main scheduler. Note that scheduling is idempotent, so it's okay for this scheduler to attempt to schedule the same deployments as the main scheduler. ## Classes ### `TryAgain` Internal control-flow exception used to retry the Scheduler's main loop # task_run_recorder Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-task_run_recorder # `prefect.server.services.task_run_recorder` ## Functions ### `task_run_from_event` ```python theme={null} task_run_from_event(event: ReceivedEvent) -> TaskRun ``` ### `db_recordable_task_run_from_event` ```python theme={null} db_recordable_task_run_from_event(event: ReceivedEvent) -> tuple[TaskRun, dict[str, Any]] ``` ### `record_task_run_event` ```python theme={null} record_task_run_event(event: ReceivedEvent, depth: int = 0) -> None ``` Record a single task run event in the database ### `record_bulk_task_run_events` ```python theme={null} record_bulk_task_run_events(events: list[ReceivedEvent]) -> None ``` Record multiple task run events in the database, taking advantage of bulk inserts. 
### `handle_task_run_events` ```python theme={null} handle_task_run_events(events: list[ReceivedEvent], depth: int = 0) -> None ``` ### `record_lost_follower_task_run_events` ```python theme={null} record_lost_follower_task_run_events() -> None ``` ### `periodically_process_followers` ```python theme={null} periodically_process_followers(periodic_granularity: timedelta) -> NoReturn ``` Periodically process followers that are waiting on a leader event that never arrived ### `consumer` ```python theme={null} consumer(write_batch_size: int, flush_every: int, max_persist_retries: int = DEFAULT_PERSIST_MAX_RETRIES) -> AsyncGenerator[MessageHandler, None] ``` ## Classes ### `RetryableEvent` ### `TaskRunRecorder` Constructs task runs and states from client-emitted events **Methods:** #### `all_services` ```python theme={null} all_services(cls) -> Sequence[type[Self]] ``` Get list of all service classes #### `enabled` ```python theme={null} enabled(cls) -> bool ``` Whether the service is enabled #### `enabled_services` ```python theme={null} enabled_services(cls) -> list[type[Self]] ``` Get list of enabled service classes #### `environment_variable_name` ```python theme={null} environment_variable_name(cls) -> str ``` #### `run_services` ```python theme={null} run_services(cls) -> NoReturn ``` Run enabled services until cancelled. #### `running` ```python theme={null} running(cls) -> AsyncGenerator[None, None] ``` A context manager that runs enabled services on entry and stops them on exit. 
#### `service_settings` ```python theme={null} service_settings(cls) -> ServicesBaseSetting ``` The Prefect setting that controls whether the service is enabled #### `start` ```python theme={null} start(self, max_persist_retries: int = DEFAULT_PERSIST_MAX_RETRIES) -> NoReturn ``` #### `start` ```python theme={null} start(self) -> NoReturn ``` Start running the service, which may run indefinitely #### `started_event` ```python theme={null} started_event(self) -> asyncio.Event ``` #### `started_event` ```python theme={null} started_event(self, value: asyncio.Event) -> None ``` #### `stop` ```python theme={null} stop(self) -> None ``` Stop the service # telemetry Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-services-telemetry # `prefect.server.services.telemetry` The Telemetry service. Sends anonymous data to Prefect to help us improve. ## Functions ### `send_telemetry_heartbeat` ```python theme={null} send_telemetry_heartbeat(perpetual: Perpetual = Perpetual(automatic=True, every=timedelta(seconds=600))) -> None ``` Sends anonymous telemetry data to Prefect to help us improve. It can be toggled off with the PREFECT\_SERVER\_ANALYTICS\_ENABLED setting. # task_queue Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-task_queue # `prefect.server.task_queue` Implements an in-memory task queue for delivering background task runs to TaskWorkers.

## Classes ### `TaskQueue` **Methods:** #### `configure_task_key` ```python theme={null} configure_task_key(cls, task_key: str, scheduled_size: Optional[int] = None, retry_size: Optional[int] = None) -> None ``` #### `enqueue` ```python theme={null} enqueue(cls, task_run: schemas.core.TaskRun) -> None ``` #### `for_key` ```python theme={null} for_key(cls, task_key: str) -> Self ``` #### `get` ```python theme={null} get(self) -> schemas.core.TaskRun ``` #### `get_nowait` ```python theme={null} get_nowait(self) -> schemas.core.TaskRun ``` #### `put` ```python theme={null} put(self, task_run: schemas.core.TaskRun) -> None ``` #### `reset` ```python theme={null} reset(cls) -> None ``` A unit testing utility to reset the state of the task queues subsystem #### `retry` ```python theme={null} retry(self, task_run: schemas.core.TaskRun) -> None ``` ### `MultiQueue` A queue that can pull tasks from any of a number of task queues **Methods:** #### `get` ```python theme={null} get(self) -> schemas.core.TaskRun ``` Gets the next task\_run from any of the given queues # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-__init__ # `prefect.server.utilities` *This module is empty or contains only private/internal implementations.* # database Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-database # `prefect.server.utilities.database` Utilities for interacting with Prefect REST API database and ORM layer. Prefect supports both SQLite and Postgres. Many of these utilities allow the Prefect REST API to seamlessly switch between the two. ## Functions ### `db_injector` ```python theme={null} db_injector(func: Union[_DBMethod[T, P, R], _DBFunction[P, R]]) -> Union[_Method[T, P, R], _Function[P, R]] ``` ### `generate_uuid_postgresql` ```python theme={null} generate_uuid_postgresql(element: GenerateUUID, compiler: SQLCompiler, **kwargs: Any) -> str ``` Generates a random UUID in Postgres; requires the pgcrypto extension.
### `generate_uuid_sqlite` ```python theme={null} generate_uuid_sqlite(element: GenerateUUID, compiler: SQLCompiler, **kwargs: Any) -> str ``` Generates a random UUID in other databases (SQLite) by concatenating bytes in a way that approximates a UUID hex representation. This is sufficient for our purposes of having a random client-generated ID that is compatible with a UUID spec. ### `bindparams_from_clause` ```python theme={null} bindparams_from_clause(query: sa.ClauseElement) -> dict[str, sa.BindParameter[Any]] ``` Retrieve all non-anonymous bind parameters defined in a SQL clause ### `datetime_or_interval_add_postgresql` ```python theme={null} datetime_or_interval_add_postgresql(element: Union[date_add, interval_add, date_diff], compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `date_diff_seconds_postgresql` ```python theme={null} date_diff_seconds_postgresql(element: date_diff_seconds, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `current_timestamp_sqlite` ```python theme={null} current_timestamp_sqlite(element: functions.now, compiler: SQLCompiler, **kwargs: Any) -> str ``` Generates the current timestamp for SQLite ### `date_add_sqlite` ```python theme={null} date_add_sqlite(element: date_add, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `interval_add_sqlite` ```python theme={null} interval_add_sqlite(element: interval_add, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `date_diff_sqlite` ```python theme={null} date_diff_sqlite(element: date_diff, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `date_diff_seconds_sqlite` ```python theme={null} date_diff_seconds_sqlite(element: date_diff_seconds, compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `sqlite_json_operators` ```python theme={null} sqlite_json_operators(element: sa.BinaryExpression[Any], compiler: SQLCompiler, override_operator: Optional[OperatorType] = None, **kwargs: Any) -> str ``` Intercept the PostgreSQL-only JSON / JSONB operators and translate them to SQLite 
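The SQLite UUID approximation described under `generate_uuid_sqlite` above can be reproduced with the stdlib `sqlite3` module. The exact expression Prefect compiles may differ; this one simply concatenates random hex bytes in the canonical 8-4-4-4-12 layout, which is enough for a random client-generated ID:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")

# Approximate a UUID by lowercasing hex-encoded random blobs joined with hyphens.
expr = (
    "lower(hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || "
    "hex(randomblob(2)) || '-' || hex(randomblob(2)) || '-' || hex(randomblob(6)))"
)
value = conn.execute("SELECT " + expr).fetchone()[0]
print(value)  # e.g. '9f1c2b7a-03de-5a21-8c44-1b2f3e4d5c6a'
assert re.fullmatch(r"[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}", value)
```

Note this does not set the UUID version or variant bits; as the docstring says, it only approximates a UUID hex representation.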
### `sqlite_greatest_as_max` ```python theme={null} sqlite_greatest_as_max(element: greatest[Any], compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `sqlite_least_as_min` ```python theme={null} sqlite_least_as_min(element: least[Any], compiler: SQLCompiler, **kwargs: Any) -> str ``` ### `get_dialect` ```python theme={null} get_dialect(obj: Union[str, Session, sa.Engine]) -> type[sa.Dialect] ``` Get the dialect of a session, engine, or connection url. Primary use case is figuring out whether the Prefect REST API is communicating with SQLite or Postgres. ## Classes ### `GenerateUUID` Platform-independent UUID default generator. Note the actual functionality for this class is specified in the `compiles`-decorated functions below ### `Timestamp` TypeDecorator that ensures that timestamps have a timezone. For SQLite, all timestamps are converted to UTC (since they are stored as naive timestamps without timezones) and recovered as UTC. **Methods:** #### `load_dialect_impl` ```python theme={null} load_dialect_impl(self, dialect: sa.Dialect) -> TypeEngine[Any] ``` #### `process_bind_param` ```python theme={null} process_bind_param(self, value: Optional[datetime.datetime], dialect: sa.Dialect) -> Optional[datetime.datetime] ``` #### `process_result_value` ```python theme={null} process_result_value(self, value: Optional[datetime.datetime], dialect: sa.Dialect) -> Optional[datetime.datetime] ``` ### `UUID` Platform-independent UUID type. Uses PostgreSQL's UUID type, otherwise uses CHAR(36), storing as stringified hex values with hyphens. 
**Methods:** #### `load_dialect_impl` ```python theme={null} load_dialect_impl(self, dialect: sa.Dialect) -> TypeEngine[Any] ``` #### `process_bind_param` ```python theme={null} process_bind_param(self, value: Optional[Union[str, uuid.UUID]], dialect: sa.Dialect) -> Optional[str] ``` #### `process_result_value` ```python theme={null} process_result_value(self, value: Optional[Union[str, uuid.UUID]], dialect: sa.Dialect) -> Optional[uuid.UUID] ``` ### `JSON` JSON type that returns SQLAlchemy's dialect-specific JSON types, where possible. Uses generic JSON otherwise. The "base" type is postgresql.JSONB to expose useful methods prior to SQL compilation **Methods:** #### `load_dialect_impl` ```python theme={null} load_dialect_impl(self, dialect: sa.Dialect) -> TypeEngine[Any] ``` #### `process_bind_param` ```python theme={null} process_bind_param(self, value: Optional[Any], dialect: sa.Dialect) -> Optional[Any] ``` Prepares the given value to be used as a JSON field in a parameter binding ### `Pydantic` A pydantic type that converts inserted parameters to json and converts read values to the pydantic type. **Methods:** #### `process_bind_param` ```python theme={null} process_bind_param(self, value: Optional[T], dialect: sa.Dialect) -> Optional[str] ``` #### `process_result_value` ```python theme={null} process_result_value(self, value: Optional[Any], dialect: sa.Dialect) -> Optional[T] ``` ### `date_add` Platform-independent way to add a timestamp and an interval ### `interval_add` Platform-independent way to add two intervals. ### `date_diff` Platform-independent difference of two timestamps. Computes d1 - d2. 
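The semantics that `date_add`, `interval_add`, and `date_diff` standardize across Postgres and SQLite match plain `datetime` arithmetic. In stdlib terms (the SQL construct names are from the listing above; the Python bodies express the intended semantics, not the compiled SQL):

```python
from datetime import datetime, timedelta, timezone


def date_add(d: datetime, i: timedelta) -> datetime:
    return d + i  # timestamp + interval -> timestamp


def interval_add(i1: timedelta, i2: timedelta) -> timedelta:
    return i1 + i2  # interval + interval -> interval


def date_diff(d1: datetime, d2: datetime) -> timedelta:
    return d1 - d2  # computes d1 - d2, matching the class docstring


d1 = datetime(2024, 1, 2, tzinfo=timezone.utc)
d2 = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(date_diff(d1, d2))                  # 1 day, 0:00:00
print(date_add(d2, timedelta(hours=6)))   # 2024-01-01 06:00:00+00:00
```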
### `date_diff_seconds` Platform-independent calculation of the number of seconds between two timestamps or from 'now' ### `greatest` ### `least` # encryption Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-encryption # `prefect.server.utilities.encryption` Encryption utilities ## Functions ### `encrypt_fernet` ```python theme={null} encrypt_fernet(session: AsyncSession, data: Mapping[str, Any]) -> str ``` ### `decrypt_fernet` ```python theme={null} decrypt_fernet(session: AsyncSession, data: str) -> dict[str, Any] ``` # http Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-http # `prefect.server.utilities.http` ## Functions ### `should_redact_header` ```python theme={null} should_redact_header(key: str) -> bool ``` Indicates whether an HTTP header is sensitive or noisy and should be redacted from events and templates. # leasing Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-leasing # `prefect.server.utilities.leasing` ## Classes ### `ResourceLease` ### `LeaseStorage` **Methods:** #### `create_lease` ```python theme={null} create_lease(self, resource_ids: list[UUID], ttl: timedelta, metadata: T | None = None) -> ResourceLease[T] ``` Create a new resource lease. **Args:** * `resource_ids`: The IDs of the resources that the lease is associated with. * `ttl`: How long the lease should initially be held for. * `metadata`: Additional metadata associated with the lease. **Returns:** * A ResourceLease object representing the lease. #### `read_expired_lease_ids` ```python theme={null} read_expired_lease_ids(self, limit: int = 100) -> list[UUID] ``` Read the IDs of expired leases. **Args:** * `limit`: The maximum number of expired leases to read. **Returns:** * A list of UUIDs representing the expired leases. #### `read_lease` ```python theme={null} read_lease(self, lease_id: UUID) -> ResourceLease[T] | None ``` Read a resource lease. **Args:** * `lease_id`: The ID of the lease to read. 
**Returns:** * A ResourceLease object representing the lease, or None if not found. #### `renew_lease` ```python theme={null} renew_lease(self, lease_id: UUID, ttl: timedelta) -> bool | None ``` Renew a resource lease. **Args:** * `lease_id`: The ID of the lease to renew. * `ttl`: The new amount of time the lease should be held for. **Returns:** * True if the lease was successfully renewed, False if the lease does not exist or has already expired. None may be returned by legacy implementations for backwards compatibility (treated as success). #### `revoke_lease` ```python theme={null} revoke_lease(self, lease_id: UUID) -> None ``` Release a resource lease by removing it from the list of active leases. **Args:** * `lease_id`: The ID of the lease to release. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-messaging-__init__ # `prefect.server.utilities.messaging` ## Functions ### `create_cache` ```python theme={null} create_cache() -> Cache ``` Creates a new cache with the application's default settings. **Returns:** * a new Cache instance ### `create_publisher` ```python theme={null} create_publisher(topic: str, cache: Optional[Cache] = None, deduplicate_by: Optional[str] = None) -> Publisher ``` Creates a new publisher with the application's default settings. **Args:** * `topic`: the topic to publish to **Returns:** * a new Publisher instance ### `ephemeral_subscription` ```python theme={null} ephemeral_subscription(topic: str) -> AsyncGenerator[Mapping[str, Any], Any] ``` Creates an ephemeral subscription to the given source, removing it when the context exits. ### `create_consumer` ```python theme={null} create_consumer(topic: str, **kwargs: Any) -> Consumer ``` Creates a new consumer with the application's default settings. **Args:** * `topic`: the topic to consume from **Returns:** * a new Consumer instance ## Classes ### `Message` A protocol representing a message sent to a message broker.
**Methods:** #### `attributes` ```python theme={null} attributes(self) -> Mapping[str, Any] ``` #### `data` ```python theme={null} data(self) -> Union[str, bytes] ``` ### `Cache` **Methods:** #### `clear_recently_seen_messages` ```python theme={null} clear_recently_seen_messages(self) -> None ``` #### `forget_duplicates` ```python theme={null} forget_duplicates(self, attribute: str, messages: Iterable[Message]) -> None ``` #### `without_duplicates` ```python theme={null} without_duplicates(self, attribute: str, messages: Iterable[M]) -> list[M] ``` ### `Publisher` **Methods:** #### `publish_data` ```python theme={null} publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` ### `CapturedMessage` ### `CapturingPublisher` **Methods:** #### `publish_data` ```python theme={null} publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` ### `StopConsumer` Exception to raise to stop a consumer. ### `Consumer` Abstract base class for consumers that receive messages from a message broker and call a handler function for each message received. **Methods:** #### `cleanup` ```python theme={null} cleanup(self) -> None ``` Cleanup resources when the consumer is stopped. Override this method in subclasses that need to perform cleanup, such as unsubscribing from topics or closing connections. The default implementation is a no-op, which is appropriate for consumers that don't need explicit cleanup.
#### `run` ```python theme={null} run(self, handler: MessageHandler) -> None ``` Runs the consumer (indefinitely) ### `CacheModule` ### `BrokerModule` # memory Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-messaging-memory # `prefect.server.utilities.messaging.memory` ## Functions ### `log_metrics_periodically` ```python theme={null} log_metrics_periodically(interval: float = 2.0) -> None ``` ### `update_metric` ```python theme={null} update_metric(topic: str, key: str, amount: int = 1) -> None ``` ### `break_topic` ```python theme={null} break_topic() ``` ### `ephemeral_subscription` ```python theme={null} ephemeral_subscription(topic: str) -> AsyncGenerator[Mapping[str, Any], None] ``` ## Classes ### `MemoryMessage` ### `Subscription` A subscription to a topic. Messages are delivered to the subscription's queue and retried up to a maximum number of times. If a message cannot be delivered after the maximum number of retries it is moved to the dead letter queue. The dead letter queue is a directory of JSON files containing the serialized message. Messages remain in the dead letter queue until they are removed manually. **Attributes:** * `topic`: The topic that the subscription receives messages from. * `max_retries`: The maximum number of times a message will be retried for this subscription. * `dead_letter_queue_path`: The path to the dead letter queue folder. **Methods:** #### `deliver` ```python theme={null} deliver(self, message: MemoryMessage) -> None ``` Deliver a message to the subscription's queue. **Args:** * `message`: The message to deliver. #### `get` ```python theme={null} get(self) -> MemoryMessage ``` Get a message from the subscription's queue. #### `retry` ```python theme={null} retry(self, message: MemoryMessage) -> None ``` Place a message back on the retry queue. If the message has retried more than the maximum number of times it is moved to the dead letter queue. **Args:** * `message`: The message to retry. 
#### `send_to_dead_letter_queue` ```python theme={null} send_to_dead_letter_queue(self, message: MemoryMessage) -> None ``` Send a message to the dead letter queue. The dead letter queue is a directory of JSON files containing the serialized messages. **Args:** * `message`: The message to send to the dead letter queue. ### `Topic` **Methods:** #### `by_name` ```python theme={null} by_name(cls, name: str) -> 'Topic' ``` #### `clear` ```python theme={null} clear(self) -> None ``` #### `clear_all` ```python theme={null} clear_all(cls) -> None ``` #### `publish` ```python theme={null} publish(self, message: MemoryMessage) -> None ``` #### `subscribe` ```python theme={null} subscribe(self, **subscription_kwargs: Any) -> Subscription ``` #### `unsubscribe` ```python theme={null} unsubscribe(self, subscription: Subscription) -> None ``` ### `Cache` **Methods:** #### `clear_recently_seen_messages` ```python theme={null} clear_recently_seen_messages(self) -> None ``` #### `forget_duplicates` ```python theme={null} forget_duplicates(self, attribute: str, messages: Iterable[M]) -> None ``` #### `without_duplicates` ```python theme={null} without_duplicates(self, attribute: str, messages: Iterable[M]) -> list[M] ``` ### `Publisher` **Methods:** #### `publish_data` ```python theme={null} publish_data(self, data: bytes, attributes: Mapping[str, str]) -> None ``` ### `Consumer` **Methods:** #### `cleanup` ```python theme={null} cleanup(self) -> None ``` Cleanup resources by unsubscribing from the topic. This should be called when the consumer is no longer needed to prevent memory leaks from orphaned subscriptions. #### `run` ```python theme={null} run(self, handler: MessageHandler) -> None ``` # names Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-names # `prefect.server.utilities.names` This module is deprecated. Use `prefect.utilities.names` instead. 
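The retry/dead-letter flow described for the in-memory `Subscription` above — requeue until `max_retries` is exceeded, then persist the message as a JSON file — can be sketched as follows (a simplified, hypothetical model, not the actual implementation):

```python theme={null}
import json
import tempfile
from dataclasses import dataclass, field
from itertools import count
from pathlib import Path

@dataclass
class MemoryMessage:
    data: str
    retry_count: int = 0

@dataclass
class Subscription:
    max_retries: int
    dead_letter_queue_path: Path
    queue: list = field(default_factory=list)
    _ids: count = field(default_factory=count)

    def deliver(self, message: MemoryMessage) -> None:
        self.queue.append(message)

    def retry(self, message: MemoryMessage) -> None:
        # After too many retries, the message is dead-lettered instead of requeued.
        message.retry_count += 1
        if message.retry_count > self.max_retries:
            self.send_to_dead_letter_queue(message)
        else:
            self.queue.append(message)

    def send_to_dead_letter_queue(self, message: MemoryMessage) -> None:
        # Dead-lettered messages persist as JSON files until removed manually.
        self.dead_letter_queue_path.mkdir(parents=True, exist_ok=True)
        path = self.dead_letter_queue_path / f"message-{next(self._ids)}.json"
        path.write_text(json.dumps({"data": message.data, "retry_count": message.retry_count}))

sub = Subscription(max_retries=1, dead_letter_queue_path=Path(tempfile.mkdtemp()) / "dlq")
msg = MemoryMessage("hello")
sub.deliver(msg)
sub.queue.pop()
sub.retry(msg)            # first retry: back on the queue
assert sub.queue == [msg]
sub.queue.pop()
sub.retry(msg)            # exceeds max_retries: dead-lettered
assert sub.queue == [] and len(list(sub.dead_letter_queue_path.iterdir())) == 1
```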
# postgres_listener Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-postgres_listener # `prefect.server.utilities.postgres_listener` ## Functions ### `get_pg_notify_connection` ```python theme={null} get_pg_notify_connection() -> Connection | None ``` Establishes and returns a raw asyncpg connection for LISTEN/NOTIFY. Returns None if not a PostgreSQL connection URL. ### `pg_listen` ```python theme={null} pg_listen(connection: Connection, channel_name: str, heartbeat_interval: float = 5.0) -> AsyncGenerator[str, None] ``` Listens to a specific Postgres channel and yields payloads. Manages adding and removing the listener on the given connection. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-schemas-__init__ # `prefect.server.utilities.schemas` *This module is empty or contains only private/internal implementations.* # bases Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-schemas-bases # `prefect.server.utilities.schemas.bases` ## Functions ### `get_class_fields_only` ```python theme={null} get_class_fields_only(model: type[BaseModel]) -> set[str] ``` Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included. ## Classes ### `PrefectDescriptorBase` A base class for descriptor objects used with PrefectBaseModel Pydantic needs to be told about any kind of non-standard descriptor objects used on a model, in order for these not to be treated as a field type instead. This base class is registered as an ignored type with PrefectBaseModel and any classes that inherit from it will also be ignored. This allows such descriptors to be used as properties, methods or other bound descriptor use cases. ### `PrefectBaseModel` A base pydantic.BaseModel for all Prefect schemas and pydantic models. 
As the basis for most Prefect schemas, this base model ignores extra fields that are passed to it at instantiation. Because adding new fields to API payloads is not considered a breaking change, this ensures that any Prefect client loading data from a server running a possibly-newer version of Prefect will be able to process those new fields gracefully. **Methods:** #### `model_dump_for_orm` ```python theme={null} model_dump_for_orm(self) -> dict[str, Any] ``` Prefect extension to `BaseModel.model_dump`. Generate a Python dictionary representation of the model suitable for passing to SQLAlchemy model constructors, `INSERT` statements, etc. The critical difference here is that this method will return any nested BaseModel objects as `BaseModel` instances, rather than serialized Python dictionaries. Accepts the standard Pydantic `model_dump` arguments, except for `mode` (which is always "python"), `round_trip`, and `warnings`. Usage docs: [https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel\_dump](https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump) **Args:** * `include`: A list of fields to include in the output. * `exclude`: A list of fields to exclude from the output. * `by_alias`: Whether to use the field's alias in the dictionary key if defined. * `exclude_unset`: Whether to exclude fields that have not been explicitly set. * `exclude_defaults`: Whether to exclude fields that are set to their default value. * `exclude_none`: Whether to exclude fields that have a value of `None`. **Returns:** * A dictionary representation of the model, suitable for passing * to SQLAlchemy model constructors, INSERT statements, etc. #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `IDBaseModel` A PrefectBaseModel with an auto-generated UUID ID value. 
The ID is reset on copy() and not included in equality comparisons. **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. ### `TimeSeriesBaseModel` A PrefectBaseModel with a time-oriented UUIDv7 ID value. Used for models that operate like timeseries, such as runs, states, and logs. ### `ORMBaseModel` A PrefectBaseModel with an auto-generated UUID ID value and created / updated timestamps, intended for compatibility with our standard ORM models. The ID, created, and updated fields are reset on copy() and not included in equality comparisons. ### `ActionBaseModel` **Methods:** #### `model_validate_list` ```python theme={null} model_validate_list(cls, obj: Any) -> list[Self] ``` #### `reset_fields` ```python theme={null} reset_fields(self: Self) -> Self ``` Reset the fields of the model that are in the `_reset_fields` set. **Returns:** * A new instance of the model with the reset fields. # serializers Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-schemas-serializers # `prefect.server.utilities.schemas.serializers` ## Functions ### `orjson_dumps` ```python theme={null} orjson_dumps(v: Any) -> str ``` Utility for dumping a value to JSON using orjson. orjson.dumps returns bytes, to match standard json.dumps we need to decode. ### `orjson_dumps_extra_compatible` ```python theme={null} orjson_dumps_extra_compatible(v: Any) -> str ``` Utility for dumping a value to JSON using orjson, but allows for 1. non-string keys: this is helpful for situations like pandas dataframes, which can result in non-string keys 2. numpy types: for serializing numpy arrays orjson.dumps returns bytes, to match standard json.dumps we need to decode. 
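The `get_class_fields_only` helper described earlier in this section distinguishes fields declared on a class body from fields inherited untouched. A rough stdlib approximation of the idea, using plain annotations rather than Pydantic model fields (illustrative only):

```python theme={null}
def class_fields_only(cls: type) -> set[str]:
    # Annotations declared directly in the class body, including any that
    # redefine a parent's field -- but not fields inherited untouched.
    return set(cls.__dict__.get("__annotations__", {}))

class Parent:
    id: int
    name: str

class Child(Parent):
    name: str    # redefined on the subclass, so it is included
    extra: bool

assert class_fields_only(Child) == {"name", "extra"}
assert class_fields_only(Parent) == {"id", "name"}
```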
# server Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-server # `prefect.server.utilities.server` Utilities for the Prefect REST API server. ## Functions ### `method_paths_from_routes` ```python theme={null} method_paths_from_routes(routes: Sequence[BaseRoute]) -> set[str] ``` Generate a set of strings describing the given routes in the format `<method> <path>`. For example, "GET /logs/" ## Classes ### `PrefectAPIRoute` A FastAPIRoute class which attaches an async stack to requests that exits before a response is returned. Requests already have `request.scope['fastapi_astack']` which is an async stack for the full scope of the request. This stack is used for managing contexts of FastAPI dependencies. If we want to close a dependency before the request is complete (i.e. before returning a response to the user), we need a stack with a different scope. This extension adds this stack at `request.state.response_scoped_stack`. **Methods:** #### `get_route_handler` ```python theme={null} get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]] ``` ### `PrefectRouter` A base class for Prefect REST API routers. **Methods:** #### `add_api_route` ```python theme={null} add_api_route(self, path: str, endpoint: Callable[..., Any], **kwargs: Any) -> None ``` Add an API route. For routes that return content and have not specified a `response_model`, use return type annotation to infer the response model. For routes that return No-Content status codes, explicitly set a `response_class` to ensure nothing is returned in the response body.
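The response-model inference that `PrefectRouter.add_api_route` performs — falling back to the endpoint's return annotation when no `response_model` is given — can be sketched with `inspect` (a simplified illustration with hypothetical endpoint names; the real router also special-cases No-Content routes):

```python theme={null}
import inspect
from typing import Any, Optional

def infer_response_model(endpoint) -> Optional[Any]:
    # Use the endpoint's return type annotation as the response model,
    # unless it is missing or explicitly None.
    annotation = inspect.signature(endpoint).return_annotation
    if annotation is inspect.Signature.empty or annotation is None:
        return None
    return annotation

def read_logs() -> dict[str, str]:
    return {"level": "INFO"}

def delete_log() -> None:
    ...

assert infer_response_model(read_logs) == dict[str, str]
assert infer_response_model(delete_log) is None
```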
# subscriptions Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-subscriptions # `prefect.server.utilities.subscriptions` ## Functions ### `accept_prefect_socket` ```python theme={null} accept_prefect_socket(websocket: WebSocket) -> Optional[WebSocket] ``` ### `still_connected` ```python theme={null} still_connected(websocket: WebSocket) -> bool ``` Checks that a client websocket still seems to be connected during a period where the server is expected to be sending messages. # text_search_parser Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-text_search_parser # `prefect.server.utilities.text_search_parser` Text search query parser Parses text search queries according to the following syntax: * Space-separated terms → OR logic (include) * Prefix with `-` or `!` → Exclude term * Prefix with `+` → Required term (AND logic, future) * Quote phrases → Match exact phrase * Backslash escapes → Allow quotes within phrases (") * Case-insensitive, substring matching * 200 character limit ## Functions ### `parse_text_search_query` ```python theme={null} parse_text_search_query(query: str) -> TextSearchQuery ``` Parse a text search query string into structured components **Args:** * `query`: The query string to parse **Returns:** * TextSearchQuery with parsed include/exclude/required terms ## Classes ### `TextSearchQuery` Parsed text search query structure # user_templates Source: https://docs.prefect.io/v3/api-ref/python/prefect-server-utilities-user_templates # `prefect.server.utilities.user_templates` Utilities to support safely rendering user-supplied templates ## Functions ### `register_user_template_filters` ```python theme={null} register_user_template_filters(filters: dict[str, Any]) -> None ``` Register additional filters that will be available to user templates ### `validate_user_template` ```python theme={null} validate_user_template(template: str) -> None ``` ### `matching_types_in_templates` ```python 
theme={null} matching_types_in_templates(templates: list[str], types: set[str]) -> list[str] ``` ### `maybe_template` ```python theme={null} maybe_template(possible: str) -> bool ``` ### `render_user_template` ```python theme={null} render_user_template(template: str, context: dict[str, Any]) -> str ``` ### `render_user_template_sync` ```python theme={null} render_user_template_sync(template: str, context: dict[str, Any]) -> str ``` ## Classes ### `UserTemplateEnvironment` ### `TemplateSecurityError` Raised when extended validation of a template fails. ### `TemplateRenderError` Raised when a user-supplied template fails to render. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-__init__ # `prefect.settings` Prefect settings are defined using `BaseSettings` from `pydantic_settings`. `BaseSettings` can load setting values from system environment variables and each additionally specified `env_file`. The recommended user-facing way to access Prefect settings at this time is to import specific setting objects directly, like `from prefect.settings import PREFECT_API_URL; print(PREFECT_API_URL.value())`. Importantly, we replace the `callback` mechanism for updating settings with an "after" model\_validator that updates dependent settings. After [https://github.com/pydantic/pydantic/issues/9789](https://github.com/pydantic/pydantic/issues/9789) is resolved, we will be able to define context-aware defaults for settings, at which point we will not need to use the "after" model\_validator. # base Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-base # `prefect.settings.base` ## Functions ### `build_settings_config` ```python theme={null} build_settings_config(path: tuple[str, ...] 
= tuple(), frozen: bool = False) -> PrefectSettingsConfigDict ``` ## Classes ### `PrefectBaseSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `PrefectSettingsConfigDict` Configuration for the behavior of Prefect settings models. # constants Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-constants # `prefect.settings.constants` *This module is empty or contains only private/internal implementations.* # context Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-context # `prefect.settings.context` ## Functions ### `get_current_settings` ```python theme={null} get_current_settings() -> Settings ``` Returns a settings object populated with values from the current settings context or, if no settings context is active, the environment. 
### `temporary_settings` ```python theme={null} temporary_settings(updates: Optional[Mapping['Setting', Any]] = None, set_defaults: Optional[Mapping['Setting', Any]] = None, restore_defaults: Optional[Iterable['Setting']] = None) -> Generator[Settings, None, None] ``` Temporarily override the current settings by entering a new profile. See `Settings.copy_with_update` for details on different argument behavior. Examples:

```python theme={null}
from prefect.settings import PREFECT_API_URL

with temporary_settings(updates={PREFECT_API_URL: "foo"}):
    assert PREFECT_API_URL.value() == "foo"

    with temporary_settings(set_defaults={PREFECT_API_URL: "bar"}):
        assert PREFECT_API_URL.value() == "foo"

    with temporary_settings(restore_defaults={PREFECT_API_URL}):
        assert PREFECT_API_URL.value() is None

        with temporary_settings(set_defaults={PREFECT_API_URL: "bar"}):
            assert PREFECT_API_URL.value() == "bar"

assert PREFECT_API_URL.value() is None
```

# legacy Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-legacy # `prefect.settings.legacy` ## Classes ### `Setting` Mimics the old Setting object for compatibility with existing code.
**Methods:** #### `default` ```python theme={null} default(self) -> Any ``` #### `is_secret` ```python theme={null} is_secret(self) -> bool ``` #### `name` ```python theme={null} name(self) -> str ``` #### `value` ```python theme={null} value(self: Self) -> Any ``` #### `value_from` ```python theme={null} value_from(self: Self, settings: Settings) -> Any ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-__init__ # `prefect.settings.models` *This module is empty or contains only private/internal implementations.* # api Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-api # `prefect.settings.models.api` ## Classes ### `APISettings` Settings for interacting with the Prefect API **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
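`to_environment_variables` flattens a settings model into `PREFECT_*` environment variable names. A hypothetical dict-based sketch of the naming convention (the real method operates on the Pydantic model and also handles secrets, aliases, and unset fields):

```python theme={null}
def to_env_vars(settings: dict, prefix: str = "PREFECT") -> dict[str, str]:
    # Nested sections become underscore-joined, upper-cased variable names.
    env: dict[str, str] = {}
    for key, value in settings.items():
        name = f"{prefix}_{key}".upper()
        if isinstance(value, dict):
            env.update(to_env_vars(value, name))
        elif value is not None:
            env[name] = str(value)
    return env

settings = {"api": {"url": "http://127.0.0.1:4200/api", "request_timeout": 60.0}}
assert to_env_vars(settings) == {
    "PREFECT_API_URL": "http://127.0.0.1:4200/api",
    "PREFECT_API_REQUEST_TIMEOUT": "60.0",
}
```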
# cli Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-cli # `prefect.settings.models.cli` ## Classes ### `CLISettings` Settings for controlling CLI behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # client Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-client # `prefect.settings.models.client` ## Classes ### `ClientMetricsSettings` Settings for controlling metrics reporting from the client **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `ClientSettings` Settings for controlling API client behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
# cloud Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-cloud # `prefect.settings.models.cloud` ## Functions ### `default_cloud_ui_url` ```python theme={null} default_cloud_ui_url(settings: 'CloudSettings') -> Optional[str] ``` ## Classes ### `CloudSettings` Settings for interacting with Prefect Cloud **Methods:** #### `post_hoc_settings` ```python theme={null} post_hoc_settings(self) -> Self ``` Refactor on resolution of [https://github.com/pydantic/pydantic/issues/9789](https://github.com/pydantic/pydantic/issues/9789). We should not be modifying `__pydantic_fields_set__` directly, but until we can define dependencies between defaults in a first-class way, we need to clean up post-hoc default assignments to keep set/unset fields correct after instantiation. #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables.
# deployments Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-deployments # `prefect.settings.models.deployments` ## Classes ### `DeploymentsSettings` Settings for configuring deployments defaults **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-events # `prefect.settings.models.events` ## Classes ### `EventsSettings` Settings for controlling events behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # experiments Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-experiments # `prefect.settings.models.experiments` ## Classes ### `PluginsSettings` Settings for configuring the experimental plugin system **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
### `ExperimentsSettings` Settings for configuring experimental features **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # flows Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-flows # `prefect.settings.models.flows` ## Classes ### `FlowsSettings` Settings for controlling flow behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # internal Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-internal # `prefect.settings.models.internal` ## Classes ### `InternalSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
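Every settings model on these pages shares the same `settings_customise_sources` contract: sources are tried in the returned order, and the first source that supplies a value wins. A stdlib-only sketch of that "first source wins" resolution (illustrative only, not Prefect's implementation):

```python theme={null}
def resolve(key, sources):
    """Return the value for `key` from the highest-priority source that has it.

    `sources` is ordered highest priority first, mirroring the contract that
    the first callable returned by settings_customise_sources wins.
    """
    for source in sources:
        if key in source:
            return source[key]
    return None


# Hypothetical sources: init kwargs outrank environment variables.
init_settings = {"api_url": "http://init.example/api"}
env_settings = {"api_url": "http://env.example/api", "timeout": "30"}

assert resolve("api_url", [init_settings, env_settings]) == "http://init.example/api"
assert resolve("timeout", [init_settings, env_settings]) == "30"
```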
# logging Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-logging # `prefect.settings.models.logging` ## Functions ### `max_log_size_smaller_than_batch_size` ```python theme={null} max_log_size_smaller_than_batch_size(values: dict[str, Any]) -> dict[str, Any] ``` Validator for settings asserting that the batch size and max log size are compatible. ## Classes ### `LoggingToAPISettings` Settings for controlling logging to the API **Methods:** #### `emit_warnings` ```python theme={null} emit_warnings(self) -> Self ``` Emits warnings for misconfiguration of logging settings. #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `LoggingSettings` Settings for controlling logging behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] 
``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # results Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-results # `prefect.settings.models.results` ## Classes ### `ResultsSettings` Settings for controlling result storage behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
# root Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-root # `prefect.settings.models.root` ## Functions ### `canonical_environment_prefix` ```python theme={null} canonical_environment_prefix(settings: 'Settings') -> str ``` ## Classes ### `Settings` Settings for Prefect using Pydantic settings. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings) **Methods:** #### `connected_to_cloud` ```python theme={null} connected_to_cloud(self) -> bool ``` True when the API URL points at the configured Prefect Cloud API. #### `copy_with_update` ```python theme={null} copy_with_update(self: Self, updates: Optional[Mapping['Setting', Any]] = None, set_defaults: Optional[Mapping['Setting', Any]] = None, restore_defaults: Optional[Iterable['Setting']] = None) -> Self ``` Create a new Settings object with validation. **Args:** * `updates`: A mapping of settings to new values. Existing values for the given settings will be overridden. * `set_defaults`: A mapping of settings to new default values. Existing values for the given settings will only be overridden if they were not set. * `restore_defaults`: An iterable of settings to restore to their default values. **Returns:** * A new Settings object. #### `emit_warnings` ```python theme={null} emit_warnings(self) -> Self ``` More post-hoc validation of settings, including warnings for misconfigurations. #### `hash_key` ```python theme={null} hash_key(self) -> str ``` Return a hash key for the settings object. This is needed since some settings may be unhashable, like lists. #### `post_hoc_settings` ```python theme={null} post_hoc_settings(self) -> Self ``` Handle remaining complex default assignments that aren't yet migrated to dependent settings. With Pydantic 2.10's dependent settings feature, we've migrated simple path-based defaults to use default\_factory. 
The remaining items here require access to the full Settings instance or have complex interdependencies that will be migrated in future PRs. #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # runner Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-runner # `prefect.settings.models.runner` ## Classes ### `RunnerServerSettings` Settings for controlling runner server behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `RunnerSettings` Settings for controlling runner behavior **Methods:** #### `heartbeat_frequency` ```python theme={null} heartbeat_frequency(self) -> Optional[int] ``` Deprecated: Use flows.heartbeat\_frequency instead. #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
# __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-__init__ # `prefect.settings.models.server` *This module is empty or contains only private/internal implementations.* # api Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-api # `prefect.settings.models.server.api` ## Classes ### `ServerAPISettings` Settings for controlling API server behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # concurrency Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-concurrency # `prefect.settings.models.server.concurrency` ## Classes ### `ServerConcurrencySettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. 
The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # database Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-database # `prefect.settings.models.server.database` ## Functions ### `warn_on_database_password_value_without_usage` ```python theme={null} warn_on_database_password_value_without_usage(settings: ServerDatabaseSettings) -> None ``` Validator for settings warning if the database password is set but not used. ## Classes ### `SQLAlchemyTLSSettings` Settings for controlling SQLAlchemy mTLS context when using a PostgreSQL database. **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `SQLAlchemyConnectArgsSettings` Settings for controlling SQLAlchemy connection behavior; note that these settings only take effect when using a PostgreSQL database. **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `SQLAlchemySettings` Settings for controlling SQLAlchemy behavior; note that these settings only take effect when using a PostgreSQL database. 
**Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `ServerDatabaseSettings` Settings for controlling server database behavior **Methods:** #### `emit_warnings` ```python theme={null} emit_warnings(self) -> Self ``` More post-hoc validation of settings, including warnings for misconfigurations. #### `set_deprecated_sqlalchemy_settings_on_child_model_and_warn` ```python theme={null} set_deprecated_sqlalchemy_settings_on_child_model_and_warn(cls, values: dict[str, Any]) -> dict[str, Any] ``` Set deprecated settings on the child model. #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # deployments Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-deployments # `prefect.settings.models.server.deployments` ## Classes ### `ServerDeploymentsSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
# docket Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-docket # `prefect.settings.models.server.docket` ## Classes ### `ServerDocketSettings` Settings for controlling Docket behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # ephemeral Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-ephemeral # `prefect.settings.models.server.ephemeral` ## Classes ### `ServerEphemeralSettings` Settings for controlling ephemeral server behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # events Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-events # `prefect.settings.models.server.events` ## Classes ### `ServerEventsSettings` Settings for controlling behavior of the events subsystem **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
# flow_run_graph Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-flow_run_graph # `prefect.settings.models.server.flow_run_graph` ## Classes ### `ServerFlowRunGraphSettings` Settings for controlling behavior of the flow run graph **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # logs Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-logs # `prefect.settings.models.server.logs` ## Classes ### `ServerLogsSettings` Settings for controlling behavior of the logs subsystem **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # root Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-root # `prefect.settings.models.server.root` ## Classes ### `ServerSettings` Settings for controlling server behavior **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
# services Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-services # `prefect.settings.models.server.services` ## Classes ### `ServicesBaseSetting` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `ServerServicesCancellationCleanupSettings` Settings for controlling the cancellation cleanup service ### `ServerServicesDBVacuumSettings` Settings for controlling the database vacuum service **Methods:** #### `enabled_vacuum_types` ```python theme={null} enabled_vacuum_types(self) -> set[str] ``` Resolve `enabled` to a concrete set of vacuum type strings. 
Handles legacy boolean values: * `True` → `{"events", "flow_runs"}` * `False` → `{"events"}` (preserves old default) * `None` → `set()` ### `ServerServicesEventPersisterSettings` Settings for controlling the event persister service ### `ServerServicesEventLoggerSettings` Settings for controlling the event logger service ### `ServerServicesForemanSettings` Settings for controlling the foreman service ### `ServerServicesLateRunsSettings` Settings for controlling the late runs service ### `ServerServicesSchedulerSettings` Settings for controlling the scheduler service ### `ServerServicesPauseExpirationsSettings` Settings for controlling the pause expiration service ### `ServerServicesRepossessorSettings` Settings for controlling the repossessor service ### `ServerServicesTaskRunRecorderSettings` Settings for controlling the task run recorder service ### `ServerServicesTriggersSettings` Settings for controlling the triggers service ### `ServerServicesSettings` Settings for controlling server services **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
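The legacy-boolean handling described for `enabled_vacuum_types` above amounts to a small resolution function. An illustrative sketch of the documented mapping (not Prefect's actual implementation):

```python theme={null}
def resolve_vacuum_types(enabled):
    """Resolve an `enabled` value to a concrete set of vacuum type strings."""
    if enabled is True:
        return {"events", "flow_runs"}
    if enabled is False:
        return {"events"}  # preserves the old default
    if enabled is None:
        return set()
    return set(enabled)  # already an explicit collection of type names


assert resolve_vacuum_types(True) == {"events", "flow_runs"}
assert resolve_vacuum_types(False) == {"events"}
assert resolve_vacuum_types(None) == set()
```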
# tasks Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-tasks # `prefect.settings.models.server.tasks` ## Classes ### `ServerTasksSchedulingSettings` Settings for controlling server-side behavior related to task scheduling **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `ServerTasksSettings` Settings for controlling server-side behavior related to tasks **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # ui Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-server-ui # `prefect.settings.models.server.ui` ## Classes ### `ServerUISettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
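The `settings_customise_sources` hook repeated on these models controls lookup priority: the first source returned wins. A minimal illustration of first-match-wins resolution over ordered sources (plain Python with hypothetical field names, not the pydantic-settings API):

```python
def resolve(field: str, sources: list[dict]):
    """Return the value for `field` from the first source that defines it.
    Sources are ordered highest-priority first, mirroring the contract of
    `settings_customise_sources`."""
    for source in sources:
        if field in source:
            return source[field]
    raise KeyError(field)

# Init kwargs beat environment variables, which beat a profile file.
init_kwargs = {"log_level": "DEBUG"}
env_vars = {"log_level": "INFO", "api_url": "http://127.0.0.1:4200/api"}
profile = {"api_url": "https://example.invalid/api", "timeout": 30}
sources = [init_kwargs, env_vars, profile]
```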
# tasks Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-tasks # `prefect.settings.models.tasks` ## Classes ### `TasksRunnerSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `TasksSchedulingSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `TasksSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. 
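The `to_environment_variables` methods repeated across these models all perform the same kind of flattening: nested settings become `PREFECT_*` variable names. A sketch of the idea with hypothetical field names (not Prefect's implementation, which also handles aliases and secrets):

```python
def to_environment_variables(settings: dict, prefix: str = "PREFECT") -> dict:
    """Flatten a nested settings mapping into PREFECT_* environment
    variable names, skipping unset (None) values."""
    env = {}
    for key, value in settings.items():
        name = f"{prefix}_{key.upper()}"
        if isinstance(value, dict):
            env.update(to_environment_variables(value, prefix=name))
        elif value is not None:
            env[name] = str(value)
    return env
```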
# telemetry Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-telemetry # `prefect.settings.models.telemetry` ## Classes ### `TelemetrySettings` Settings for configuring Prefect telemetry **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # testing Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-testing # `prefect.settings.models.testing` ## Classes ### `TestingSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. 
See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # worker Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-models-worker # `prefect.settings.models.worker` ## Classes ### `WorkerWebserverSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] ``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. ### `WorkerSettings` **Methods:** #### `settings_customise_sources` ```python theme={null} settings_customise_sources(cls, settings_cls: type[BaseSettings], init_settings: PydanticBaseSettingsSource, env_settings: PydanticBaseSettingsSource, dotenv_settings: PydanticBaseSettingsSource, file_secret_settings: PydanticBaseSettingsSource) -> tuple[PydanticBaseSettingsSource, ...] 
``` Define an order for Prefect settings sources. The order of the returned callables decides the priority of inputs; first item is the highest priority. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) #### `to_environment_variables` ```python theme={null} to_environment_variables(self, exclude_unset: bool = False, include_secrets: bool = True, include_aliases: bool = False) -> dict[str, str] ``` Convert the settings object to a dictionary of environment variables. # profiles Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-profiles # `prefect.settings.profiles` ## Functions ### `load_profiles` ```python theme={null} load_profiles(include_defaults: bool = True) -> ProfilesCollection ``` Load profiles from the current profile path. Optionally include profiles from the default profile path. ### `load_current_profile` ```python theme={null} load_current_profile() -> Profile ``` Load the current profile from the default and current profile paths. This will *not* include settings from the current settings context. Only settings that have been persisted to the profiles file will be saved. ### `save_profiles` ```python theme={null} save_profiles(profiles: ProfilesCollection) -> None ``` Writes all non-default profiles to the current profiles path. ### `load_profile` ```python theme={null} load_profile(name: str) -> Profile ``` Load a single profile by name. ### `update_current_profile` ```python theme={null} update_current_profile(settings: dict[str | Setting, Any]) -> Profile ``` Update the persisted data for the profile currently in use. If the profile does not exist in the profiles file, it will be created. Given settings will be merged with the existing settings as described in `ProfilesCollection.update_profile`. **Returns:** * The new profile. ## Classes ### `Profile` A user profile containing settings. 
**Methods:** #### `to_environment_variables` ```python theme={null} to_environment_variables(self) -> dict[str, str] ``` Convert the profile settings to a dictionary of environment variables. #### `validate_settings` ```python theme={null} validate_settings(self) -> None ``` Validate all settings in this profile by creating a partial Settings object with the nested structure properly constructed using accessor paths. ### `ProfilesCollection` A utility class for working with a collection of profiles. Profiles in the collection must have unique names. The collection may store the name of the active profile. **Methods:** #### `active_profile` ```python theme={null} active_profile(self) -> Profile | None ``` Retrieve the active profile in this collection. #### `add_profile` ```python theme={null} add_profile(self, profile: Profile) -> None ``` Add a profile to the collection. If the profile name already exists, an exception will be raised. #### `items` ```python theme={null} items(self) -> list[tuple[str, Profile]] ``` #### `names` ```python theme={null} names(self) -> set[str] ``` Return a set of profile names in this collection. #### `remove_profile` ```python theme={null} remove_profile(self, name: str) -> None ``` Remove a profile from the collection. #### `set_active` ```python theme={null} set_active(self, name: str | None, check: bool = True) -> None ``` Set the active profile name in the collection. A null value may be passed to indicate that this collection does not determine the active profile. #### `to_dict` ```python theme={null} to_dict(self) -> dict[str, Any] ``` Convert to a dictionary suitable for writing to disk. #### `update_profile` ```python theme={null} update_profile(self, name: str, settings: dict[Setting, Any], source: Path | None = None) -> Profile ``` Add a profile to the collection or update the existing one if the name is already present in this collection. If updating an existing profile, the settings will be merged. 
Settings can be dropped from the existing profile by setting them to `None` in the new profile. Returns the new profile object. #### `without_profile_source` ```python theme={null} without_profile_source(self, path: Path | None) -> 'ProfilesCollection' ``` Remove profiles that were loaded from a given path. Returns a new collection. # sources Source: https://docs.prefect.io/v3/api-ref/python/prefect-settings-sources # `prefect.settings.sources` ## Classes ### `EnvFilterSettingsSource` Custom pydantic settings source to filter out specific environment variables. All validation aliases are loaded from environment variables by default. We use `AliasPath` to maintain the ability to set fields via model initialization, but those shouldn't be loaded from environment variables. This loader allows us to specify which environment variables should be ignored. ### `FilteredDotEnvSettingsSource` ### `ProfileSettingsTomlLoader` Custom pydantic settings source to load profile settings from a toml file. See [https://docs.pydantic.dev/latest/concepts/pydantic\_settings/#customise-settings-sources](https://docs.pydantic.dev/latest/concepts/pydantic_settings/#customise-settings-sources) **Methods:** #### `get_field_value` ```python theme={null} get_field_value(self, field: FieldInfo, field_name: str) -> Tuple[Any, str, bool] ``` Concrete implementation to get the field value from the profile settings ### `TomlConfigSettingsSourceBase` **Methods:** #### `get_field_value` ```python theme={null} get_field_value(self, field: FieldInfo, field_name: str) -> tuple[Any, str, bool] ``` Concrete implementation to get the field value from toml data #### `prepare_field_value` ```python theme={null} prepare_field_value(self, field_name: str, field: FieldInfo, value: Any, value_is_complex: bool) -> Any ``` Override to skip JSON decoding for dict values already parsed from TOML. 
### `PrefectTomlConfigSettingsSource` Custom pydantic settings source to load settings from a prefect.toml file **Methods:** #### `get_field_value` ```python theme={null} get_field_value(self, field: FieldInfo, field_name: str) -> tuple[Any, str, bool] ``` Concrete implementation to get the field value from toml data #### `prepare_field_value` ```python theme={null} prepare_field_value(self, field_name: str, field: FieldInfo, value: Any, value_is_complex: bool) -> Any ``` Override to skip JSON decoding for dict values already parsed from TOML. ### `PyprojectTomlConfigSettingsSource` Custom pydantic settings source to load settings from a pyproject.toml file **Methods:** #### `get_field_value` ```python theme={null} get_field_value(self, field: FieldInfo, field_name: str) -> tuple[Any, str, bool] ``` Concrete implementation to get the field value from toml data #### `prepare_field_value` ```python theme={null} prepare_field_value(self, field_name: str, field: FieldInfo, value: Any, value_is_complex: bool) -> Any ``` Override to skip JSON decoding for dict values already parsed from TOML. # states Source: https://docs.prefect.io/v3/api-ref/python/prefect-states # `prefect.states` ## Functions ### `to_state_create` ```python theme={null} to_state_create(state: State) -> 'StateCreate' ``` Convert the state to a `StateCreate` type which can be used to set the state of a run in the API. This method will drop this state's `data` if it is not a result type. Only results should be sent to the API. Other data is only available locally. ### `get_state_result` ```python theme={null} get_state_result(state: 'State[R]', raise_on_failure: bool = True, retry_result_failure: bool = True) -> 'R' ``` Get the result from a state. 
See `State.result()` ### `format_exception` ```python theme={null} format_exception(exc: BaseException, tb: TracebackType = None) -> str ``` ### `exception_to_crashed_state` ```python theme={null} exception_to_crashed_state(exc: BaseException, result_store: Optional['ResultStore'] = None) -> State ``` Takes an exception that occurs *outside* of user code and converts it to a 'Crash' exception with a 'Crashed' state. ### `exception_to_failed_state` ```python theme={null} exception_to_failed_state(exc: Optional[BaseException] = None, result_store: Optional['ResultStore'] = None, write_result: bool = False, **kwargs: Any) -> State[BaseException] ``` Convenience function for creating `Failed` states from exceptions ### `return_value_to_state` ```python theme={null} return_value_to_state(retval: 'R', result_store: 'ResultStore', key: Optional[str] = None, expiration: Optional[datetime.datetime] = None, write_result: bool = False) -> 'State[R]' ``` Given a return value from a user's function, create a `State` the run should be placed in. * If data is returned, we create a 'COMPLETED' state with the data * If a single, manually created state is returned, we use that state as given (manual creation is determined by the lack of ids) * If an upstream state or iterable of upstream states is returned, we apply the aggregate rule The aggregate rule says that given multiple states we will determine the final state such that: * If any states are not COMPLETED the final state is FAILED * If all of the states are COMPLETED the final state is COMPLETED * The states will be placed in the final state `data` attribute Callers should resolve all futures into states before passing return values to this function. ### `aget_state_exception` ```python theme={null} aget_state_exception(state: State) -> BaseException ``` Get the exception from a state asynchronously. If not given a FAILED or CRASHED state, this raises a `ValueError`. 
If the state result is a state, its exception will be returned. If the state result is an iterable of states, the exception of the first failure will be returned. If the state result is a string, a wrapper exception will be returned with the string as the message. If the state result is null, a wrapper exception will be returned with the state message attached. If the state result is not of a known type, a `TypeError` will be raised. When a wrapper exception is returned, the type will be: * `FailedRun` if the state type is FAILED. * `CrashedRun` if the state type is CRASHED. * `CancelledRun` if the state type is CANCELLED. ### `get_state_exception` ```python theme={null} get_state_exception(state: State) -> BaseException ``` Get the exception from a state. If not given a FAILED or CRASHED state, this raises a `ValueError`. If the state result is a state, its exception will be returned. If the state result is an iterable of states, the exception of the first failure will be returned. If the state result is a string, a wrapper exception will be returned with the string as the message. If the state result is null, a wrapper exception will be returned with the state message attached. If the state result is not of a known type, a `TypeError` will be raised. When a wrapper exception is returned, the type will be: * `FailedRun` if the state type is FAILED. * `CrashedRun` if the state type is CRASHED. * `CancelledRun` if the state type is CANCELLED. ### `araise_state_exception` ```python theme={null} araise_state_exception(state: State) -> None ``` Given a FAILED or CRASHED state, raise the contained exception asynchronously. ### `raise_state_exception` ```python theme={null} raise_state_exception(state: State) -> None ``` Given a FAILED or CRASHED state, raise the contained exception. 
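The wrapper-exception rules above can be captured in a few lines. A sketch using stand-in exception classes (the real ones live in `prefect.exceptions`):

```python
# Stand-in wrapper exception classes for illustration only.
class FailedRun(Exception): ...
class CrashedRun(Exception): ...
class CancelledRun(Exception): ...

# Wrapper exception chosen per state type, per the rules above.
_WRAPPERS = {"FAILED": FailedRun, "CRASHED": CrashedRun, "CANCELLED": CancelledRun}

def wrapper_for(state_type: str, message: str) -> BaseException:
    """Build the wrapper exception used when a state carries no real
    exception object (e.g. its result is a string or null)."""
    if state_type not in _WRAPPERS:
        raise ValueError(f"cannot get an exception from a {state_type} state")
    return _WRAPPERS[state_type](message)
```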
### `is_state_iterable` ```python theme={null} is_state_iterable(obj: Any) -> TypeGuard[Iterable[State]] ``` Check if the given object is an iterable of states. Supported iterables are: * set * list * tuple Other iterables will return `False` even if they contain states. ### `Scheduled` ```python theme={null} Scheduled(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Scheduled` states. **Returns:** * a Scheduled state ### `Completed` ```python theme={null} Completed(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Completed` states. **Returns:** * a Completed state ### `Running` ```python theme={null} Running(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Running` states. **Returns:** * a Running state ### `Failed` ```python theme={null} Failed(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Failed` states. **Returns:** * a Failed state ### `Crashed` ```python theme={null} Crashed(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Crashed` states. **Returns:** * a Crashed state ### `Cancelling` ```python theme={null} Cancelling(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Cancelling` states. **Returns:** * a Cancelling state ### `Cancelled` ```python theme={null} Cancelled(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Cancelled` states. **Returns:** * a Cancelled state ### `Pending` ```python theme={null} Pending(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Pending` states. 
**Returns:** * a Pending state ### `Paused` ```python theme={null} Paused(cls: Type['State[R]'] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[datetime.datetime] = None, reschedule: bool = False, pause_key: Optional[str] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Paused` states. **Returns:** * a Paused state ### `Suspended` ```python theme={null} Suspended(cls: Type['State[R]'] = State, timeout_seconds: Optional[int] = None, pause_expiration_time: Optional[datetime.datetime] = None, pause_key: Optional[str] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Suspended` states. **Returns:** * a Suspended state ### `AwaitingRetry` ```python theme={null} AwaitingRetry(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `AwaitingRetry` states. **Returns:** * an AwaitingRetry state ### `AwaitingConcurrencySlot` ```python theme={null} AwaitingConcurrencySlot(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `AwaitingConcurrencySlot` states. **Returns:** * an AwaitingConcurrencySlot state ### `Submitting` ```python theme={null} Submitting(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Submitting` states. **Returns:** * a Submitting state ### `InfrastructurePending` ```python theme={null} InfrastructurePending(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `InfrastructurePending` states. **Returns:** * an InfrastructurePending state ### `Retrying` ```python theme={null} Retrying(cls: Type['State[R]'] = State, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Retrying` states. 
**Returns:** * a Retrying state ### `Late` ```python theme={null} Late(cls: Type['State[R]'] = State, scheduled_time: Optional[datetime.datetime] = None, **kwargs: Any) -> 'State[R]' ``` Convenience function for creating `Late` states. **Returns:** * a Late state ## Classes ### `StateGroup` **Methods:** #### `all_completed` ```python theme={null} all_completed(self) -> bool ``` #### `all_final` ```python theme={null} all_final(self) -> bool ``` #### `any_cancelled` ```python theme={null} any_cancelled(self) -> bool ``` #### `any_failed` ```python theme={null} any_failed(self) -> bool ``` #### `any_paused` ```python theme={null} any_paused(self) -> bool ``` #### `counts_message` ```python theme={null} counts_message(self) -> str ``` #### `fail_count` ```python theme={null} fail_count(self) -> int ``` # task_engine Source: https://docs.prefect.io/v3/api-ref/python/prefect-task_engine # `prefect.task_engine` ## Functions ### `run_task_sync` ```python theme={null} run_task_sync(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_task_async` ```python theme={null} run_task_async(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Union[R, State, None] ``` ### `run_generator_task_sync` ```python theme={null} run_generator_task_sync(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, 
Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Generator[R, None, None] ``` ### `run_generator_task_async` ```python theme={null} run_generator_task_async(task: 'Task[P, R]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> AsyncGenerator[R, None] ``` ### `run_task` ```python theme={null} run_task(task: 'Task[P, Union[R, Coroutine[Any, Any, R]]]', task_run_id: Optional[UUID] = None, task_run: Optional[TaskRun] = None, parameters: Optional[dict[str, Any]] = None, wait_for: Optional['OneOrManyFutureOrResult[Any]'] = None, return_type: Literal['state', 'result'] = 'result', dependencies: Optional[dict[str, set[RunInput]]] = None, context: Optional[dict[str, Any]] = None) -> Union[R, State, None, Coroutine[Any, Any, Union[R, State, None]]] ``` Runs the provided task. **Args:** * `task`: The task to run * `task_run_id`: The ID of the task run; if not provided, a new task run will be created * `task_run`: The task run object; if not provided, a new task run will be created * `parameters`: The parameters to pass to the task * `wait_for`: A list of futures to wait for before running the task * `return_type`: The return type to return; either "state" or "result" * `dependencies`: A dictionary of task run inputs to use for dependency tracking * `context`: A dictionary containing the context to use for the task run; only required if the task is running in a remote environment **Returns:** * The result of the task run ## Classes ### `TaskRunTimeoutError` Raised when a task run exceeds its timeout. 
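The broad return type of `run_task` reflects a dispatch over the four entrypoints above: the wrapped function's kind determines whether a value, a coroutine, or a (possibly async) generator comes back. A sketch of that dispatch pattern, using hypothetical stand-ins for the entrypoints (the real ones take a `Task` plus run metadata, not a bare callable):

```python
import inspect

# Hypothetical stand-ins for the four engine entrypoints documented above.
def run_task_sync(fn, **p):            return fn(**p)
def run_task_async(fn, **p):           return fn(**p)  # a coroutine
def run_generator_task_sync(fn, **p):  return fn(**p)  # a generator
def run_generator_task_async(fn, **p): return fn(**p)  # an async generator

def dispatch(fn, **parameters):
    """Pick an entrypoint by the kind of the wrapped function, mirroring
    how `run_task` can return a value, a coroutine, or a (async) generator."""
    if inspect.isasyncgenfunction(fn):
        return run_generator_task_async(fn, **parameters)
    if inspect.isgeneratorfunction(fn):
        return run_generator_task_sync(fn, **parameters)
    if inspect.iscoroutinefunction(fn):
        return run_task_async(fn, **parameters)
    return run_task_sync(fn, **parameters)
```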
### `BaseTaskRunEngine` **Methods:** #### `compute_transaction_key` ```python theme={null} compute_transaction_key(self) -> Optional[str] ``` #### `handle_rollback` ```python theme={null} handle_rollback(self, txn: Transaction) -> None ``` #### `is_cancelled` ```python theme={null} is_cancelled(self) -> bool ``` #### `is_running` ```python theme={null} is_running(self) -> bool ``` Whether or not the engine is currently running a task. #### `log_finished_message` ```python theme={null} log_finished_message(self) -> None ``` #### `record_terminal_state_timing` ```python theme={null} record_terminal_state_timing(self, state: State) -> None ``` #### `state` ```python theme={null} state(self) -> State ``` ### `SyncTaskRunEngine` **Methods:** #### `asset_context` ```python theme={null} asset_context(self) ``` #### `begin_run` ```python theme={null} begin_run(self) -> None ``` #### `call_hooks` ```python theme={null} call_hooks(self, state: Optional[State] = None) -> None ``` #### `call_task_fn` ```python theme={null} call_task_fn(self, transaction: Transaction) -> Union[ResultRecord[Any], None, Coroutine[Any, Any, R], R] ``` Convenience method to call the task function. Returns a coroutine if the task is async. #### `can_retry` ```python theme={null} can_retry(self, exc_or_state: Exception | State[R]) -> bool ``` #### `client` ```python theme={null} client(self) -> SyncPrefectClient ``` #### `handle_crash` ```python theme={null} handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python theme={null} handle_exception(self, exc: Exception) -> None ``` #### `handle_retry` ```python theme={null} handle_retry(self, exc_or_state: Exception | State[R]) -> bool ``` Handle any task run retries. * If the task has retries left, and the retry condition is met, set the task to retrying and return True. * If the task has a retry delay, place in AwaitingRetry state with a delayed scheduled time. 
* If the task has no retries left, or the retry condition is not met, return False. #### `handle_success` ```python theme={null} handle_success(self, result: R, transaction: Transaction) -> Union[ResultRecord[R], None, Coroutine[Any, Any, R], R] ``` #### `handle_timeout` ```python theme={null} handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python theme={null} initialize_run(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> Generator[Self, Any, Any] ``` Enters a client context and creates a task run if needed. #### `result` ```python theme={null} result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python theme={null} run_context(self) ``` #### `set_state` ```python theme={null} set_state(self, state: State[R], force: bool = False) -> State[R] ``` #### `setup_run_context` ```python theme={null} setup_run_context(self, client: Optional[SyncPrefectClient] = None) ``` #### `start` ```python theme={null} start(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> Generator[None, None, None] ``` #### `transaction_context` ```python theme={null} transaction_context(self) -> Generator[Transaction, None, None] ``` #### `wait_until_ready` ```python theme={null} wait_until_ready(self) -> None ``` Sync version: Waits until the scheduled time (if it's in the future), then enters Running. ### `AsyncTaskRunEngine` **Methods:** #### `asset_context` ```python theme={null} asset_context(self) ``` #### `begin_run` ```python theme={null} begin_run(self) -> None ``` #### `call_hooks` ```python theme={null} call_hooks(self, state: Optional[State] = None) -> None ``` #### `call_task_fn` ```python theme={null} call_task_fn(self, transaction: AsyncTransaction) -> Union[ResultRecord[Any], None, Coroutine[Any, Any, R], R] ``` Convenience method to call the task function. Returns a coroutine if the task is async. 
#### `can_retry` ```python theme={null} can_retry(self, exc_or_state: Exception | State[R]) -> bool ``` #### `client` ```python theme={null} client(self) -> PrefectClient ``` #### `handle_crash` ```python theme={null} handle_crash(self, exc: BaseException) -> None ``` #### `handle_exception` ```python theme={null} handle_exception(self, exc: Exception) -> None ``` #### `handle_retry` ```python theme={null} handle_retry(self, exc_or_state: Exception | State[R]) -> bool ``` Handle any task run retries. * If the task has retries left, and the retry condition is met, set the task to retrying and return True. * If the task has a retry delay, place in AwaitingRetry state with a delayed scheduled time. * If the task has no retries left, or the retry condition is not met, return False. #### `handle_success` ```python theme={null} handle_success(self, result: R, transaction: AsyncTransaction) -> Union[ResultRecord[R], None, Coroutine[Any, Any, R], R] ``` #### `handle_timeout` ```python theme={null} handle_timeout(self, exc: TimeoutError) -> None ``` #### `initialize_run` ```python theme={null} initialize_run(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> AsyncGenerator[Self, Any] ``` Enters a client context and creates a task run if needed. 
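`initialize_run` is typed as an async generator used as a context manager. The enter/exit shape it follows (acquire a client and create the run on entry, release the client on exit even if the body raises) can be sketched generically; this is a hypothetical stand-in, not Prefect's implementation:

```python theme={null}
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def initialize_run_sketch(state: dict):
    # Enter: open a client and create the task run if needed.
    state["client_open"] = True
    try:
        yield state
    finally:
        # Exit: always release the client, even if the body raised.
        state["client_open"] = False


async def demo() -> tuple[bool, bool]:
    state: dict = {}
    async with initialize_run_sketch(state) as s:
        inside = s["client_open"]
    return inside, state["client_open"]
```

Running `asyncio.run(demo())` yields `(True, False)`: the client is open inside the block and closed after it.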
#### `result` ```python theme={null} result(self, raise_on_failure: bool = True) -> 'Union[R, State, None]' ``` #### `run_context` ```python theme={null} run_context(self) ``` #### `set_state` ```python theme={null} set_state(self, state: State, force: bool = False) -> State ``` #### `setup_run_context` ```python theme={null} setup_run_context(self, client: Optional[PrefectClient] = None) ``` #### `start` ```python theme={null} start(self, task_run_id: Optional[UUID] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> AsyncGenerator[None, None] ``` #### `transaction_context` ```python theme={null} transaction_context(self) -> AsyncGenerator[AsyncTransaction, None] ``` #### `wait_until_ready` ```python theme={null} wait_until_ready(self) -> None ``` Waits until the scheduled time (if it's in the future), then enters Running. # task_runners Source: https://docs.prefect.io/v3/api-ref/python/prefect-task_runners # `prefect.task_runners` ## Classes ### `TaskRunner` Abstract base class for task runners. A task runner is responsible for submitting tasks to the task run engine running in an execution environment. Submitted tasks are non-blocking and return a future object that can be used to wait for the task to complete and retrieve the result. Task runners are context managers and should be used in a `with` block to ensure proper cleanup of resources. **Methods:** #### `duplicate` ```python theme={null} duplicate(self) -> Self ``` Return a new instance of this task runner with the same configuration. #### `map` ```python theme={null} map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any | unmapped[Any] | allow_failure[Any]], wait_for: Iterable[PrefectFuture[R]] | None = None) -> PrefectFutureList[F] ``` Submit multiple tasks to the task run engine. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. * `wait_for`: A list of futures that the task depends on.
**Returns:** * An iterable of future objects that can be used to wait for the tasks to complete and retrieve the results. #### `name` ```python theme={null} name(self) -> str ``` The name of this task runner. #### `submit` ```python theme={null} submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> F ``` #### `submit` ```python theme={null} submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> F ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> F ``` ### `ThreadPoolTaskRunner` A task runner that executes tasks in a separate thread pool. **Attributes:** * `max_workers`: The maximum number of threads to use for executing tasks. Defaults to `PREFECT_TASK_RUNNER_THREAD_POOL_MAX_WORKERS` or `sys.maxsize`. **Examples:** Use a thread pool task runner with a flow: ```python theme={null} from prefect import flow, task from prefect.task_runners import ThreadPoolTaskRunner @task def some_io_bound_task(x: int) -> int: # making a query to a database, reading a file, etc. return x * 2 @flow(task_runner=ThreadPoolTaskRunner(max_workers=3)) # use at most 3 threads at a time def my_io_bound_flow(): futures = [] for i in range(10): future = some_io_bound_task.submit(i * 100) futures.append(future) return [future.result() for future in futures] ``` Use a thread pool task runner as a context manager: ```python theme={null} from prefect import task from prefect.task_runners import ThreadPoolTaskRunner @task def some_io_bound_task(x: int) -> int: # making a query to a database, reading a file, etc.
return x * 2 # Use the runner directly with ThreadPoolTaskRunner(max_workers=2) as runner: future1 = runner.submit(some_io_bound_task, {"x": 1}) future2 = runner.submit(some_io_bound_task, {"x": 2}) result1 = future1.result() # 2 result2 = future2.result() # 4 ``` Configure max workers via settings: ```python theme={null} # Set via environment variable # export PREFECT_TASK_RUNNER_THREAD_POOL_MAX_WORKERS=8 from prefect import flow from prefect.task_runners import ThreadPoolTaskRunner @flow(task_runner=ThreadPoolTaskRunner()) # Uses 8 workers from setting def my_flow(): ... ``` **Methods:** #### `cancel_all` ```python theme={null} cancel_all(self) -> None ``` #### `duplicate` ```python theme={null} duplicate(self) -> 'ThreadPoolTaskRunner[R]' ``` #### `map` ```python theme={null} map(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, R | 
CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` Submit a task to the task run engine running in a separate thread. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. * `wait_for`: A list of futures that the task depends on. **Returns:** * A future object that can be used to wait for the task to complete and retrieve the result. ### `ProcessPoolTaskRunner` A task runner that executes tasks in a separate process pool. This task runner uses `ProcessPoolExecutor` to run tasks in separate processes, providing true parallelism for CPU-bound tasks and process isolation. Tasks are executed with proper context propagation and error handling. **Attributes:** * `max_workers`: The maximum number of processes to use for executing tasks. Defaults to `multiprocessing.cpu_count()` if `PREFECT_TASKS_RUNNER_PROCESS_POOL_MAX_WORKERS` is not set.
**Examples:** Use a process pool task runner with a flow: ```python theme={null} from prefect import flow, task from prefect.task_runners import ProcessPoolTaskRunner @task def compute_heavy_task(n: int) -> int: # CPU-intensive computation that benefits from process isolation return sum(i ** 2 for i in range(n)) @flow(task_runner=ProcessPoolTaskRunner(max_workers=4)) def my_flow(): futures = [] for i in range(10): future = compute_heavy_task.submit(i * 1000) futures.append(future) return [future.result() for future in futures] ``` Use a process pool task runner as a context manager: ```python theme={null} from prefect import task from prefect.task_runners import ProcessPoolTaskRunner @task def my_task(x: int) -> int: return x * 2 # Use the runner directly with ProcessPoolTaskRunner(max_workers=2) as runner: future1 = runner.submit(my_task, {"x": 1}) future2 = runner.submit(my_task, {"x": 2}) result1 = future1.result() # 2 result2 = future2.result() # 4 ``` Configure max workers via settings: ```python theme={null} # Set via environment variable # export PREFECT_TASKS_RUNNER_PROCESS_POOL_MAX_WORKERS=8 from prefect import flow from prefect.task_runners import ProcessPoolTaskRunner @flow(task_runner=ProcessPoolTaskRunner()) # Uses 8 workers from setting def my_flow(): ...
``` **Methods:** #### `cancel_all` ```python theme={null} cancel_all(self) -> None ``` #### `duplicate` ```python theme={null} duplicate(self) -> Self ``` #### `map` ```python theme={null} map(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectConcurrentFuture[R]] ``` #### `set_subprocess_message_processor_factories` ```python theme={null} set_subprocess_message_processor_factories(self, subprocess_message_processor_factories: Iterable[_SubprocessMessageProcessorFactory] | None = None) -> None ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectConcurrentFuture[R] ``` Submit a task to the task run engine running in a separate process. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. 
* `wait_for`: A list of futures that the task depends on. * `dependencies`: A dictionary of dependencies for the task. **Returns:** * A future object that can be used to wait for the task to complete and retrieve the result. #### `subprocess_message_processor_factories` ```python theme={null} subprocess_message_processor_factories(self) -> tuple[_SubprocessMessageProcessorFactory, ...] ``` #### `subprocess_message_processor_factories` ```python theme={null} subprocess_message_processor_factories(self, subprocess_message_processor_factories: Iterable[_SubprocessMessageProcessorFactory] | None = None) -> None ``` ### `PrefectTaskRunner` **Methods:** #### `duplicate` ```python theme={null} duplicate(self) -> 'PrefectTaskRunner[R]' ``` #### `map` ```python theme={null} map(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDistributedFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDistributedFuture[R]] ``` #### `map` ```python theme={null} map(self, task: 'Task[P, R | CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None) -> PrefectFutureList[PrefectDistributedFuture[R]] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, CoroutineType[Any, Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectDistributedFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[Any, R]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectDistributedFuture[R] ``` #### `submit` ```python theme={null} submit(self, task: 'Task[P, R | CoroutineType[Any,
Any, R]]', parameters: dict[str, Any], wait_for: Iterable[PrefectFuture[Any]] | None = None, dependencies: dict[str, set[RunInput]] | None = None) -> PrefectDistributedFuture[R] ``` Submit a task to the task run engine running in a separate thread. **Args:** * `task`: The task to submit. * `parameters`: The parameters to use when running the task. * `wait_for`: A list of futures that the task depends on. **Returns:** * A future object that can be used to wait for the task to complete and retrieve the result. # task_runs Source: https://docs.prefect.io/v3/api-ref/python/prefect-task_runs # `prefect.task_runs` ## Classes ### `TaskRunWaiter` A service used for waiting for a task run to finish. This service listens for task run events and provides a way to wait for a specific task run to finish. This is useful for waiting for a task run to finish before continuing execution. The service is a singleton and must be started before use. The service will automatically start when the first instance is created. A single websocket connection is used to listen for task run events. The service can be used to wait for a task run to finish by calling `TaskRunWaiter.wait_for_task_run` with the task run ID to wait for. The method will return when the task run has finished or the timeout has elapsed. The service will automatically stop when the Python process exits or when the global loop thread is stopped.
Example: ```python theme={null} import asyncio from uuid import uuid4 from prefect import task from prefect.task_engine import run_task_async from prefect.task_runs import TaskRunWaiter @task async def test_task(): await asyncio.sleep(5) print("Done!") async def main(): task_run_id = uuid4() asyncio.create_task(run_task_async(task=test_task, task_run_id=task_run_id)) await TaskRunWaiter.wait_for_task_run(task_run_id) print("Task run finished") if __name__ == "__main__": asyncio.run(main()) ``` **Methods:** #### `add_done_callback` ```python theme={null} add_done_callback(cls, task_run_id: uuid.UUID, callback: Callable[[], None]) -> None ``` Add a callback to be called when a task run finishes. **Args:** * `task_run_id`: The ID of the task run to wait for. * `callback`: The callback to call when the task run finishes. #### `instance` ```python theme={null} instance(cls) -> Self ``` Get the singleton instance of TaskRunWaiter. #### `start` ```python theme={null} start(self) -> None ``` Start the TaskRunWaiter service. #### `stop` ```python theme={null} stop(self) -> None ``` Stop the TaskRunWaiter service. #### `wait_for_task_run` ```python theme={null} wait_for_task_run(cls, task_run_id: uuid.UUID, timeout: Optional[float] = None) -> Optional[State[Any]] ``` Wait for a task run to finish and return its final state. Note this relies on a websocket connection to receive events from the server and will not work with an ephemeral server. **Args:** * `task_run_id`: The ID of the task run to wait for. * `timeout`: The maximum time to wait for the task run to finish. Defaults to None. **Returns:** * The final state of the task run if available, None otherwise. 
# task_worker Source: https://docs.prefect.io/v3/api-ref/python/prefect-task_worker # `prefect.task_worker` ## Functions ### `should_try_to_read_parameters` ```python theme={null} should_try_to_read_parameters(task: Task[P, R], task_run: TaskRun) -> bool ``` Determines whether a task run should read parameters from the result store. ### `create_status_server` ```python theme={null} create_status_server(task_worker: TaskWorker) -> FastAPI ``` ### `aserve` ```python theme={null} aserve(*tasks: Task[P, R]) -> None ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. Tasks do not need to be within a flow run context to be submitted. You must `.submit` the same task object that you pass to `serve`. **Args:** * `tasks`: A list of tasks to serve. When a scheduled task run is found for a given task, the task run will be submitted to the engine for execution. * `limit`: The maximum number of tasks that can be run concurrently. Defaults to 10. Pass `None` to remove the limit. * `status_server_port`: An optional port on which to start an HTTP server exposing status information about the task worker. If not provided, no status server will run. * `timeout`: If provided, the task worker will exit after the given number of seconds. Defaults to None, meaning the task worker will run indefinitely. ### `serve` ```python theme={null} serve(*tasks: Task[P, R]) -> None ``` Serve the provided tasks so that their runs may be submitted to and executed in the engine. Tasks do not need to be within a flow run context to be submitted. You must `.submit` the same task object that you pass to `serve`. **Args:** * `tasks`: A list of tasks to serve. When a scheduled task run is found for a given task, the task run will be submitted to the engine for execution. * `limit`: The maximum number of tasks that can be run concurrently. Defaults to 10. Pass `None` to remove the limit.
* `status_server_port`: An optional port on which to start an HTTP server exposing status information about the task worker. If not provided, no status server will run. * `timeout`: If provided, the task worker will exit after the given number of seconds. Defaults to None, meaning the task worker will run indefinitely. ### `store_parameters` ```python theme={null} store_parameters(result_store: ResultStore, identifier: UUID, parameters: dict[str, Any]) -> None ``` Store parameters for a task run in the result store. **Args:** * `result_store`: The result store to store the parameters in. * `identifier`: The identifier of the task run. * `parameters`: The parameters to store. ### `read_parameters` ```python theme={null} read_parameters(result_store: ResultStore, identifier: UUID) -> dict[str, Any] ``` Read parameters for a task run from the result store. **Args:** * `result_store`: The result store to read the parameters from. * `identifier`: The identifier of the task run. **Returns:** * The parameters for the task run. ## Classes ### `StopTaskWorker` Raised when the task worker is stopped. ### `TaskWorker` This class is responsible for serving tasks that may be executed in the background by a task runner via the traditional engine machinery. When `start()` is called, the task worker will open a websocket connection to a server-side queue of scheduled task runs. When a scheduled task run is found, the scheduled task run is submitted to the engine for execution with a minimal `EngineContext` so that the task run can be governed by orchestration rules. **Args:** * `tasks`: A list of tasks to serve. These tasks will be submitted to the engine when a scheduled task run is found. * `limit`: The maximum number of tasks that can be run concurrently. Defaults to 10. Pass `None` to remove the limit.
**Methods:** #### `astart` ```python theme={null} astart(self, timeout: Optional[float] = None) -> None ``` Starts a task worker, which runs the tasks provided in the constructor. **Args:** * `timeout`: If provided, the task worker will exit after the given number of seconds. Defaults to None, meaning the task worker will run indefinitely. #### `astop` ```python theme={null} astop(self) -> None ``` Stops the task worker's polling cycle. #### `available_tasks` ```python theme={null} available_tasks(self) -> Optional[int] ``` #### `client_id` ```python theme={null} client_id(self) -> str ``` #### `current_tasks` ```python theme={null} current_tasks(self) -> Optional[int] ``` #### `execute_task_run` ```python theme={null} execute_task_run(self, task_run: TaskRun) -> None ``` Execute a task run in the task worker. #### `handle_sigterm` ```python theme={null} handle_sigterm(self, signum: int, frame: object) -> None ``` Shuts down the task worker when a SIGTERM is received. #### `limit` ```python theme={null} limit(self) -> Optional[int] ``` #### `start` ```python theme={null} start(self, timeout: Optional[float] = None) -> None ``` Starts a task worker, which runs the tasks provided in the constructor. **Args:** * `timeout`: If provided, the task worker will exit after the given number of seconds. Defaults to None, meaning the task worker will run indefinitely. #### `started` ```python theme={null} started(self) -> bool ``` #### `started_at` ```python theme={null} started_at(self) -> Optional[DateTime] ``` #### `stop` ```python theme={null} stop(self) -> None ``` Stops the task worker's polling cycle. # tasks Source: https://docs.prefect.io/v3/api-ref/python/prefect-tasks # `prefect.tasks` Module containing the base workflow task class and decorator - for most use cases, using the `@task` decorator is preferred. 
## Functions ### `task_input_hash` ```python theme={null} task_input_hash(context: 'TaskRunContext', arguments: dict[str, Any]) -> Optional[str] ``` A task cache key implementation which hashes all inputs to the task using a JSON or cloudpickle serializer. If any arguments are not JSON serializable, the pickle serializer is used as a fallback. If cloudpickle fails, this will return a null key indicating that a cache key could not be generated for the given inputs. **Args:** * `context`: the active `TaskRunContext` * `arguments`: a dictionary of arguments to be passed to the underlying task **Returns:** * a string hash if hashing succeeded, else `None` ### `exponential_backoff` ```python theme={null} exponential_backoff(backoff_factor: float) -> Callable[[int], list[float]] ``` A task retry backoff utility that configures exponential backoff for task retries. The exponential backoff design matches the urllib3 implementation. **Args:** * `backoff_factor`: the base delay for the first retry, subsequent retries will increase the delay time by powers of 2. **Returns:** * a callable that can be passed to the task constructor ### `task` ```python theme={null} task(__fn: Optional[Callable[P, R]] = None) ``` Decorator to designate a function as a task in a Prefect workflow. This decorator may be used for asynchronous or synchronous functions. **Args:** * `name`: An optional name for the task; if not provided, the name will be inferred from the given function. * `description`: An optional string description for the task. * `tags`: An optional set of tags to be associated with runs of this task. These tags are combined with any tags defined by a `prefect.tags` context at task runtime. 
* `version`: An optional string specifying the version of this task definition * `cache_key_fn`: An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. * `cache_expiration`: An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. * `task_run_name`: An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. * `retries`: An optional number of times to retry on task run failure * `retry_delay_seconds`: Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. * `retry_jitter_factor`: An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd". * `persist_result`: A toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, which indicates that the global default should be used (which is `True` by default). * `result_storage`: An optional block to use to persist the result of this task. This can be either a saved block instance or a string reference (e.g., "local-file-system/my-storage"). Block instances must have `.save()` called first since decorators execute at import time. 
String references are resolved at runtime and recommended for testing scenarios. Defaults to the value set in the flow the task is called in. * `result_storage_key`: An optional key to store the result in storage at when persisted. Defaults to a unique identifier. * `result_serializer`: An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. * `timeout_seconds`: An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. * `log_prints`: If set, `print` statements in the task will be redirected to the Prefect logger for the task run. Defaults to `None`, which indicates that the value from the flow should be used. * `refresh_cache`: If set, cached results for the cache key are not used. Defaults to `None`, which indicates that a cached result from a previous execution with matching cache key is used. * `on_failure`: An optional list of callables to run when the task enters a failed state. * `on_completion`: An optional list of callables to run when the task enters a completed state. * `retry_condition_fn`: An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. * `viz_return_value`: An optional value to return when the task dependency tree is visualized. * `asset_deps`: An optional list of upstream assets that this task depends on. **Returns:** * A callable `Task` object which, when called, will submit the task for execution. 
**Examples:** Define a simple task ```python theme={null} @task def add(x, y): return x + y ``` Define an async task ```python theme={null} @task async def add(x, y): return x + y ``` Define a task with tags and a description ```python theme={null} @task(tags={"a", "b"}, description="This task is empty but it's my first!") def my_task(): pass ``` Define a task with a custom name ```python theme={null} @task(name="The Ultimate Task") def my_task(): pass ``` Define a task that retries 3 times with a 5 second delay between attempts ```python theme={null} from random import randint @task(retries=3, retry_delay_seconds=5) def my_task(): x = randint(0, 5) if x >= 3: # Make a task that fails sometimes raise ValueError("Retry me please!") return x ``` Define a task that is cached for a day based on its inputs ```python theme={null} from prefect.tasks import task_input_hash from datetime import timedelta @task(cache_key_fn=task_input_hash, cache_expiration=timedelta(days=1)) def my_task(): return "hello" ``` ## Classes ### `TaskRunNameCallbackWithParameters` **Methods:** #### `is_callback_with_parameters` ```python theme={null} is_callback_with_parameters(cls, callable: Callable[..., str]) -> TypeIs[Self] ``` ### `TaskOptions` A TypedDict representing all available task configuration options. This can be used with `Unpack` to provide type hints for \*\*kwargs. ### `Task` A Prefect task definition. Wraps a function with an entrypoint to the Prefect engine. Calling this class within a flow function creates a new task run. To preserve the input and output types, we use the generic type variables P and R for "Parameters" and "Returns" respectively. **Args:** * `fn`: The function defining the task. * `name`: An optional name for the task; if not provided, the name will be inferred from the given function. * `description`: An optional string description for the task. * `tags`: An optional set of tags to be associated with runs of this task.
These tags are combined with any tags defined by a `prefect.tags` context at task runtime. * `version`: An optional string specifying the version of this task definition * `cache_policy`: A cache policy that determines the level of caching for this task * `cache_key_fn`: An optional callable that, given the task run context and call parameters, generates a string key; if the key matches a previous completed state, that state result will be restored instead of running the task again. * `cache_expiration`: An optional amount of time indicating how long cached states for this task should be restorable; if not provided, cached states will never expire. * `task_run_name`: An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. * `retries`: An optional number of times to retry on task run failure. * `retry_delay_seconds`: Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. * `retry_jitter_factor`: An optional factor that defines the factor to which a retry can be jittered in order to avoid a "thundering herd". * `persist_result`: A toggle indicating whether the result of this task should be persisted to result storage. Defaults to `None`, which indicates that the global default should be used (which is `True` by default). * `result_storage`: An optional block to use to persist the result of this task. 
This can be either a saved block instance or a string reference (e.g., "local-file-system/my-storage"). Block instances must have `.save()` called first since decorators execute at import time. String references are resolved at runtime and recommended for testing scenarios. Defaults to the value set in the flow the task is called in. * `result_storage_key`: An optional key to store the result in storage at when persisted. Defaults to a unique identifier. * `result_serializer`: An optional serializer to use to serialize the result of this task for persistence. Defaults to the value set in the flow the task is called in. * `timeout_seconds`: An optional number of seconds indicating a maximum runtime for the task. If the task exceeds this runtime, it will be marked as failed. * `log_prints`: If set, `print` statements in the task will be redirected to the Prefect logger for the task run. Defaults to `None`, which indicates that the value from the flow should be used. * `refresh_cache`: If set, cached results for the cache key are not used. Defaults to `None`, which indicates that a cached result from a previous execution with matching cache key is used. * `on_failure`: An optional list of callables to run when the task enters a failed state. * `on_completion`: An optional list of callables to run when the task enters a completed state. * `on_commit`: An optional list of callables to run when the task's idempotency record is committed. * `on_rollback`: An optional list of callables to run when the task rolls back. * `retry_condition_fn`: An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy (e.g. `retries=3`), and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. * `viz_return_value`: An optional value to return when the task dependency tree is visualized. 
* `asset_deps`: An optional list of upstream assets that this task depends on. **Methods:** #### `apply_async` ```python theme={null} apply_async(self, args: Optional[tuple[Any, ...]] = None, kwargs: Optional[dict[str, Any]] = None, wait_for: Optional[Iterable[PrefectFuture[R]]] = None, dependencies: Optional[dict[str, set[RunInput]]] = None) -> PrefectDistributedFuture[R] ``` Create a pending task run for a task worker to execute. **Args:** * `args`: Arguments to run the task with * `kwargs`: Keyword arguments to run the task with * `wait_for`: Upstream task futures to wait for before starting the task **Returns:** * A PrefectDistributedFuture object representing the pending task run Examples: Define a task ```python theme={null} from prefect import task @task def my_task(name: str = "world"): return f"hello {name}" ``` Create a pending task run for the task ```python theme={null} from prefect import flow @flow def my_flow(): my_task.apply_async(("marvin",)) ``` Wait for a task to finish ```python theme={null} @flow def my_flow(): my_task.apply_async(("marvin",)).wait() ``` Use the result from a task in a flow ```python theme={null} @flow def my_flow(): print(my_task.apply_async(("marvin",)).result()) my_flow() # hello marvin ``` Enforce ordering between tasks that do not exchange data ```python theme={null} @task def task_1(): pass @task def task_2(): pass @flow def my_flow(): x = task_1.apply_async() # task 2 will wait for task_1 to complete y = task_2.apply_async(wait_for=[x]) ``` #### `aserve` ```python theme={null} aserve(self) -> NoReturn ``` Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute. This is the async version of serve(). **Args:** * `task_runner`: The task runner to use for serving the task. If not provided, the default task runner will be used. 
**Examples:** Serve a task using the default task runner in an async context ```python theme={null} @task def my_task(): return 1 await my_task.aserve() ``` #### `create_local_run` ```python theme={null} create_local_run(self, client: Optional['PrefectClient'] = None, id: Optional[UUID] = None, parameters: Optional[dict[str, Any]] = None, flow_run_context: Optional[FlowRunContext] = None, parent_task_run_context: Optional[TaskRunContext] = None, wait_for: Optional[OneOrManyFutureOrResult[Any]] = None, extra_task_inputs: Optional[dict[str, set[RunInput]]] = None, deferred: bool = False) -> TaskRun ``` #### `create_run` ```python theme={null} create_run(self, client: Optional['PrefectClient'] = None, id: Optional[UUID] = None, parameters: Optional[dict[str, Any]] = None, flow_run_context: Optional[FlowRunContext] = None, parent_task_run_context: Optional[TaskRunContext] = None, wait_for: Optional[OneOrManyFutureOrResult[Any]] = None, extra_task_inputs: Optional[dict[str, set[RunInput]]] = None, deferred: bool = False) -> TaskRun ``` #### `delay` ```python theme={null} delay(self, *args: P.args, **kwargs: P.kwargs) -> PrefectDistributedFuture[R] ``` An alias for `apply_async` with simpler calling semantics. Avoids having to use explicit "args" and "kwargs" arguments. Arguments will pass through as-is to the task. 
Examples: Define a task ```python theme={null} from prefect import task @task def my_task(name: str = "world"): return f"hello {name}" ``` Create a pending task run for the task ```python theme={null} from prefect import flow @flow def my_flow(): my_task.delay("marvin") ``` Wait for a task to finish ```python theme={null} @flow def my_flow(): my_task.delay("marvin").wait() ``` Use the result from a task in a flow ```python theme={null} @flow def my_flow(): print(my_task.delay("marvin").result()) my_flow() # hello marvin ``` #### `isclassmethod` ```python theme={null} isclassmethod(self) -> bool ``` #### `ismethod` ```python theme={null} ismethod(self) -> bool ``` #### `isstaticmethod` ```python theme={null} isstaticmethod(self) -> bool ``` #### `map` ```python theme={null} map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> list[State[R]] ``` #### `map` ```python theme={null} map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> PrefectFutureList[R] ``` #### `map` ```python theme={null} map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> list[State[R]] ``` #### `map` ```python theme={null} map(self: 'Task[P, R]', *args: Any, **kwargs: Any) -> PrefectFutureList[R] ``` #### `map` ```python theme={null} map(self: 'Task[P, Coroutine[Any, Any, R]]', *args: Any, **kwargs: Any) -> list[State[R]] ``` #### `map` ```python theme={null} map(self: 'Task[P, Coroutine[Any, Any, R]]', *args: Any, **kwargs: Any) -> PrefectFutureList[R] ``` #### `map` ```python theme={null} map(self, *args: Any, **kwargs: Any) -> Union[list[State[R]], PrefectFutureList[R]] ``` Submit a mapped run of the task to a worker. Must be called within a flow run context. Will return a list of futures that should be waited on before exiting the flow context to ensure all mapped tasks have completed. Must be called with at least one iterable and all iterables must be the same length. Any arguments that are not iterable will be treated as a static value and each task run will receive the same value. 
Will create as many task runs as the length of the iterable(s) in the backing API and submit the task runs to the flow's task runner. This call blocks if given a future as input while the future is resolved. It also blocks while the tasks are being submitted; once they are submitted, the flow function will continue executing. This method is always synchronous, even if the underlying user function is asynchronous. **Args:** * `*args`: Iterable and static arguments to run the tasks with * `return_state`: Return a list of Prefect States that wrap the results of each task run. * `wait_for`: Upstream task futures to wait for before starting the task * `**kwargs`: Keyword iterable arguments to run the task with **Returns:** * A list of futures allowing asynchronous access to the state of the tasks Examples: Define a task ```python theme={null} from prefect import task @task def my_task(x): return x + 1 ``` Create mapped tasks ```python theme={null} from prefect import flow @flow def my_flow(): return my_task.map([1, 2, 3]) ``` Wait for all mapped tasks to finish ```python theme={null} @flow def my_flow(): futures = my_task.map([1, 2, 3]) futures.wait() # Now all of the mapped tasks have finished my_task(10) ``` Use the result from mapped tasks in a flow ```python theme={null} @flow def my_flow(): futures = my_task.map([1, 2, 3]) for x in futures.result(): print(x) my_flow() # 2 # 3 # 4 ``` Enforce ordering between tasks that do not exchange data ```python theme={null} @task def task_1(): pass @task def task_2(y): pass @flow def my_flow(): x = task_1.submit() # task 2 will wait for task_1 to complete y = task_2.map([1, 2, 3], wait_for=[x]) return y ``` Use a non-iterable input as a constant across mapped tasks ```python theme={null} @task def display(prefix, item): print(prefix, item) @flow def my_flow(): return display.map("Check it out: ", [1, 2, 3]) my_flow() # Check it out: 1 # Check it out: 2 # Check it out: 3 ``` Use `unmapped` to treat an iterable argument as a 
constant ```python theme={null} from prefect import unmapped @task def add_n_to_items(items, n): return [item + n for item in items] @flow def my_flow(): return add_n_to_items.map(unmapped([10, 20]), n=[1, 2, 3]) my_flow() # [[11, 21], [12, 22], [13, 23]] ``` #### `on_commit` ```python theme={null} on_commit(self, fn: Callable[['Transaction'], None]) -> Callable[['Transaction'], None] ``` #### `on_completion` ```python theme={null} on_completion(self, fn: StateHookCallable) -> StateHookCallable ``` #### `on_failure` ```python theme={null} on_failure(self, fn: StateHookCallable) -> StateHookCallable ``` #### `on_rollback` ```python theme={null} on_rollback(self, fn: Callable[['Transaction'], None]) -> Callable[['Transaction'], None] ``` #### `on_running` ```python theme={null} on_running(self, fn: StateHookCallable) -> StateHookCallable ``` #### `serve` ```python theme={null} serve(self) -> NoReturn ``` Serve the task using the provided task runner. This method is used to establish a websocket connection with the Prefect server and listen for submitted task runs to execute. **Args:** * `task_runner`: The task runner to use for serving the task. If not provided, the default task runner will be used. 
**Examples:** Serve a task using the default task runner ```python theme={null} @task def my_task(): return 1 my_task.serve() ``` #### `submit` ```python theme={null} submit(self: 'Task[P, R]', *args: P.args, **kwargs: P.kwargs) -> PrefectFuture[R] ``` #### `submit` ```python theme={null} submit(self: 'Task[P, Coroutine[Any, Any, R]]', *args: P.args, **kwargs: P.kwargs) -> PrefectFuture[R] ``` #### `submit` ```python theme={null} submit(self: 'Task[P, R]', *args: P.args, **kwargs: P.kwargs) -> PrefectFuture[R] ``` #### `submit` ```python theme={null} submit(self: 'Task[P, Coroutine[Any, Any, R]]', *args: P.args, **kwargs: P.kwargs) -> State[R] ``` #### `submit` ```python theme={null} submit(self: 'Task[P, R]', *args: P.args, **kwargs: P.kwargs) -> State[R] ``` #### `submit` ```python theme={null} submit(self: 'Union[Task[P, R], Task[P, Coroutine[Any, Any, R]]]', *args: Any, **kwargs: Any) ``` Submit a run of the task to the engine. Will create a new task run in the backing API and submit the task to the flow's task runner. This call only blocks execution while the task is being submitted; once it is submitted, the flow function will continue executing. This method is always synchronous, even if the underlying user function is asynchronous. **Args:** * `*args`: Arguments to run the task with * `return_state`: Return the result of the task run wrapped in a Prefect State. 
* `wait_for`: Upstream task futures to wait for before starting the task * `**kwargs`: Keyword arguments to run the task with **Returns:** * If `return_state` is False a future allowing asynchronous access to the state of the task * If `return_state` is True a future wrapped in a Prefect State allowing asynchronous access to the state of the task Examples: Define a task ```python theme={null} from prefect import task @task def my_task(): return "hello" ``` Run a task in a flow ```python theme={null} from prefect import flow @flow def my_flow(): my_task.submit() ``` Wait for a task to finish ```python theme={null} @flow def my_flow(): my_task.submit().wait() ``` Use the result from a task in a flow ```python theme={null} @flow def my_flow(): print(my_task.submit().result()) my_flow() # hello ``` Run an async task in an async flow ```python theme={null} @task async def my_async_task(): pass @flow async def my_flow(): my_async_task.submit() ``` Run a sync task in an async flow ```python theme={null} @flow async def my_flow(): my_task.submit() ``` Enforce ordering between tasks that do not exchange data ```python theme={null} @task def task_1(): pass @task def task_2(): pass @flow def my_flow(): x = task_1.submit() # task 2 will wait for task_1 to complete y = task_2.submit(wait_for=[x]) ``` #### `with_options` ```python theme={null} with_options(self) -> 'Task[P, R]' ``` Create a new task from the current object, updating provided options. **Args:** * `name`: A new name for the task. * `description`: A new description for the task. * `tags`: A new set of tags for the task. If given, existing tags are ignored, not merged. * `cache_key_fn`: A new cache key function for the task. * `cache_expiration`: A new cache expiration time for the task. * `task_run_name`: An optional name to distinguish runs of this task; this name can be provided as a string template with the task's keyword arguments as variables, or a function that returns a string. 
* `retries`: A new number of times to retry on task run failure. * `retry_delay_seconds`: Optionally configures how long to wait before retrying the task after failure. This is only applicable if `retries` is nonzero. This setting can either be a number of seconds, a list of retry delays, or a callable that, given the total number of retries, generates a list of retry delays. If a number of seconds, that delay will be applied to all retries. If a list, each retry will wait for the corresponding delay before retrying. When passing a callable or a list, the number of configured retry delays cannot exceed 50. * `retry_jitter_factor`: An optional factor controlling how much a retry delay can be jittered in order to avoid a "thundering herd". * `persist_result`: A new option for enabling or disabling result persistence. * `result_storage`: A new storage type to use for results. * `result_serializer`: A new serializer to use for results. * `result_storage_key`: A new key for the persisted result to be stored at. * `timeout_seconds`: A new maximum time for the task to complete in seconds. * `log_prints`: A new option for enabling or disabling redirection of `print` statements. * `refresh_cache`: A new option for enabling or disabling cache refresh. * `on_completion`: A new list of callables to run when the task enters a completed state. * `on_failure`: A new list of callables to run when the task enters a failed state. * `retry_condition_fn`: An optional callable run when a task run returns a Failed state. Should return `True` if the task should continue to its retry policy, and `False` if the task should end as failed. Defaults to `None`, indicating the task should always continue to its retry policy. * `viz_return_value`: An optional value to return when the task dependency tree is visualized. **Returns:** * A new `Task` instance. 
Examples: Create a new task from an existing task and update the name: ```python theme={null} @task(name="My task") def my_task(): return 1 new_task = my_task.with_options(name="My new task") ``` Create a new task from an existing task and update the retry settings: ```python theme={null} from random import randint @task(retries=1, retry_delay_seconds=5) def my_task(): x = randint(0, 5) if x >= 3: # Make a task that fails sometimes raise ValueError("Retry me please!") return x new_task = my_task.with_options(retries=5, retry_delay_seconds=2) ``` Use a task with updated options within a flow: ```python theme={null} @task(name="My task") def my_task(): return 1 @flow def my_flow(): new_task = my_task.with_options(name="My new task") new_task() ``` ### `MaterializingTask` A task that materializes Assets. **Args:** * `assets`: List of Assets that this task materializes (can be str or Asset) * `materialized_by`: An optional tool that materialized the asset, e.g. "dbt" or "spark" * `**task_kwargs`: All other Task arguments **Methods:** #### `with_options` ```python theme={null} with_options(self, assets: Optional[Sequence[Union[str, Asset]]] = None, **task_kwargs: Unpack[TaskOptions]) -> 'MaterializingTask[P, R]' ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-telemetry-__init__ # `prefect.telemetry` *This module is empty or contains only private/internal implementations.* # run_telemetry Source: https://docs.prefect.io/v3/api-ref/python/prefect-telemetry-run_telemetry # `prefect.telemetry.run_telemetry` ## Classes ### `OTELSetter` A setter for OpenTelemetry that supports Prefect's custom labels. **Methods:** #### `set` ```python theme={null} set(self, carrier: KeyValueLabels, key: str, value: str) -> None ``` ### `RunTelemetry` A class for managing the telemetry of runs. 
**Methods:** #### `async_start_span` ```python theme={null} async_start_span(self, run: FlowOrTaskRun, client: PrefectClient, parameters: dict[str, Any] | None = None) -> Span | None ``` #### `end_span_on_failure` ```python theme={null} end_span_on_failure(self, terminal_message: str | None = None) -> None ``` End a span for a run on failure. #### `end_span_on_success` ```python theme={null} end_span_on_success(self) -> None ``` End a span for a run on success. #### `record_exception` ```python theme={null} record_exception(self, exc: BaseException) -> None ``` Record an exception on a span. #### `start_span` ```python theme={null} start_span(self, run: FlowOrTaskRun, client: SyncPrefectClient, parameters: dict[str, Any] | None = None) -> Span | None ``` #### `traceparent_from_span` ```python theme={null} traceparent_from_span(span: Span) -> str | None ``` #### `update_run_name` ```python theme={null} update_run_name(self, name: str) -> None ``` Update the name of the run. #### `update_state` ```python theme={null} update_state(self, new_state: State) -> None ``` Update a span with the state of a run. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-testing-__init__ # `prefect.testing` *This module is empty or contains only private/internal implementations.* # cli Source: https://docs.prefect.io/v3/api-ref/python/prefect-testing-cli # `prefect.testing.cli` ## Functions ### `check_contains` ```python theme={null} check_contains(cli_result: CycloptsResult, content: str, should_contain: bool) -> None ``` Utility function to see if content is or is not in a CLI result. 
**Args:** * `should_contain`: if True, checks that content is in cli\_result; if False, checks that content is not in cli\_result ### `invoke_and_assert` ```python theme={null} invoke_and_assert(command: str | list[str], user_input: str | None = None, prompts_and_responses: list[tuple[str, str] | tuple[str, str, str]] | None = None, expected_output: str | None = None, expected_output_contains: str | Iterable[str] | None = None, expected_output_does_not_contain: str | Iterable[str] | None = None, expected_line_count: int | None = None, expected_code: int | None = 0, echo: bool = True, temp_dir: str | None = None) -> CycloptsResult ``` Test utility for the Prefect CLI application. Uses CycloptsCliRunner for in-process invocation with proper I/O isolation. **Args:** * `command`: Command-line arguments (string or list of strings). * `user_input`: Simulated stdin for interactive commands. * `prompts_and_responses`: List of (prompt, response\[, selected\_option]) tuples for interactive commands. * `expected_output`: Assert exact match with CLI output. * `expected_output_contains`: Assert CLI output contains this string or each string in the iterable. * `expected_output_does_not_contain`: Assert CLI output does not contain this string or any string in the iterable. * `expected_line_count`: Assert the number of output lines. * `expected_code`: Expected exit code (default 0). * `echo`: Print CLI output for debugging (default True). * `temp_dir`: Run the command in this directory. ### `temporary_console_width` ```python theme={null} temporary_console_width(console: Console, width: int) ``` ## Classes ### `CycloptsResult` Result of a cyclopts CLI invocation. Compatible with typer's Result so existing invoke\_and\_assert callers can work with either runner without changes. ### `CycloptsCliRunner` In-process test runner for the cyclopts CLI. 
Analogous to Click's CliRunner: captures stdout/stderr, simulates stdin, emulates a TTY for Rich Console interactive mode, and isolates global state between invocations. Design principles: * Use a TTY-emulating StringIO as sys.stdout so that Rich Console instances (which resolve sys.stdout dynamically via their `file` property) write to our capture buffer AND report is\_interactive=True. * Redirect sys.stdin to a StringIO for prompt input. * Save and restore all mutated global state (sys.stdout/stderr/stdin, os.environ\["COLUMNS"], the cyclopts app's global console) in a try/finally block. * Catch SystemExit to extract exit codes without terminating the process. Not thread-safe (mutates interpreter globals), but safe with pytest-xdist which forks separate worker processes. **Methods:** #### `invoke` ```python theme={null} invoke(self, args: str | list[str], input: str | None = None) -> CycloptsResult ``` Invoke the cyclopts CLI with the given arguments. **Args:** * `args`: Command-line arguments (e.g. \["config", "view"]). * `input`: Simulated stdin content for interactive prompts. **Returns:** * CycloptsResult with captured stdout, stderr, exit\_code, and * any exception that occurred. 
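The capture-and-restore pattern described above (redirect stdout/stdin, catch `SystemExit` to extract the exit code, restore globals in a `try`/`finally`) can be sketched in plain Python. This is a minimal illustration of the same idea, not Prefect's actual implementation; the `run_cli` helper, `Result` class, and `hello_cli` entrypoint are hypothetical names for this sketch:

```python theme={null}
import io
import sys
from dataclasses import dataclass
from typing import Callable


@dataclass
class Result:
    # Hypothetical stand-in for CycloptsResult: captured output and exit code.
    stdout: str
    exit_code: int


def run_cli(entrypoint: Callable[[], None], stdin_text: str = "") -> Result:
    """Invoke a CLI entrypoint in-process, capturing stdout and SystemExit."""
    saved_out, saved_in = sys.stdout, sys.stdin
    sys.stdout, sys.stdin = io.StringIO(), io.StringIO(stdin_text)
    exit_code = 0
    try:
        try:
            entrypoint()
        except SystemExit as exc:
            # Extract the exit code instead of terminating the process.
            exit_code = int(exc.code or 0)
        return Result(stdout=sys.stdout.getvalue(), exit_code=exit_code)
    finally:
        # Restore mutated global state even if the entrypoint raised.
        sys.stdout, sys.stdin = saved_out, saved_in


def hello_cli() -> None:
    # A toy interactive command: read a name from stdin, print a greeting.
    name = input()
    print(f"hello {name}")
    raise SystemExit(0)
```

A real runner (like `CycloptsCliRunner`) additionally emulates a TTY for Rich's interactive mode and resets library-level console state, but the save/redirect/catch/restore skeleton is the same.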
# docker Source: https://docs.prefect.io/v3/api-ref/python/prefect-testing-docker # `prefect.testing.docker` ## Functions ### `capture_builders` ```python theme={null} capture_builders() -> Generator[list[ImageBuilder], None, None] ``` Captures any instances of ImageBuilder created while this context is active # fixtures Source: https://docs.prefect.io/v3/api-ref/python/prefect-testing-fixtures # `prefect.testing.fixtures` ## Functions ### `add_prefect_loggers_to_caplog` ```python theme={null} add_prefect_loggers_to_caplog(caplog: pytest.LogCaptureFixture) -> Generator[None, None, None] ``` ### `is_port_in_use` ```python theme={null} is_port_in_use(port: int) -> bool ``` ### `hosted_api_server` ```python theme={null} hosted_api_server(unused_tcp_port_factory: Callable[[], int], test_database_connection_url: Optional[str]) -> AsyncGenerator[str, None] ``` Runs an instance of the Prefect API server in a subprocess instead of using the ephemeral application. Uses the same database as the rest of the tests. ### `use_hosted_api_server` ```python theme={null} use_hosted_api_server(hosted_api_server: str) -> Generator[str, None, None] ``` Sets `PREFECT_API_URL` to the test session's hosted API endpoint. ### `disable_hosted_api_server` ```python theme={null} disable_hosted_api_server() -> Generator[None, None, None] ``` Disables the hosted API server by setting `PREFECT_API_URL` to `None`. ### `enable_ephemeral_server` ```python theme={null} enable_ephemeral_server(disable_hosted_api_server: None) -> Generator[None, None, None] ``` Enables the ephemeral server by setting `PREFECT_SERVER_ALLOW_EPHEMERAL_MODE` to `True`. ### `mock_anyio_sleep` ```python theme={null} mock_anyio_sleep(monkeypatch: pytest.MonkeyPatch) -> Generator[Callable[[float], None], None, None] ``` Mock sleep used to not actually sleep but to set the current time to now + sleep delay seconds while still yielding to other tasks in the event loop. 
Provides "assert\_sleeps\_for" context manager which asserts a sleep time occurred within the context while using the actual runtime of the context as a tolerance. ### `recorder` ```python theme={null} recorder() -> Recorder ``` ### `puppeteer` ```python theme={null} puppeteer() -> Puppeteer ``` ### `events_server` ```python theme={null} events_server(unused_tcp_port: int, recorder: Recorder, puppeteer: Puppeteer) -> AsyncGenerator[Server, None] ``` ### `events_api_url` ```python theme={null} events_api_url(events_server: Server, unused_tcp_port: int) -> str ``` ### `events_cloud_api_url` ```python theme={null} events_cloud_api_url(events_server: Server, unused_tcp_port: int) -> str ``` ### `mock_should_emit_events` ```python theme={null} mock_should_emit_events(monkeypatch: pytest.MonkeyPatch) -> mock.Mock ``` ### `asserting_events_worker` ```python theme={null} asserting_events_worker(monkeypatch: pytest.MonkeyPatch) -> Generator[EventsWorker, None, None] ``` ### `asserting_and_emitting_events_worker` ```python theme={null} asserting_and_emitting_events_worker(monkeypatch: pytest.MonkeyPatch) -> Generator[EventsWorker, None, None] ``` ### `events_pipeline` ```python theme={null} events_pipeline(asserting_events_worker: EventsWorker) -> AsyncGenerator[EventsPipeline, None] ``` ### `emitting_events_pipeline` ```python theme={null} emitting_events_pipeline(asserting_and_emitting_events_worker: EventsWorker) -> AsyncGenerator[EventsPipeline, None] ``` ### `reset_worker_events` ```python theme={null} reset_worker_events(asserting_events_worker: EventsWorker) -> Generator[None, None, None] ``` ## Classes ### `Recorder` ### `Puppeteer` # standard_test_suites Source: https://docs.prefect.io/v3/api-ref/python/prefect-testing-standard_test_suites # `prefect.testing.standard_test_suites` *This module is empty or contains only private/internal implementations.* # utilities Source: https://docs.prefect.io/v3/api-ref/python/prefect-testing-utilities # 
`prefect.testing.utilities` Internal utilities for tests. ## Functions ### `exceptions_equal` ```python theme={null} exceptions_equal(a: Exception, b: Exception) -> bool ``` Exceptions cannot be compared by `==`. They can be compared using `is`, but this will fail if the exception is serialized/deserialized, so this utility does its best to assert equality using the type and args used to initialize the exception ### `kubernetes_environments_equal` ```python theme={null} kubernetes_environments_equal(actual: list[dict[str, str]], expected: list[dict[str, str]] | dict[str, str]) -> bool ``` ### `assert_does_not_warn` ```python theme={null} assert_does_not_warn(ignore_warnings: list[type[Warning]] | None = None) -> Generator[None, None, None] ``` Converts warnings to errors within this context to assert warnings are not raised, except for those specified in ignore\_warnings. **Args:** * `ignore_warnings`: List of warning types to ignore. Example: \[DeprecationWarning, UserWarning] ### `prefect_test_harness` ```python theme={null} prefect_test_harness(server_startup_timeout: int | None = 30) ``` Temporarily run flows against a local SQLite database for testing. **Args:** * `server_startup_timeout`: The maximum time to wait for the server to start. Defaults to 30 seconds. If set to `None`, the value of `PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS` will be used. **Examples:** ```python theme={null} from prefect import flow from prefect.testing.utilities import prefect_test_harness @flow def my_flow(): return 'Done!' with prefect_test_harness(): assert my_flow() == 'Done!' 
# run against temporary db ``` ### `get_most_recent_flow_run` ```python theme={null} get_most_recent_flow_run(client: 'PrefectClient | None' = None, flow_name: str | None = None) -> 'FlowRun' ``` ### `assert_blocks_equal` ```python theme={null} assert_blocks_equal(found: Block, expected: Block, exclude_private: bool = True, **kwargs: Any) -> None ``` ### `assert_uses_result_serializer` ```python theme={null} assert_uses_result_serializer(state: State, serializer: str | Serializer, client: 'PrefectClient') -> None ``` ### `assert_uses_result_storage` ```python theme={null} assert_uses_result_storage(state: State, storage: 'str | ReadableFileSystem', client: 'PrefectClient') -> None ``` ### `a_test_step` ```python theme={null} a_test_step(**kwargs: Any) -> dict[str, Any] ``` ### `b_test_step` ```python theme={null} b_test_step(**kwargs: Any) -> dict[str, Any] ``` # transactions Source: https://docs.prefect.io/v3/api-ref/python/prefect-transactions # `prefect.transactions` ## Functions ### `get_transaction` ```python theme={null} get_transaction() -> BaseTransaction | None ``` ### `transaction` ```python theme={null} transaction(key: str | None = None, store: ResultStore | None = None, commit_mode: CommitMode | None = None, isolation_level: IsolationLevel | None = None, overwrite: bool = False, write_on_commit: bool = True, logger: logging.Logger | LoggingAdapter | None = None) -> Generator[Transaction, None, None] ``` A context manager for opening and managing a transaction. **Args:** * `key`: An identifier to use for the transaction * `store`: The store to use for persisting the transaction result. If not provided, a default store will be used based on the current run context. * `commit_mode`: The commit mode controlling when the transaction and child transactions are committed * `overwrite`: Whether to overwrite an existing transaction record in the store * `write_on_commit`: Whether to write the result to the store on commit. 
If not provided, the default will be determined by the current run context. If no run context is available, the value of `PREFECT_RESULTS_PERSIST_BY_DEFAULT` will be used. ### `atransaction` ```python theme={null} atransaction(key: str | None = None, store: ResultStore | None = None, commit_mode: CommitMode | None = None, isolation_level: IsolationLevel | None = None, overwrite: bool = False, write_on_commit: bool = True, logger: logging.Logger | LoggingAdapter | None = None) -> AsyncGenerator[AsyncTransaction, None] ``` An asynchronous context manager for opening and managing an asynchronous transaction. **Args:** * `key`: An identifier to use for the transaction * `store`: The store to use for persisting the transaction result. If not provided, a default store will be used based on the current run context. * `commit_mode`: The commit mode controlling when the transaction and child transactions are committed * `overwrite`: Whether to overwrite an existing transaction record in the store * `write_on_commit`: Whether to write the result to the store on commit. If not provided, the default will be determined by the current run context. If no run context is available, the value of `PREFECT_RESULTS_PERSIST_BY_DEFAULT` will be used. ## Classes ### `IsolationLevel` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `CommitMode` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `TransactionState` **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `BaseTransaction` A base model for transaction state. 
**Methods:** #### `add_child` ```python theme={null} add_child(self, transaction: Self) -> None ``` #### `get` ```python theme={null} get(self, name: str, default: Any = NotSet) -> Any ``` Get a stored value from the transaction. Child transactions will return values from their parents unless a value with the same name is set in the child transaction. Direct changes to returned values will not update the stored value. To update the stored value, use the `set` method. **Args:** * `name`: The name of the value to get * `default`: The default value to return if the value is not found **Returns:** * The value from the transaction **Examples:** Get a value from the transaction: ```python theme={null} with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` Get a value from a parent transaction: ```python theme={null} with transaction() as parent: parent.set("key", "parent_value") with transaction() as child: assert child.get("key") == "parent_value" ``` Update a stored value: ```python theme={null} with transaction() as txn: txn.set("key", [1, 2, 3]) value = txn.get("key") value.append(4) # Stored value is not updated until `.set` is called assert value == [1, 2, 3, 4] assert txn.get("key") == [1, 2, 3] txn.set("key", value) assert txn.get("key") == [1, 2, 3, 4] ``` #### `get` ```python theme={null} get(cls: type[Self]) -> Optional[Self] ``` Get the current context instance #### `get_active` ```python theme={null} get_active(cls: Type[Self]) -> Optional[Self] ``` #### `get_parent` ```python theme={null} get_parent(self) -> Self | None ``` #### `is_active` ```python theme={null} is_active(self) -> bool ``` #### `is_committed` ```python theme={null} is_committed(self) -> bool ``` #### `is_pending` ```python theme={null} is_pending(self) -> bool ``` #### `is_rolled_back` ```python theme={null} is_rolled_back(self) -> bool ``` #### `is_staged` ```python theme={null} is_staged(self) -> bool ``` #### `model_copy` ```python theme={null} 
model_copy(self: Self) -> Self ``` Duplicate the context model, optionally choosing which fields to include, exclude, or change. **Attributes:** * `include`: Fields to include in new model. * `exclude`: Fields to exclude from new model, as with values this takes precedence over include. * `update`: Values to change/add in the new model. Note: the data is not validated before creating the new model - you should trust this data. * `deep`: Set to `True` to make a deep copy of the model. **Returns:** * A new model instance. #### `prepare_transaction` ```python theme={null} prepare_transaction(self) -> None ``` Helper method to prepare transaction state and validate configuration. #### `serialize` ```python theme={null} serialize(self, include_secrets: bool = True) -> dict[str, Any] ``` Serialize the context model to a dictionary that can be pickled with cloudpickle. #### `set` ```python theme={null} set(self, name: str, value: Any) -> None ``` Set a stored value in the transaction. **Args:** * `name`: The name of the value to set * `value`: The value to set **Examples:** Set a value for use later in the transaction: ```python theme={null} with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` #### `stage` ```python theme={null} stage(self, value: Any, on_rollback_hooks: Optional[list[Callable[..., Any]]] = None, on_commit_hooks: Optional[list[Callable[..., Any]]] = None) -> None ``` Stage a value to be committed later. ### `Transaction` A model representing the state of a transaction. **Methods:** #### `add_child` ```python theme={null} add_child(self, transaction: Self) -> None ``` #### `begin` ```python theme={null} begin(self) -> None ``` #### `commit` ```python theme={null} commit(self) -> bool ``` #### `get` ```python theme={null} get(self, name: str, default: Any = NotSet) -> Any ``` Get a stored value from the transaction. 
Child transactions will return values from their parents unless a value with the same name is set in the child transaction. Direct changes to returned values will not update the stored value. To update the stored value, use the `set` method. **Args:** * `name`: The name of the value to get * `default`: The default value to return if the value is not found **Returns:** * The value from the transaction **Examples:** Get a value from the transaction: ```python theme={null} with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` Get a value from a parent transaction: ```python theme={null} with transaction() as parent: parent.set("key", "parent_value") with transaction() as child: assert child.get("key") == "parent_value" ``` Update a stored value: ```python theme={null} with transaction() as txn: txn.set("key", [1, 2, 3]) value = txn.get("key") value.append(4) # Stored value is not updated until `.set` is called assert value == [1, 2, 3, 4] assert txn.get("key") == [1, 2, 3] txn.set("key", value) assert txn.get("key") == [1, 2, 3, 4] ``` #### `get_active` ```python theme={null} get_active(cls: Type[Self]) -> Optional[Self] ``` #### `get_parent` ```python theme={null} get_parent(self) -> Self | None ``` #### `is_active` ```python theme={null} is_active(self) -> bool ``` #### `is_committed` ```python theme={null} is_committed(self) -> bool ``` #### `is_pending` ```python theme={null} is_pending(self) -> bool ``` #### `is_rolled_back` ```python theme={null} is_rolled_back(self) -> bool ``` #### `is_staged` ```python theme={null} is_staged(self) -> bool ``` #### `prepare_transaction` ```python theme={null} prepare_transaction(self) -> None ``` Helper method to prepare transaction state and validate configuration. 
#### `read` ```python theme={null} read(self) -> ResultRecord[Any] | None ``` #### `reset` ```python theme={null} reset(self) -> None ``` #### `rollback` ```python theme={null} rollback(self) -> bool ``` #### `run_hook` ```python theme={null} run_hook(self, hook: Callable[..., Any], hook_type: str) -> None ``` #### `set` ```python theme={null} set(self, name: str, value: Any) -> None ``` Set a stored value in the transaction. **Args:** * `name`: The name of the value to set * `value`: The value to set **Examples:** Set a value for use later in the transaction: ```python theme={null} with transaction() as txn: txn.set("key", "value") ... assert txn.get("key") == "value" ``` #### `stage` ```python theme={null} stage(self, value: Any, on_rollback_hooks: Optional[list[Callable[..., Any]]] = None, on_commit_hooks: Optional[list[Callable[..., Any]]] = None) -> None ``` Stage a value to be committed later. ### `AsyncTransaction` A model representing the state of an asynchronous transaction. **Methods:** #### `add_child` ```python theme={null} add_child(self, transaction: Self) -> None ``` #### `begin` ```python theme={null} begin(self) -> None ``` #### `commit` ```python theme={null} commit(self) -> bool ``` #### `get` ```python theme={null} get(self, name: str, default: Any = NotSet) -> Any ``` Get a stored value from the transaction. Child transactions will return values from their parents unless a value with the same name is set in the child transaction. Direct changes to returned values will not update the stored value. To update the stored value, use the `set` method. **Args:** * `name`: The name of the value to get * `default`: The default value to return if the value is not found **Returns:** * The value from the transaction **Examples:** Get a value from the transaction: ```python theme={null} with transaction() as txn: txn.set("key", "value") ... 
assert txn.get("key") == "value" ``` Get a value from a parent transaction: ```python theme={null} with transaction() as parent: parent.set("key", "parent_value") with transaction() as child: assert child.get("key") == "parent_value" ``` Update a stored value: ```python theme={null} with transaction() as txn: txn.set("key", [1, 2, 3]) value = txn.get("key") value.append(4) # Stored value is not updated until `.set` is called assert value == [1, 2, 3, 4] assert txn.get("key") == [1, 2, 3] txn.set("key", value) assert txn.get("key") == [1, 2, 3, 4] ``` #### `get_active` ```python theme={null} get_active(cls: Type[Self]) -> Optional[Self] ``` #### `get_parent` ```python theme={null} get_parent(self) -> Self | None ``` #### `is_active` ```python theme={null} is_active(self) -> bool ``` #### `is_committed` ```python theme={null} is_committed(self) -> bool ``` #### `is_pending` ```python theme={null} is_pending(self) -> bool ``` #### `is_rolled_back` ```python theme={null} is_rolled_back(self) -> bool ``` #### `is_staged` ```python theme={null} is_staged(self) -> bool ``` #### `prepare_transaction` ```python theme={null} prepare_transaction(self) -> None ``` Helper method to prepare transaction state and validate configuration. #### `read` ```python theme={null} read(self) -> ResultRecord[Any] | None ``` #### `reset` ```python theme={null} reset(self) -> None ``` #### `rollback` ```python theme={null} rollback(self) -> bool ``` #### `run_hook` ```python theme={null} run_hook(self, hook: Callable[..., Any], hook_type: str) -> None ``` #### `set` ```python theme={null} set(self, name: str, value: Any) -> None ``` Set a stored value in the transaction. **Args:** * `name`: The name of the value to set * `value`: The value to set **Examples:** Set a value for use later in the transaction: ```python theme={null} with transaction() as txn: txn.set("key", "value") ... 
assert txn.get("key") == "value" ``` #### `stage` ```python theme={null} stage(self, value: Any, on_rollback_hooks: Optional[list[Callable[..., Any]]] = None, on_commit_hooks: Optional[list[Callable[..., Any]]] = None) -> None ``` Stage a value to be committed later. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-types-__init__ # `prefect.types` ## Functions ### `check_variable_value` ```python theme={null} check_variable_value(value: object) -> object ``` ### `cast_none_to_empty_dict` ```python theme={null} cast_none_to_empty_dict(value: Any) -> dict[str, Any] ``` ### `validate_set_T_from_delim_string` ```python theme={null} validate_set_T_from_delim_string(value: Union[str, T, set[T], None], type_: Any, delim: str | None = None) -> set[T] ``` A "no-info" before-validator useful for parsing delimited env vars, e.g. `PREFECT_CLIENT_RETRY_EXTRA_CODES=429,502,503` -> `{429, 502, 503}` e.g. `PREFECT_CLIENT_RETRY_EXTRA_CODES=429` -> `{429}` ### `parse_retry_delay_input` ```python theme={null} parse_retry_delay_input(value: Any) -> Any ``` Parses various inputs (string, int, float, list) into a format suitable for TaskRetryDelaySeconds (int, float, list\[float], or None). Handles comma-separated strings for lists of delays. ### `convert_none_to_empty_dict` ```python theme={null} convert_none_to_empty_dict(v: Optional[KeyValueLabels]) -> KeyValueLabels ``` ## Classes ### `SecretDict` # entrypoint Source: https://docs.prefect.io/v3/api-ref/python/prefect-types-entrypoint # `prefect.types.entrypoint` ## Classes ### `EntrypointType` Enum representing an entrypoint type. File path entrypoints are in the format: `path/to/file.py:function_name`. Module path entrypoints are in the format: `path.to.module.function_name`.
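The two formats can be told apart by the presence of a `:` separator between the file and the function. A minimal sketch of that distinction (the `classify_entrypoint` helper below is hypothetical, not part of Prefect's API):

```python
def classify_entrypoint(entrypoint: str) -> str:
    # File path entrypoints carry a ":" between the file and the function
    # (`path/to/file.py:function_name`); module path entrypoints are plain
    # dotted paths (`path.to.module.function_name`).
    return "file_path" if ":" in entrypoint else "module_path"

print(classify_entrypoint("flows/etl.py:daily_refresh"))      # file_path
print(classify_entrypoint("my_project.flows.daily_refresh"))  # module_path
```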
# names Source: https://docs.prefect.io/v3/api-ref/python/prefect-types-names # `prefect.types.names` ## Functions ### `raise_on_name_alphanumeric_dashes_only` ```python theme={null} raise_on_name_alphanumeric_dashes_only(value: str | None, field_name: str = 'value') -> str | None ``` ### `raise_on_name_alphanumeric_underscores_only` ```python theme={null} raise_on_name_alphanumeric_underscores_only(value: str | None, field_name: str = 'value') -> str | None ``` ### `raise_on_name_alphanumeric_dashes_underscores_only` ```python theme={null} raise_on_name_alphanumeric_dashes_underscores_only(value: str, field_name: str = 'value') -> str ``` ### `non_emptyish` ```python theme={null} non_emptyish(value: str) -> str ``` ### `validate_uri` ```python theme={null} validate_uri(value: str) -> str ``` Validate that a string is a valid URI with lowercase protocol. ### `validate_valid_asset_key` ```python theme={null} validate_valid_asset_key(value: str) -> str ``` Validate asset key with character restrictions and length limit. # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-__init__ # `prefect.utilities` *This module is empty or contains only private/internal implementations.* # annotations Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-annotations # `prefect.utilities.annotations` ## Classes ### `BaseAnnotation` Base class for Prefect annotation types. Inherits from `tuple` for unpacking support in other tools. **Methods:** #### `rewrap` ```python theme={null} rewrap(self, value: T) -> Self ``` #### `unwrap` ```python theme={null} unwrap(self) -> T ``` ### `unmapped` Wrapper for iterables. Indicates that this input should be sent as-is to all runs created during a mapping operation instead of being split. ### `allow_failure` Wrapper for states or futures. Indicates that the upstream run for this input can be failed. Generally, Prefect will not allow a downstream run to start if any of its inputs are failed. 
This annotation allows you to opt into receiving a failed input downstream. If the input is from a failed run, the attached exception will be passed to your function. ### `quote` Simple wrapper to mark an expression as a different type so it will not be coerced by Prefect. For example, if you want to return a state from a flow without having the flow assume that state. `quote` will also instruct Prefect to skip introspection of the wrapped object when it is passed as a flow or task parameter. Parameter introspection can be a significant performance hit when the object is a large collection, e.g. a large dictionary or DataFrame, where each element needs to be visited. This will disable task dependency tracking for the wrapped object, but will likely improve performance.

```
from prefect import flow, task
from prefect.utilities.annotations import quote

@task
def my_task(df):
    ...

@flow
def my_flow(df):
    my_task(quote(df))
```

**Methods:** #### `unquote` ```python theme={null} unquote(self) -> T ``` ### `opaque` Wrapper for task inputs that resolves the top-level value but prevents recursive traversal into its contents. When a `PrefectFuture` (or `State`) is wrapped with `opaque`, Prefect will wait for the future and return its result, but will **not** walk into the resolved object looking for nested futures, states, or task-run inputs. This avoids the expensive CPU-bound traversal that `visit_collection` performs on large results (big dicts, DataFrames, etc.) while still preserving the ergonomic `.submit()` chaining pattern. Semantics compared with other annotations: * **No annotation** — resolve *and* recursively traverse (default). * `quote` — do **not** resolve, do **not** traverse. * `opaque` — resolve the top-level value, but do **not** traverse into its contents. ### `Quote` ### `NotSet` Singleton to distinguish `None` from a value that is not provided by the user. ### `freeze` Wrapper for parameters in deployments. Indicates that this parameter should be frozen in the UI and not editable when creating flow runs from this deployment.
Example:

```python theme={null}
from prefect import flow
from prefect.utilities.annotations import freeze

@flow
def my_flow(customer_id: str):
    ...  # flow logic

deployment = my_flow.deploy(parameters={"customer_id": freeze("customer123")})
```

**Methods:** #### `unfreeze` ```python theme={null} unfreeze(self) -> T ``` Return the unwrapped value. # asyncutils Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-asyncutils # `prefect.utilities.asyncutils` Utilities for interoperability with async functions and workers from various contexts. ## Functions ### `get_thread_limiter` ```python theme={null} get_thread_limiter() -> anyio.CapacityLimiter ``` ### `is_async_fn` ```python theme={null} is_async_fn(func: _SyncOrAsyncCallable[P, R]) -> TypeGuard[Callable[P, Coroutine[Any, Any, Any]]] ``` Returns `True` if a function returns a coroutine. See [https://github.com/microsoft/pyright/issues/2142](https://github.com/microsoft/pyright/issues/2142) for an example use ### `is_async_gen_fn` ```python theme={null} is_async_gen_fn(func: Callable[P, Any]) -> TypeGuard[Callable[P, AsyncGenerator[Any, Any]]] ``` Returns `True` if a function is an async generator. ### `create_task` ```python theme={null} create_task(coroutine: Coroutine[Any, Any, R]) -> asyncio.Task[R] ``` Replacement for asyncio.create\_task that will ensure that tasks aren't garbage collected before they complete. Allows for "fire and forget" behavior in which tasks can be created and the application can move on. Tasks can also be awaited normally. See [https://docs.python.org/3/library/asyncio-task.html#asyncio.create\_task](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) for details (and essentially this implementation) ### `run_coro_as_sync` ```python theme={null} run_coro_as_sync(coroutine: Coroutine[Any, Any, R]) -> Optional[R] ``` Runs a coroutine from a synchronous context, as if it were a synchronous function.
The coroutine is scheduled to run in the "run sync" event loop, which is running in its own thread and is started the first time it is needed. This allows us to share objects like async httpx clients among all coroutines running in the loop. If run\_sync is called from within the run\_sync loop, it will run the coroutine in a new thread, because otherwise a deadlock would occur. Note that this behavior should not appear anywhere in the Prefect codebase or in user code. **Args:** * `coroutine`: The coroutine to be run as a synchronous function. * `force_new_thread`: If True, the coroutine will always be run in a new thread. Defaults to False. * `wait_for_result`: If True, the function will wait for the coroutine to complete and return the result. If False, the function will submit the coroutine to the "run sync" event loop and return immediately, where it will eventually be run. Defaults to True. **Returns:** * The result of the coroutine if wait\_for\_result is True, otherwise None. ### `run_sync_in_worker_thread` ```python theme={null} run_sync_in_worker_thread(__fn: Callable[P, R], *args: P.args, **kwargs: P.kwargs) -> R ``` Runs a sync function in a new worker thread so that the main thread's event loop is not blocked. Unlike the anyio function, this defaults to a cancellable thread and does not allow passing arguments to the anyio function so users can pass kwargs to their function. Note that cancellation of threads will not result in interrupted computation, the thread may continue running — the outcome will just be ignored. 
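The dedicated "run sync" loop described above can be sketched with the standard library alone. This is a simplified illustration of the pattern, not Prefect's implementation (which additionally handles re-entrancy, shared clients, and cancellation):

```python
import asyncio
import threading

# A single background thread owns the event loop; synchronous callers
# submit coroutines to it and block until the result is ready.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

def run_coro_from_sync(coro):
    # Schedule the coroutine on the shared loop thread and wait for it.
    return asyncio.run_coroutine_threadsafe(coro, loop).result()

async def fetch_value() -> int:
    await asyncio.sleep(0)
    return 42

print(run_coro_from_sync(fetch_value()))  # 42
```

Because one loop is shared across all submitted coroutines, objects bound to the loop (such as async HTTP clients) can be reused between calls, which is the motivation stated above.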
### `call_with_mark` ```python theme={null} call_with_mark(call: Callable[..., R]) -> R ``` ### `run_async_from_worker_thread` ```python theme={null} run_async_from_worker_thread(__fn: Callable[P, Awaitable[R]], *args: P.args, **kwargs: P.kwargs) -> R ``` Runs an async function in the main thread's event loop, blocking the worker thread until completion ### `run_async_in_new_loop` ```python theme={null} run_async_in_new_loop(__fn: Callable[P, Awaitable[R]], *args: P.args, **kwargs: P.kwargs) -> R ``` ### `mark_as_worker_thread` ```python theme={null} mark_as_worker_thread() -> None ``` ### `in_async_worker_thread` ```python theme={null} in_async_worker_thread() -> bool ``` ### `in_async_main_thread` ```python theme={null} in_async_main_thread() -> bool ``` ### `sync_compatible` ```python theme={null} sync_compatible(async_fn: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Union[R, Coroutine[Any, Any, R]]] ``` Converts an async function into a dual async and sync function. When the returned function is called, we will attempt to determine the best way to enter the async function. * If in a thread with a running event loop, we will return the coroutine for the caller to await. This is normal async behavior. * If in a blocking worker thread with access to an event loop in another thread, we will submit the async method to the event loop. * If we cannot find an event loop, we will create a new one and run the async method then tear down the loop. Note: Type checkers will infer functions decorated with `@sync_compatible` are synchronous. If you want to use the decorated function in an async context, you will need to ignore the types and "cast" the return type to a coroutine. 
For example: ```python theme={null} result: Coroutine = sync_compatible(my_async_function)(arg1, arg2) # type: ignore ``` ### `asyncnullcontext` ```python theme={null} asyncnullcontext(value: Optional[R] = None, *args: Any, **kwargs: Any) -> AsyncGenerator[Any, Optional[R]] ``` ### `sync` ```python theme={null} sync(__async_fn: Callable[P, Awaitable[T]], *args: P.args, **kwargs: P.kwargs) -> T ``` Call an async function from a synchronous context. Block until completion. If in an asynchronous context, we will run the code in a separate loop instead of failing, but a warning will be displayed since this is not recommended. ### `add_event_loop_shutdown_callback` ```python theme={null} add_event_loop_shutdown_callback(coroutine_fn: Callable[[], Awaitable[Any]]) -> None ``` Registers a shutdown callback for the current event loop. The callable must be a coroutine function. It will be awaited when the current event loop is shutting down. Requires use of `asyncio.run()`, which waits for async generator shutdown by default, or an explicit call to `loop.shutdown_asyncgens()`. If the application is entered with `loop.run_until_complete()` and the user calls `loop.close()` without the generator shutdown call, this will not trigger callbacks. asyncio does not provide *any* other way to clean up a resource when the event loop is about to close. ### `create_gather_task_group` ```python theme={null} create_gather_task_group() -> GatherTaskGroup ``` Create a new task group that gathers results ### `gather` ```python theme={null} gather(*calls: Callable[[], Coroutine[Any, Any, T]]) -> list[T] ``` Run calls concurrently and gather their results. Unlike `asyncio.gather` this expects to receive *callables* not *coroutines*. This matches `anyio` semantics. ## Classes ### `GatherIncomplete` Used to indicate retrieving gather results before completion ### `GatherTaskGroup` A task group that gathers results. AnyIO does not include `gather` support.
This class extends the `TaskGroup` interface to allow simple gathering. See [https://github.com/agronholm/anyio/issues/100](https://github.com/agronholm/anyio/issues/100) This class should be instantiated with `create_gather_task_group`. **Methods:** #### `get_result` ```python theme={null} get_result(self, key: UUID) -> Any ``` #### `start` ```python theme={null} start(self, func: object, *args: object) -> NoReturn ``` Since `start` returns the result of `task_status.started()` but here we must return the key instead, we just won't support this method for now. #### `start_soon` ```python theme={null} start_soon(self, func: Callable[[Unpack[PosArgsT]], Awaitable[Any]], *args: Unpack[PosArgsT]) -> UUID ``` ### `LazySemaphore` # callables Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-callables # `prefect.utilities.callables` Utilities for working with Python callables. ## Functions ### `get_call_parameters` ```python theme={null} get_call_parameters(fn: Callable[..., Any], call_args: tuple[Any, ...], call_kwargs: dict[str, Any], apply_defaults: bool = True) -> dict[str, Any] ``` Bind a call to a function to get parameter/value mapping. Default values on the signature will be included if not overridden. If the function has a `__prefect_self__` attribute, it will be included as the first parameter. This attribute is set when Prefect decorates a bound method, so this approach allows Prefect to work with bound methods in a way that is consistent with how Python handles them (i.e. users don't have to pass the instance argument to the method) while still making the implicit self argument visible to all of Prefect's parameter machinery (such as cache key functions). Raises a ParameterBindError if the arguments/kwargs are not valid for the function ### `get_parameter_defaults` ```python theme={null} get_parameter_defaults(fn: Callable[..., Any]) -> dict[str, Any] ``` Get default parameter values for a callable. 
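The binding that `get_call_parameters` performs can be approximated with `inspect.Signature.bind`. A minimal sketch under stated assumptions (the `bind_call_parameters` name is hypothetical; the real function additionally handles Prefect-decorated bound methods and raises `ParameterBindError` on invalid arguments):

```python
import inspect
from typing import Any, Callable

def bind_call_parameters(
    fn: Callable[..., Any],
    call_args: tuple[Any, ...],
    call_kwargs: dict[str, Any],
    apply_defaults: bool = True,
) -> dict[str, Any]:
    # Bind the call against the function's signature; invalid args raise
    # TypeError here (Prefect wraps this in ParameterBindError).
    bound = inspect.signature(fn).bind(*call_args, **call_kwargs)
    if apply_defaults:
        # Fill in any parameters not overridden by the caller.
        bound.apply_defaults()
    return dict(bound.arguments)

def greet(name: str, punctuation: str = "!") -> str:
    return f"Hello, {name}{punctuation}"

print(bind_call_parameters(greet, ("world",), {}))
# {'name': 'world', 'punctuation': '!'}
```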
### `explode_variadic_parameter` ```python theme={null} explode_variadic_parameter(fn: Callable[..., Any], parameters: dict[str, Any]) -> dict[str, Any] ``` Given a parameter dictionary, move any parameters stored in a variadic keyword argument parameter (i.e. \*\*kwargs) into the top level. Example:

```python theme={null}
def foo(a, b, **kwargs):
    pass

parameters = {"a": 1, "b": 2, "kwargs": {"c": 3, "d": 4}}
explode_variadic_parameter(foo, parameters)
# {"a": 1, "b": 2, "c": 3, "d": 4}
```

### `collapse_variadic_parameters` ```python theme={null} collapse_variadic_parameters(fn: Callable[..., Any], parameters: dict[str, Any]) -> dict[str, Any] ``` Given a parameter dictionary, move any parameters not present in the function's signature into the variadic keyword argument. Example:

```python theme={null}
def foo(a, b, **kwargs):
    pass

parameters = {"a": 1, "b": 2, "c": 3, "d": 4}
collapse_variadic_parameters(foo, parameters)
# {"a": 1, "b": 2, "kwargs": {"c": 3, "d": 4}}
```

### `parameters_to_args_kwargs` ```python theme={null} parameters_to_args_kwargs(fn: Callable[..., Any], parameters: dict[str, Any]) -> tuple[tuple[Any, ...], dict[str, Any]] ``` Convert a `parameters` dictionary to positional and keyword arguments. The function *must* have an identical signature to the original function or this will return an empty tuple and dict. ### `call_with_parameters` ```python theme={null} call_with_parameters(fn: Callable[..., R], parameters: dict[str, Any]) -> R ``` Call a function with parameters extracted with `get_call_parameters`. The function *must* have an identical signature to the original function or this will fail.
If you need to call a function with a different signature, extract the args/kwargs using `parameters_to_args_kwargs` directly. ### `cloudpickle_wrapped_call` ```python theme={null} cloudpickle_wrapped_call(__fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Callable[[], bytes] ``` Serializes a function call using cloudpickle then returns a callable which will execute that call and return a cloudpickle serialized return value. This is particularly useful for sending calls to libraries that only use the Python built-in pickler (e.g. `anyio.to_process` and `multiprocessing`) but may require a wider range of pickling support. ### `parameter_docstrings` ```python theme={null} parameter_docstrings(docstring: Optional[str]) -> dict[str, str] ``` Given a docstring in Google docstring format, parse the parameter section and return a dictionary that maps parameter names to docstrings. **Args:** * `docstring`: The function's docstring. **Returns:** * Mapping from parameter names to docstrings. ### `process_v1_params` ```python theme={null} process_v1_params(param: inspect.Parameter) -> tuple[str, Any, Any] ``` ### `create_v1_schema` ```python theme={null} create_v1_schema(name_: str, model_cfg: type[Any], model_fields: Optional[dict[str, Any]] = None) -> dict[str, Any] ``` ### `parameter_schema` ```python theme={null} parameter_schema(fn: Callable[..., Any]) -> ParameterSchema ``` Given a function, generates an OpenAPI-compatible description of the function's arguments, including: * name * typing information * whether it is required * a default value * additional constraints (like possible enum values) **Args:** * `fn`: The function whose arguments will be serialized **Returns:** * the argument schema ### `parameter_schema_from_entrypoint` ```python theme={null} parameter_schema_from_entrypoint(entrypoint: str) -> ParameterSchema ``` Generate a parameter schema from an entrypoint string.
Will load the source code of the function and extract the signature and docstring to generate the schema. Useful for generating a schema for a function when instantiating the function may not be possible due to missing imports or other issues. **Args:** * `entrypoint`: A string representing the entrypoint to a function. The string should be in the format of `module.path.to.function\:do_stuff`. **Returns:** * The parameter schema for the function. ### `generate_parameter_schema` ```python theme={null} generate_parameter_schema(signature: inspect.Signature, docstrings: dict[str, str]) -> ParameterSchema ``` Generate a parameter schema from a function signature and docstrings. To get a signature from a function, use `inspect.signature(fn)` or `_generate_signature_from_source(source_code, func_name)`. **Args:** * `signature`: The function signature. * `docstrings`: A dictionary mapping parameter names to docstrings. **Returns:** * The parameter schema. ### `raise_for_reserved_arguments` ```python theme={null} raise_for_reserved_arguments(fn: Callable[..., Any], reserved_arguments: Iterable[str]) -> None ``` Raise a ReservedArgumentError if `fn` has any parameters that conflict with the names contained in `reserved_arguments`. ### `expand_mapping_parameters` ```python theme={null} expand_mapping_parameters(func: Callable[..., Any], parameters: dict[str, Any]) -> list[dict[str, Any]] ``` Generates a list of call parameters to be used for individual calls in a mapping operation. **Args:** * `func`: The function to be called * `parameters`: A dictionary of parameters with iterables to be mapped over **Returns:** * A list of dictionaries to be used as parameters for each call in the mapping operation ## Classes ### `ParameterSchema` Simple data model corresponding to an OpenAPI `Schema`. 
**Methods:** #### `model_dump_for_openapi` ```python theme={null} model_dump_for_openapi(self) -> dict[str, Any] ``` # collections Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-collections # `prefect.utilities.collections` Utilities for extensions of and operations on Python collections. ## Functions ### `dict_to_flatdict` ```python theme={null} dict_to_flatdict(dct: NestedDict[KT, VT]) -> dict[tuple[KT, ...], VT] ``` Converts a (nested) dictionary to a flattened representation. Each key of the flat dict will be a CompoundKey tuple containing the "chain of keys" for the corresponding value. **Args:** * `dct`: The dictionary to flatten **Returns:** * A flattened dict of the same type as dct ### `flatdict_to_dict` ```python theme={null} flatdict_to_dict(dct: dict[tuple[KT, ...], VT]) -> NestedDict[KT, VT] ``` Converts a flattened dictionary back to a nested dictionary. **Args:** * `dct`: The dictionary to be nested. Each key should be a tuple of keys as generated by `dict_to_flatdict` **Returns:** * A nested dict of the same type as `dct` ### `isiterable` ```python theme={null} isiterable(obj: Any) -> bool ``` Return a boolean indicating if an object is iterable.
Excludes types that are iterable but typically used as singletons: * str * bytes * IO objects ### `ensure_iterable` ```python theme={null} ensure_iterable(obj: Union[T, Iterable[T]]) -> Collection[T] ``` ### `listrepr` ```python theme={null} listrepr(objs: Iterable[Any], sep: str = ' ') -> str ``` ### `extract_instances` ```python theme={null} extract_instances(objects: Iterable[Any], types: Union[type[T], tuple[type[T], ...]] = object) -> Union[list[T], dict[type[T], list[T]]] ``` Extract instances of one or more types from an iterable of objects. **Args:** * `objects`: An iterable of objects * `types`: A type or tuple of types to extract, defaults to all objects **Returns:** * If a single type is given: a list of instances of that type * If a tuple of types is given: a mapping of type to a list of instances ### `batched_iterable` ```python theme={null} batched_iterable(iterable: Iterable[T], size: int) -> Generator[tuple[T, ...], None, None] ``` Yield batches of a certain size from an iterable **Args:** * `iterable`: An iterable * `size`: The batch size to return ### `visit_collection` ```python theme={null} visit_collection(expr: Any, visit_fn: Union[Callable[[Any, dict[str, VT]], Any], Callable[[Any], Any]]) -> Optional[Any] ``` Visits and potentially transforms every element of an arbitrary Python collection. If an element is a Python collection, it will be visited recursively. If an element is not a collection, `visit_fn` will be called with the element. The return value of `visit_fn` can be used to alter the element if `return_data` is set to `True`. Note: * When `return_data` is `True`, a copy of each collection is created only if `visit_fn` modifies an element within that collection. This approach minimizes performance penalties by avoiding unnecessary copying. * When `return_data` is `False`, no copies are created, and only side effects from `visit_fn` are applied.
This mode is faster and should be used when no transformation of the collection is required, because it never has to copy any data. Supported types: * List (including iterators) * Tuple * Set * Dict (note: keys are also visited recursively) * Dataclass * Pydantic model * Prefect annotations Note that visit\_collection will not consume generators or async generators, as it would prevent the caller from iterating over them. **Args:** * `expr`: A Python object or expression. * `visit_fn`: A function that will be applied to every non-collection element of `expr`. The function can accept one or two arguments. If two arguments are accepted, the second argument will be the context dictionary. * `return_data`: If `True`, a copy of `expr` containing data modified by `visit_fn` will be returned. This is slower than `return_data=False` (the default). * `max_depth`: Controls the depth of recursive visitation. If set to zero, no recursion will occur. If set to a positive integer `N`, visitation will only descend to `N` layers deep. If set to any negative integer, no limit will be enforced and recursion will continue until terminal items are reached. By default, recursion is unlimited. * `context`: An optional dictionary. If passed, the context will be sent to each call to the `visit_fn`. The context can be mutated by each visitor and will be available for later visits to expressions at the given depth. Values will not be available "up" a level from a given expression. The context will be automatically populated with an 'annotation' key when visiting collections within a `BaseAnnotation` type. This requires the caller to pass `context={}` and will not be activated by default. * `remove_annotations`: If set, annotations will be replaced by their contents. By default, annotations are preserved but their contents are visited. * `_seen`: A set of object ids that have already been visited. This prevents infinite recursion when visiting recursive data structures. 
**Returns:** * The modified collection if `return_data` is `True`, otherwise `None`. ### `remove_nested_keys` ```python theme={null} remove_nested_keys(keys_to_remove: list[HashableT], obj: Union[NestedDict[HashableT, VT], Any]) -> Union[NestedDict[HashableT, VT], Any] ``` Recurses a dictionary and returns a copy without any keys that match an entry in `keys_to_remove`. Returns `obj` unchanged if it is not a dictionary. **Args:** * `keys_to_remove`: A list of keys to remove from `obj`. * `obj`: The object to remove keys from. **Returns:** * `obj` without keys matching an entry in `keys_to_remove` if `obj` is a dictionary. `obj` if `obj` is not a dictionary. ### `distinct` ```python theme={null} distinct(iterable: Iterable[Union[T, HashableT]], key: Optional[Callable[[T], Hashable]] = None) -> Iterator[Union[T, HashableT]] ``` ### `get_from_dict` ```python theme={null} get_from_dict(dct: NestedDict[str, VT], keys: Union[str, list[str]], default: Optional[R] = None) -> Union[VT, R, None] ``` Fetch a value from a nested dictionary or list using a sequence of keys. This function allows you to fetch a value from a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function returns the provided default value. **Args:** * `dct`: The nested dictionary or list from which to fetch the value. * `keys`: The sequence of keys to use for access. Can be a dot-separated string or a list of keys. List indices can be included in the sequence as either integer keys or as string indices in square brackets. * `default`: The default value to return if the requested key path does not exist. Defaults to None. **Returns:** * The fetched value if the key exists, or the default value if it does not.
Examples: ```python theme={null} get_from_dict({'a': {'b': {'c': [1, 2, 3, 4]}}}, 'a.b.c[1]') # 2 get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, ['a', 'b', 1, 'c', 1]) # 2 get_from_dict({'a': {'b': [0, {'c': [1, 2]}]}}, 'a.b.1.c.2', 'default') # 'default' ``` ### `set_in_dict` ```python theme={null} set_in_dict(dct: NestedDict[str, VT], keys: Union[str, list[str]], value: VT) -> None ``` Sets a value in a nested dictionary using a sequence of keys. This function allows you to set a value in a deeply nested structure of dictionaries and lists using either a dot-separated string or a list of keys. If a requested key does not exist, the function will create it as a new dictionary. **Args:** * `dct`: The dictionary to set the value in. * `keys`: The sequence of keys to use for access. Can be a dot-separated string or a list of keys. * `value`: The value to set in the dictionary. **Returns:** * None; `dct` is modified in place with the value set at the specified key path. **Raises:** * `KeyError`: If the key path exists and is not a dictionary. ### `deep_merge` ```python theme={null} deep_merge(dct: NestedDict[str, VT1], merge: NestedDict[str, VT2]) -> NestedDict[str, Union[VT1, VT2]] ``` Recursively merges `merge` into `dct`. **Args:** * `dct`: The dictionary to merge into. * `merge`: The dictionary to merge from. **Returns:** * A new dictionary with the merged contents. ### `deep_merge_dicts` ```python theme={null} deep_merge_dicts(*dicts: NestedDict[str, Any]) -> NestedDict[str, Any] ``` Recursively merges multiple dictionaries. **Args:** * `dicts`: The dictionaries to merge. **Returns:** * A new dictionary with the merged contents. ## Classes ### `AutoEnum` An enum class that automatically generates values from variable names. This guards against common errors where variable names are updated but values are not. In addition, because AutoEnums inherit from `str`, they are automatically JSON-serializable.
See [https://docs.python.org/3/library/enum.html#using-automatic-values](https://docs.python.org/3/library/enum.html#using-automatic-values) **Methods:** #### `auto` ```python theme={null} auto() -> str ``` Exposes `enum.auto()` to avoid requiring a second import to use `AutoEnum` ### `StopVisiting` A special exception used to stop recursive visits in `visit_collection`. When raised, the expression is returned without modification and recursive visits in that path will end. # compat Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-compat # `prefect.utilities.compat` Utilities for Python version compatibility # context Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-context # `prefect.utilities.context` ## Functions ### `temporary_context` ```python theme={null} temporary_context(context: Context) -> Generator[None, Any, None] ``` ### `get_task_run_id` ```python theme={null} get_task_run_id() -> Optional[UUID] ``` ### `get_flow_run_id` ```python theme={null} get_flow_run_id() -> Optional[UUID] ``` ### `get_task_and_flow_run_ids` ```python theme={null} get_task_and_flow_run_ids() -> tuple[Optional[UUID], Optional[UUID]] ``` Get the task run and flow run ids from the context, if available. **Returns:** * tuple\[Optional\[UUID], Optional\[UUID]]: a tuple of the task run id and flow run id # dispatch Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-dispatch # `prefect.utilities.dispatch` Provides methods for performing dynamic dispatch for actions on base type to one of its subtypes. Example: ```python theme={null} @register_base_type class Base: @classmethod def __dispatch_key__(cls): return cls.__name__.lower() class Foo(Base): ... key = get_dispatch_key(Foo) # 'foo' lookup_type(Base, key) # Foo ``` ## Functions ### `get_registry_for_type` ```python theme={null} get_registry_for_type(cls: T) -> Optional[dict[str, T]] ``` Get the first matching registry for a class or any of its base classes. 
If not found, `None` is returned. ### `get_dispatch_key` ```python theme={null} get_dispatch_key(cls_or_instance: Any, allow_missing: bool = False) -> Optional[str] ``` Retrieve the unique dispatch key for a class type or instance. This key is defined at the `__dispatch_key__` attribute. If it is a callable, it will be resolved. If `allow_missing` is `False`, an exception will be raised if the attribute is not defined or the key is null. If `True`, `None` will be returned in these cases. ### `register_base_type` ```python theme={null} register_base_type(cls: T) -> T ``` Register a base type allowing child types to be registered for dispatch with `register_type`. The base class may or may not define a `__dispatch_key__` to allow lookups of the base type. ### `register_type` ```python theme={null} register_type(cls: T) -> T ``` Register a type for lookup with dispatch. The type or one of its parents must define a unique `__dispatch_key__`. One of the class's base types must be registered using `register_base_type`. ### `lookup_type` ```python theme={null} lookup_type(cls: T, dispatch_key: str) -> T ``` Look up a dispatch key in the type registry for the given class. # dockerutils Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-dockerutils # `prefect.utilities.dockerutils` ## Functions ### `python_version_minor` ```python theme={null} python_version_minor() -> str ``` ### `python_version_micro` ```python theme={null} python_version_micro() -> str ``` ### `get_prefect_image_name` ```python theme={null} get_prefect_image_name(prefect_version: Optional[str] = None, python_version: Optional[str] = None, flavor: Optional[str] = None) -> str ``` Get the Prefect image name matching the current Prefect and Python versions. **Args:** * `prefect_version`: An optional override for the Prefect version. * `python_version`: An optional override for the Python version; must be at the minor level e.g. '3.9'.
* `flavor`: An optional alternative image flavor to build, like 'conda' ### `silence_docker_warnings` ```python theme={null} silence_docker_warnings() -> Generator[None, None, None] ``` ### `docker_client` ```python theme={null} docker_client() -> Generator['DockerClient', None, None] ``` Get the environmentally-configured Docker client ### `build_image` ```python theme={null} build_image(context: Path, dockerfile: str = 'Dockerfile', tag: Optional[str] = None, pull: bool = False, platform: Optional[str] = None, stream_progress_to: Optional[TextIO] = None, **kwargs: Any) -> str ``` Builds a Docker image, returning the image ID **Args:** * `context`: the root directory for the Docker build context * `dockerfile`: the path to the Dockerfile, relative to the context * `tag`: the tag to give this image * `pull`: True to pull the base image during the build * `stream_progress_to`: an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker **Returns:** * The image ID ### `push_image` ```python theme={null} push_image(image_id: str, registry_url: str, name: str, tag: Optional[str] = None, stream_progress_to: Optional[TextIO] = None) -> str ``` Pushes a local image to a Docker registry, returning the registry-qualified tag for that image This assumes that the environment's Docker daemon is already authenticated to the given registry, and currently makes no attempt to authenticate. 
**Args:** * `image_id`: a Docker image ID * `registry_url`: the URL of a Docker registry * `name`: the name of this image * `tag`: the tag to give this image (defaults to a short representation of the image's ID) * `stream_progress_to`: an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker **Returns:** * A registry-qualified tag, like my-registry.example.com/my-image:abcdefg ### `to_run_command` ```python theme={null} to_run_command(command: list[str]) -> str ``` Convert a process-style list of command arguments to a single Dockerfile RUN instruction. ### `parse_image_tag` ```python theme={null} parse_image_tag(name: str) -> tuple[str, Optional[str]] ``` Parse Docker Image String * If a tag or digest exists, this function parses and returns the image registry and tag/digest, separately as a tuple. * Example 1: 'prefecthq/prefect:latest' -> ('prefecthq/prefect', 'latest') * Example 2: 'hostname.io:5050/folder/subfolder:latest' -> ('hostname.io:5050/folder/subfolder', 'latest') * Example 3: 'prefecthq/prefect\@sha256:abc123' -> ('prefecthq/prefect', 'sha256:abc123') * Supports parsing Docker Image strings that follow Docker Image Specification v1.1.0 * Image building tools typically enforce this standard **Args:** * `name`: Name of Docker Image ### `split_repository_path` ```python theme={null} split_repository_path(repository_path: str) -> tuple[Optional[str], str] ``` Splits a Docker repository path into its namespace and repository components. **Args:** * `repository_path`: The Docker repository path to split. **Returns:** * Tuple\[Optional\[str], str]: A tuple containing the namespace and repository components. * namespace (Optional\[str]): The Docker namespace, combining the registry and organization. None if not present. * repository (str): The repository name.
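For illustration, the tag/digest parsing rules above can be sketched in a few lines. This is a simplified re-implementation, not the library's actual code; it reproduces the three documented examples but skips full validation against the Docker Image Specification.

```python
from typing import Optional

def parse_image_tag(name: str) -> tuple[str, Optional[str]]:
    """Split a Docker image reference into (image, tag-or-digest)."""
    # Digest form: everything after "@" is the digest.
    if "@" in name:
        image, _, digest = name.partition("@")
        return image, digest
    # A ":" only introduces a tag when it appears after the last "/",
    # so registry ports like "hostname.io:5050/..." are left intact.
    slash, colon = name.rfind("/"), name.rfind(":")
    if colon > slash:
        return name[:colon], name[colon + 1:]
    return name, None

print(parse_image_tag("prefecthq/prefect:latest"))
# ('prefecthq/prefect', 'latest')
```

The port-vs-tag ambiguity is why the colon is compared against the last slash: in `hostname.io:5050/folder/subfolder`, the colon belongs to the registry host, not a tag.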
### `format_outlier_version_name` ```python theme={null} format_outlier_version_name(version: str) -> str ``` Formats outlier Docker version names to pass `packaging.version.parse` validation * Current cases are simple, but this creates a stub for more complicated formatting if needed later. * Example outlier versions that throw a parsing exception: * "20.10.0-ce" (variant of community edition label) * "20.10.0-ee" (variant of enterprise edition label) **Args:** * `version`: raw Docker version value **Returns:** * value that can pass `packaging.version.parse` validation ### `generate_default_dockerfile` ```python theme={null} generate_default_dockerfile(context: Optional[Path] = None) ``` Generates a default Dockerfile used for deploying flows. The Dockerfile is written to a temporary file and yielded. The temporary file is removed after the context manager exits. **Args:** * `context`: The context to use for the Dockerfile. Defaults to the current working directory. ## Classes ### `BuildError` Raised when a Docker build fails ### `ImageBuilder` An interface for preparing Docker build contexts and building images **Methods:** #### `add_line` ```python theme={null} add_line(self, line: str) -> None ``` Add a line to this image's Dockerfile #### `add_lines` ```python theme={null} add_lines(self, lines: Iterable[str]) -> None ``` Add lines to this image's Dockerfile #### `assert_has_file` ```python theme={null} assert_has_file(self, source: Path, container_path: PurePosixPath) -> None ``` Asserts that the given file or directory will be copied into the container at the given path #### `assert_has_line` ```python theme={null} assert_has_line(self, line: str) -> None ``` Asserts that the given line is in the Dockerfile #### `assert_line_absent` ```python theme={null} assert_line_absent(self, line: str) -> None ``` Asserts that the given line is absent from the Dockerfile #### `assert_line_after` ```python theme={null} assert_line_after(self, second: str, first: str) ->
None ``` Asserts that the second line appears after the first line #### `assert_line_before` ```python theme={null} assert_line_before(self, first: str, second: str) -> None ``` Asserts that the first line appears before the second line #### `build` ```python theme={null} build(self, pull: bool = False, stream_progress_to: Optional[TextIO] = None) -> str ``` Build the Docker image from the current state of the ImageBuilder **Args:** * `pull`: True to pull the base image during the build * `stream_progress_to`: an optional stream (like sys.stdout, or an io.TextIO) that will collect the build output as it is reported by Docker **Returns:** * The image ID #### `copy` ```python theme={null} copy(self, source: Union[str, Path], destination: Union[str, PurePosixPath]) -> None ``` Copy a file to this image #### `write_text` ```python theme={null} write_text(self, text: str, destination: Union[str, PurePosixPath]) -> None ``` ### `PushError` Raised when a Docker image push fails # engine Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-engine # `prefect.utilities.engine` ## Functions ### `collect_task_run_inputs` ```python theme={null} collect_task_run_inputs(expr: Any, max_depth: int = -1) -> set[Union[TaskRunResult, FlowRunResult]] ``` This function recurses through an expression to generate a set of any discernible task run inputs it finds in the data structure. It produces a set of all inputs found. Examples: ```python theme={null} task_inputs = { k: await collect_task_run_inputs(v) for k, v in parameters.items() } ``` ### `collect_task_run_inputs_sync` ```python theme={null} collect_task_run_inputs_sync(expr: Any, future_cls: Any = PrefectFuture, max_depth: int = -1) -> set[Union[TaskRunResult, FlowRunResult]] ``` This function recurses through an expression to generate a set of any discernible task run inputs it finds in the data structure. It produces a set of all inputs found. 
**Examples:** ```python theme={null} task_inputs = { k: collect_task_run_inputs_sync(v) for k, v in parameters.items() } ``` ### `capture_sigterm` ```python theme={null} capture_sigterm() -> Generator[None, Any, None] ``` ### `resolve_inputs` ```python theme={null} resolve_inputs(parameters: dict[str, Any], return_data: bool = True, max_depth: int = -1) -> dict[str, Any] ``` Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into data. **Returns:** * A copy of the parameters with resolved data **Raises:** * `UpstreamTaskError`: If any of the upstream states are not `COMPLETED` ### `propose_state` ```python theme={null} propose_state(client: 'PrefectClient', state: State[Any], flow_run_id: UUID, force: bool = False) -> State[Any] ``` Propose a new state for a flow run, invoking Prefect orchestration logic. If the proposed state is accepted, the provided `state` will be augmented with details and returned. If the proposed state is rejected, a new state returned by the Prefect API will be returned. If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again. If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised. **Args:** * `state`: a new state for a flow run * `flow_run_id`: an optional flow run id, used when proposing flow run states **Returns:** * a State model representation of the flow run state **Raises:** * `prefect.exceptions.Abort`: if an ABORT instruction is received from the Prefect API ### `propose_state_sync` ```python theme={null} propose_state_sync(client: 'SyncPrefectClient', state: State[Any], flow_run_id: UUID, force: bool = False) -> State[Any] ``` Propose a new state for a flow run, invoking Prefect orchestration logic. If the proposed state is accepted, the provided `state` will be augmented with details and returned. 
If the proposed state is rejected, a new state returned by the Prefect API will be returned. If the proposed state results in a WAIT instruction from the Prefect API, the function will sleep and attempt to propose the state again. If the proposed state results in an ABORT instruction from the Prefect API, an error will be raised. **Args:** * `state`: a new state for the flow run * `flow_run_id`: an optional flow run id, used when proposing flow run states **Returns:** * a State model representation of the flow run state **Raises:** * `ValueError`: if flow\_run\_id is not provided * `prefect.exceptions.Abort`: if an ABORT instruction is received from the Prefect API ### `get_state_for_result` ```python theme={null} get_state_for_result(obj: Any) -> Optional[tuple[State, RunType]] ``` Get the state related to a result object. `link_state_to_result` must have been called first. For objects that support `__weakref__`, the entry stored by `link_state_to_result` carries a weak reference back to the original object. We verify here that the entry's weak reference still points to the *same* object that registered the entry — not just to *some* object that happens to share its `id()`. This prevents stale hits caused by CPython recycling a freed memory address. Stale entries are evicted on detection. For objects that do not support `__weakref__` (plain `dict`, `list`, `set`, `str`, `int`, `tuple`, ...), the entry has no weak reference and we fall back to the legacy `id()`-only lookup. This preserves today's behavior for those types — including the latent stale-id bug — and isolates the limitation to a single named code path. 
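The id-plus-weak-reference pattern described above can be illustrated with a small standalone sketch. The `link`/`lookup` names and cache layout here are hypothetical, not Prefect's internals: entries are keyed by `id()`, but a weak reference stored alongside each entry lets the lookup confirm that the id still belongs to the object that registered it.

```python
import weakref
from typing import Any, Optional

_cache: dict[int, tuple[str, Optional[weakref.ref]]] = {}

def link(obj: Any, state: str) -> None:
    try:
        ref = weakref.ref(obj)  # verifiable entry
    except TypeError:
        ref = None              # dict/list/str/int: id()-only fallback path
    _cache[id(obj)] = (state, ref)

def lookup(obj: Any) -> Optional[str]:
    entry = _cache.get(id(obj))
    if entry is None:
        return None
    state, ref = entry
    if ref is not None and ref() is not obj:
        # The registering object was freed and its address recycled:
        # evict the stale entry instead of returning the wrong state.
        del _cache[id(obj)]
        return None
    return state

class Result:
    pass

r = Result()
link(r, "COMPLETED")
print(lookup(r))  # COMPLETED
```

Objects without `__weakref__` support (the `TypeError` branch) keep the legacy id-only behavior, matching the limitation called out above.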
### `link_state_to_flow_run_result` ```python theme={null} link_state_to_flow_run_result(state: State, result: Any) -> None ``` Creates a link between a state and flow run result ### `link_state_to_task_run_result` ```python theme={null} link_state_to_task_run_result(state: State, result: Any) -> None ``` Creates a link between a state and task run result ### `link_state_to_result` ```python theme={null} link_state_to_result(state: State, result: Any, run_type: RunType) -> None ``` Caches a link between a state and a result and its components using the `id` of the components to map to the state. The cache is persisted to the current flow run context since task relationships are limited to within a flow run. This allows dependency tracking to occur when results are passed around. Note: Because `id` is used, we cannot cache links between singleton objects. We only cache the relationship between components 1-layer deep. Example: Given the result \[1, \["a","b"], ("c",)], the following elements will be mapped to the state: * \[1, \["a","b"], ("c",)] * \["a","b"] * ("c",) Note: the int `1` will not be mapped to the state because it is a singleton. Other Notes: We do not hash the result because: * If changes are made to the object in the flow between task calls, we can still track that they are related. * Hashing can be expensive. * Not all objects are hashable. * Hash-based keying would also conflate equal-but-distinct objects from unrelated tasks. We do not set an attribute, e.g. `__prefect_state__`, on the result because: * Mutating user's objects is dangerous. * Unrelated equality comparisons can break unexpectedly. * The field can be preserved on copy. * We cannot set this attribute on Python built-ins. 
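The singleton caveat above is easy to see on CPython, which interns small integers (and some strings): two independently produced `1`s are the same object with the same `id()`, so id-keyed links would conflate unrelated results. Freshly built containers, by contrast, always get distinct ids. (This is CPython-specific behavior, shown only to motivate the rule.)

```python
# CPython interns small integers: two independently produced 1s
# are the same object and therefore share one id().
a = 1
b = int("1")
print(a is b)  # True on CPython

# Freshly built containers are distinct objects with distinct ids,
# which is what makes id-keyed state links safe for them.
x = ["a", "b"]
y = ["a", "b"]
print(x is y)  # False
```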
### `should_log_prints` ```python theme={null} should_log_prints(flow_or_task: Union['Flow[..., Any]', 'Task[..., Any]']) -> bool ``` ### `check_api_reachable` ```python theme={null} check_api_reachable(client: 'PrefectClient', fail_message: str) -> None ``` ### `emit_task_run_state_change_event` ```python theme={null} emit_task_run_state_change_event(task_run: TaskRun, initial_state: Optional[State[Any]], validated_state: State[Any], follows: Optional[Event] = None) -> Optional[Event] ``` ### `resolve_to_final_result` ```python theme={null} resolve_to_final_result(expr: Any, context: dict[str, Any]) -> Any ``` Resolve any `PrefectFuture` or `State` types nested in parameters into data. Designed to be used with `visit_collection`. ### `resolve_inputs_sync` ```python theme={null} resolve_inputs_sync(parameters: dict[str, Any], return_data: bool = True, max_depth: int = -1) -> dict[str, Any] ``` Resolve any `Quote`, `PrefectFuture`, or `State` types nested in parameters into data. **Returns:** * A copy of the parameters with resolved data **Raises:** * `UpstreamTaskError`: If any of the upstream states are not `COMPLETED` # filesystem Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-filesystem # `prefect.utilities.filesystem` Utilities for working with file systems ## Functions ### `create_default_ignore_file` ```python theme={null} create_default_ignore_file(path: str) -> bool ``` Creates a default ignore file in the provided path if one does not already exist; returns a boolean specifying whether a file was created. ### `filter_files` ```python theme={null} filter_files(root: str = '.', ignore_patterns: Optional[Iterable[AnyStr]] = None, include_dirs: bool = True) -> set[str] ``` This function accepts a root directory path and a list of file patterns to ignore, and returns the set of files that excludes those that should be ignored. The specification matches that of [.gitignore files](https://git-scm.com/docs/gitignore).
### `tmpchdir` ```python theme={null} tmpchdir(path: str) ``` Change the current working directory for the duration of the context, with special handling for UNC paths on Windows. ### `filename` ```python theme={null} filename(path: str) -> str ``` Extract the file name from a path with remote file system support ### `is_local_path` ```python theme={null} is_local_path(path: Union[str, pathlib.Path, Any]) -> bool ``` Check if the given path points to a local or remote file system ### `to_display_path` ```python theme={null} to_display_path(path: Union[pathlib.Path, str], relative_to: Optional[Union[pathlib.Path, str]] = None) -> str ``` Convert a path to a displayable path. The absolute path or relative path to the current (or given) directory will be returned, whichever is shorter. ### `relative_path_to_current_platform` ```python theme={null} relative_path_to_current_platform(path_str: str) -> Path ``` Converts a relative path generated on any platform to a relative path for the current platform. ### `get_open_file_limit` ```python theme={null} get_open_file_limit() -> int ``` Get the maximum number of open files allowed for the current process # generics Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-generics # `prefect.utilities.generics` ## Functions ### `validate_list` ```python theme={null} validate_list(model: type[T], input: Any) -> list[T] ``` # hashing Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-hashing # `prefect.utilities.hashing` ## Functions ### `stable_hash` ```python theme={null} stable_hash(*args: Union[str, bytes]) -> str ``` Given some arguments, produces a stable 64-bit hash of their contents. Supports bytes and strings. Strings will be UTF-8 encoded. **Args:** * `*args`: Items to include in the hash. * `hash_algo`: Hash algorithm from hashlib to use. **Returns:** * A hex hash.
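The contract — UTF-8-encode strings, accept bytes as-is, return a short stable hex digest — can be sketched with `hashlib`. The algorithm here is an assumption chosen only to match the 64-bit output described; Prefect's actual choice of hash may differ.

```python
import hashlib
from typing import Union

def stable_hash(*args: Union[str, bytes]) -> str:
    # One hasher over all arguments, strings encoded to UTF-8 first,
    # yields the same hex digest for the same inputs on every run.
    h = hashlib.blake2b(digest_size=8)  # 8 bytes -> a 64-bit hash
    for a in args:
        h.update(a.encode("utf-8") if isinstance(a, str) else a)
    return h.hexdigest()

print(len(stable_hash("flow", b"run")))  # 16 hex characters
```

Because the arguments are concatenated into one hasher, mixing `str` and `bytes` inputs with the same underlying bytes produces the same digest.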
### `file_hash` ```python theme={null} file_hash(path: str, hash_algo: Callable[..., Any] = _md5) -> str ``` Given a path to a file, produces a stable hash of the file contents. **Args:** * `path`: the path to a file * `hash_algo`: Hash algorithm from hashlib to use. **Returns:** * a hash of the file contents ### `hash_objects` ```python theme={null} hash_objects(*args: Any, **kwargs: Any) -> Optional[str] ``` Attempt to hash objects by dumping to JSON or serializing with cloudpickle. **Args:** * `*args`: Positional arguments to hash * `hash_algo`: Hash algorithm to use * `raise_on_failure`: If True, raise exceptions instead of returning None * `**kwargs`: Keyword arguments to hash **Returns:** * A hash string or None if hashing failed **Raises:** * `HashError`: If objects cannot be hashed and raise\_on\_failure is True # importtools Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-importtools # `prefect.utilities.importtools` ## Functions ### `to_qualified_name` ```python theme={null} to_qualified_name(obj: Any) -> str ``` Given an object, returns its fully-qualified name: a string that represents its Python import path. **Args:** * `obj`: an importable Python object **Returns:** * the qualified name ### `from_qualified_name` ```python theme={null} from_qualified_name(name: str) -> Any ``` Import an object given a fully-qualified name. **Args:** * `name`: The fully-qualified name of the object to import. **Returns:** * the imported object **Examples:** ```python theme={null} obj = from_qualified_name("random.randint") import random obj == random.randint # True ``` ### `load_script_as_module` ```python theme={null} load_script_as_module(path: str) -> ModuleType ``` Execute a script at the given path. Sets the module name to a unique identifier to ensure thread safety. Uses a lock to safely modify sys.path for relative imports. 
If an exception occurs during execution of the script, a `prefect.exceptions.ScriptError` is created to wrap the exception and raised. ### `load_module` ```python theme={null} load_module(module_name: str) -> ModuleType ``` Import a module with support for relative imports within the module. ### `import_object` ```python theme={null} import_object(import_path: str) -> Any ``` Load an object from an import path. Import paths can be formatted as one of: * module.object * module:object * /path/to/script.py:object * module:object.method * /path/to/script.py:object.method This function is not thread safe as it modifies the 'sys' module during execution. ### `lazy_import` ```python theme={null} lazy_import(name: str, error_on_import: bool = False, help_message: Optional[str] = None) -> ModuleType ``` Create a lazily-imported module to use in place of the module of the given name. Use this to retain module-level imports for libraries that we don't want to actually import until they are needed. NOTE: Lazy-loading a subpackage can cause the subpackage to be imported twice if another non-lazy import also imports the subpackage. For example, using both `lazy_import("docker.errors")` and `import docker.errors` in the same codebase will import `docker.errors` twice and can lead to unexpected behavior, e.g. type check failures and import-time side effects running twice. Adapted from the [Python documentation][1] and [lazy\_loader][2] [1]: https://docs.python.org/3/library/importlib.html#implementing-lazy-imports [2]: https://github.com/scientific-python/lazy_loader ### `safe_load_namespace` ```python theme={null} safe_load_namespace(source_code: str, filepath: Optional[str] = None) -> dict[str, Any] ``` Safely load a namespace from source code, optionally handling relative imports. If a `filepath` is provided, `sys.path` is modified to support relative imports. 
Changes to `sys.path` are reverted after completion, but this function is not thread safe and use of it in threaded contexts may result in undesirable behavior. **Args:** * `source_code`: The source code to load * `filepath`: Optional file path of the source code. If provided, enables relative imports. **Returns:** * The namespace loaded from the source code. ## Classes ### `DelayedImportErrorModule` A fake module returned by `lazy_import` when the module cannot be found. When any of the module's attributes are accessed, we will throw a `ModuleNotFoundError`. Adapted from [lazy\_loader][1] [1]: https://github.com/scientific-python/lazy_loader ### `AliasedModuleDefinition` A definition for the `AliasedModuleFinder`. **Args:** * `alias`: The import name to create * `real`: The import name of the module to reference for the alias * `callback`: A function to call when the alias module is loaded ### `AliasedModuleFinder` **Methods:** #### `find_spec` ```python theme={null} find_spec(self, fullname: str, path: Optional[Sequence[str]] = None, target: Optional[ModuleType] = None) -> Optional[ModuleSpec] ``` The fullname is the imported path, e.g. "foo.bar". If there is an alias "phi" for "foo" then on import of "phi.bar" we will find the spec for "foo.bar" and create a new spec for "phi.bar" that points to "foo.bar". ### `AliasedModuleLoader` **Methods:** #### `exec_module` ```python theme={null} exec_module(self, module: ModuleType) -> None ``` # math Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-math # `prefect.utilities.math` ## Functions ### `poisson_interval` ```python theme={null} poisson_interval(average_interval: float, lower: float = 0, upper: float = 1) -> float ``` Generates an "inter-arrival time" for a Poisson process. Draws a random variable from an exponential distribution using the inverse-CDF method. Can optionally be passed a lower and upper bound between (0, 1] to clamp the potential output values. 
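The inverse-CDF method the docstring refers to can be sketched directly (function names here are illustrative, not Prefect's): draw U uniformly from the clamped range and map it through the inverse of the exponential CDF, so bounding U bounds the sampled interval.

```python
import math
import random

def inverse_exponential_cdf(u: float, average_interval: float) -> float:
    # Solving u = 1 - exp(-x / average_interval) for x gives the
    # inverse CDF of an exponential with the requested mean.
    return -average_interval * math.log(1.0 - u)

def poisson_interval(average_interval: float, lower: float = 0.0, upper: float = 1.0) -> float:
    # Clamping the uniform draw to [lower, upper] bounds the output:
    # u = lower gives the shortest interval, u = upper the longest.
    u = min(random.uniform(lower, upper), 1.0 - 1e-12)  # guard log(0)
    return inverse_exponential_cdf(u, average_interval)
```

With `average_interval=10`, a draw of `u = 0.5` lands exactly on the median, `10 * ln(2) ≈ 6.93`.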
### `exponential_cdf` ```python theme={null} exponential_cdf(x: float, average_interval: float) -> float ``` ### `lower_clamp_multiple` ```python theme={null} lower_clamp_multiple(k: float) -> float ``` Computes a lower clamp multiple that can be used to bound a random variate drawn from an exponential distribution. Given an upper clamp multiple `k` (and corresponding upper bound k \* average\_interval), this function computes a lower clamp multiple `c` (corresponding to a lower bound c \* average\_interval) where the probability mass between the lower bound and the median is equal to the probability mass between the median and the upper bound. ### `clamped_poisson_interval` ```python theme={null} clamped_poisson_interval(average_interval: float, clamping_factor: float = 0.3) -> float ``` Bounds Poisson "inter-arrival times" to a range defined by the clamping factor. The upper bound for this random variate is: average\_interval \* (1 + clamping\_factor). A lower bound is picked so that the average interval remains approximately fixed. ### `bounded_poisson_interval` ```python theme={null} bounded_poisson_interval(lower_bound: float, upper_bound: float) -> float ``` Bounds Poisson "inter-arrival times" to a range. Unlike `clamped_poisson_interval` this does not take a target average interval. Instead, the bounds are predetermined and the average is calculated as their midpoint. This allows Poisson intervals to be used in cases where a lower bound must be enforced. # names Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-names # `prefect.utilities.names` ## Functions ### `generate_slug` ```python theme={null} generate_slug(n_words: int) -> str ``` Generates a random slug. **Args:** * `n_words`: the number of words in the slug ### `obfuscate` ```python theme={null} obfuscate(s: Any, show_tail: bool = False) -> str ``` Obfuscates any data type's string representation. See `obfuscate_string`.
### `obfuscate_string` ```python theme={null} obfuscate_string(s: str, show_tail: bool = False) -> str ``` Obfuscates a string by returning a new string of 8 characters. If the input string is longer than 10 characters and show\_tail is True, then up to 4 of its final characters will become final characters of the obfuscated string; all other characters are "\*". "abc" -> "********" "abcdefgh" -> "********" "abcdefghijk" -> "\*\*\*\*\*\*\*k" "abcdefghijklmnopqrs" -> "\*\*\*\*pqrs" # processutils Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-processutils # `prefect.utilities.processutils` ## Functions ### `command_to_string` ```python theme={null} command_to_string(command: list[str]) -> str ``` Serialize a command list to a platform-neutral string. We use POSIX shell quoting so stored commands round-trip across platforms when paired with `command_from_string`. ### `command_from_string` ```python theme={null} command_from_string(command: str) -> list[str] ``` Parse a command string back into argv tokens. Prefect-owned command strings use POSIX shell quoting. Other command strings keep native parsing so existing Windows configuration still works. ### `open_process` ```python theme={null} open_process(command: list[str], **kwargs: Any) -> AsyncGenerator[anyio.abc.Process, Any] ``` Like `anyio.open_process` but with: * Support for Windows command joining * Termination of the process on exception during yield * Forced cleanup of process resources during cancellation ### `run_process` ```python theme={null} run_process(command: list[str], **kwargs: Any) -> anyio.abc.Process ``` Like `anyio.run_process` but with: * Use of our `open_process` utility to ensure resources are cleaned up * Simple `stream_output` support to connect the subprocess to the parent stdout/err * Support for submission with `TaskGroup.start` marking as 'started' after the process has been created. When used, the PID is returned to the task status. 
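The round-trip contract between `command_to_string` and `command_from_string` maps onto the stdlib's POSIX quoting helpers. Whether Prefect uses `shlex` internally is an assumption here; the snippet only demonstrates the quoting behavior the docstrings describe.

```python
import shlex

command = ["python", "-c", "print('hello world')"]

# POSIX quoting preserves arguments containing spaces and quotes...
serialized = shlex.join(command)

# ...so parsing the string recovers the original argv exactly.
assert shlex.split(serialized) == command
print(serialized)
```

This is why POSIX quoting makes stored commands portable: the serialized string is unambiguous about argument boundaries, regardless of the shell that originally produced the argv.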
### `consume_process_output` ```python theme={null} consume_process_output(process: anyio.abc.Process, stdout_sink: Optional[TextSink[str]] = None, stderr_sink: Optional[TextSink[str]] = None) -> None ``` ### `stream_text` ```python theme={null} stream_text(source: TextReceiveStream, *sinks: Optional[TextSink[str]]) -> None ``` ### `forward_signal_handler` ```python theme={null} forward_signal_handler(pid: int, signum: int, *signums: int) -> None ``` Forward subsequent signum events (e.g. interrupts) to respective signums. ### `setup_signal_handlers_server` ```python theme={null} setup_signal_handlers_server(pid: int, process_name: str, print_fn: PrintFn) -> None ``` Handle interrupts of the server gracefully. ### `setup_signal_handlers_agent` ```python theme={null} setup_signal_handlers_agent(pid: int, process_name: str, print_fn: PrintFn) -> None ``` Handle interrupts of the agent gracefully. ### `setup_signal_handlers_worker` ```python theme={null} setup_signal_handlers_worker(pid: int, process_name: str, print_fn: PrintFn) -> None ``` Handle interrupts of workers gracefully. ### `get_sys_executable` ```python theme={null} get_sys_executable() -> str ``` # pydantic Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-pydantic # `prefect.utilities.pydantic` ## Functions ### `add_cloudpickle_reduction` ```python theme={null} add_cloudpickle_reduction(__model_cls: Optional[type[M]] = None, **kwargs: Any) -> Union[type[M], Callable[[type[M]], type[M]]] ``` Adds a `__reducer__` to the given class that ensures it is cloudpickle compatible. Workaround for issues with cloudpickle when using cythonized pydantic which throws exceptions when attempting to pickle the class which has "compiled" validator methods dynamically attached to it. We cannot define this utility in the model class itself because the class is the type that contains unserializable methods. Any model using some features of Pydantic (e.g. 
`Path` validation) with a Cython-compiled Pydantic installation may encounter pickling issues. See related issue at [https://github.com/cloudpipe/cloudpickle/issues/408](https://github.com/cloudpipe/cloudpickle/issues/408) ### `get_class_fields_only` ```python theme={null} get_class_fields_only(model: type[BaseModel]) -> set[str] ``` Gets all the field names defined on the model class but not any parent classes. Any fields that are on the parent but redefined on the subclass are included. ### `add_type_dispatch` ```python theme={null} add_type_dispatch(model_cls: type[M]) -> type[M] ``` Extend a Pydantic model to add a 'type' field that is used as a discriminator field to dynamically determine the subtype when deserializing models. This allows automatic resolution to subtypes of the decorated model. If a type field already exists, it should be a string literal field that has a constant value for each subclass. The default value of this field will be used as the dispatch key. If a type field does not exist, one will be added. In this case, the value of the field will be set to the value of the `__dispatch_key__`. The base class should define a `__dispatch_key__` class method that is used to determine the unique key for each subclass. Alternatively, each subclass can define the `__dispatch_key__` as a string literal. The base class must not define a 'type' field. If it is not desirable to add a field to the model and the dispatch key can be tracked separately, the lower level utilities in `prefect.utilities.dispatch` should be used directly. ### `custom_pydantic_encoder` ```python theme={null} custom_pydantic_encoder(type_encoders: dict[Any, Callable[[type[Any]], Any]], obj: Any) -> Any ``` ### `parse_obj_as` ```python theme={null} parse_obj_as(type_: type[T], data: Any, mode: Literal['python', 'json', 'strings'] = 'python') -> T ``` Parse a given data structure as a Pydantic model via `TypeAdapter`.
Read more about `TypeAdapter` [here](https://docs.pydantic.dev/latest/concepts/type_adapter/). **Args:** * `type_`: The type to parse the data as. * `data`: The data to be parsed. * `mode`: The mode to use for parsing, either `python`, `json`, or `strings`. Defaults to `python`, where `data` should be a Python object (e.g. `dict`). **Returns:** * The parsed `data` as the given `type_`. ### `handle_secret_render` ```python theme={null} handle_secret_render(value: object, context: dict[str, Any]) -> object ``` ## Classes ### `PartialModel` A utility for creating a Pydantic model in several steps. Fields may be set at initialization, via attribute assignment, or at finalization when the concrete model is returned. Pydantic validation does not occur until finalization. Each field can only be set once and a `ValueError` will be raised on assignment if a field already has a value. **Methods:** #### `finalize` ```python theme={null} finalize(self, **kwargs: Any) -> M ``` #### `raise_if_already_set` ```python theme={null} raise_if_already_set(self, name: str) -> None ``` #### `raise_if_not_in_model` ```python theme={null} raise_if_not_in_model(self, name: str) -> None ``` # render_swagger Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-render_swagger # `prefect.utilities.render_swagger` ## Functions ### `swagger_lib` ```python theme={null} swagger_lib(config: MkDocsConfig) -> dict[str, Any] ``` Provides the actual swagger library used ## Classes ### `SwaggerPlugin` **Methods:** #### `on_page_markdown` ```python theme={null} on_page_markdown() -> Optional[str] ``` # __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-schema_tools-__init__ # `prefect.utilities.schema_tools` *This module is empty or contains only private/internal implementations.* # hydration Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-schema_tools-hydration # `prefect.utilities.schema_tools.hydration` ## Functions ### `handler` ```python 
theme={null} handler(kind: PrefectKind) -> Callable[[Handler], Handler] ``` ### `call_handler` ```python theme={null} call_handler(kind: PrefectKind, obj: dict[str, Any], ctx: HydrationContext) -> Any ``` ### `null_handler` ```python theme={null} null_handler(obj: dict[str, Any], ctx: HydrationContext) ``` ### `json_handler` ```python theme={null} json_handler(obj: dict[str, Any], ctx: HydrationContext) ``` ### `jinja_handler` ```python theme={null} jinja_handler(obj: dict[str, Any], ctx: HydrationContext) -> Any ``` ### `workspace_variable_handler` ```python theme={null} workspace_variable_handler(obj: dict[str, Any], ctx: HydrationContext) -> Any ``` ### `hydrate` ```python theme={null} hydrate(obj: dict[str, Any], ctx: Optional[HydrationContext] = None) -> dict[str, Any] ``` ## Classes ### `HydrationContext` **Methods:** #### `build` ```python theme={null} build(cls, session: AsyncSession, raise_on_error: bool = False, render_jinja: bool = False, render_workspace_variables: bool = False) -> Self ``` ### `Placeholder` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` ### `RemoveValue` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` ### `HydrationError` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` #### `message` ```python theme={null} message(self) -> str ``` ### `KeyNotFound` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` #### `key` ```python theme={null} key(self) -> str ``` #### `message` ```python theme={null} message(self) -> str ``` ### `ValueNotFound` **Methods:** #### `key` ```python theme={null} key(self) -> str ``` #### `message` ```python theme={null} message(self) -> str ``` ### `TemplateNotFound` **Methods:** #### `key` ```python theme={null} key(self) -> str ``` #### `message` ```python theme={null} message(self) -> str ``` ### `VariableNameNotFound` **Methods:** #### `key` ```python theme={null} key(self) -> str ``` #### `message` ```python theme={null} message(self) -> str ``` ### `InvalidJSON` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` #### `message` ```python theme={null} message(self) -> str ``` ### `InvalidJinja` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` #### `message` ```python theme={null} message(self) -> str ``` ### `WorkspaceVariableNotFound` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` #### `message` ```python theme={null} message(self) -> str ``` #### `variable_name` ```python theme={null} variable_name(self) -> str ``` ### `WorkspaceVariable` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` ### `ValidJinja` **Methods:** #### `is_error` ```python theme={null} is_error(self) -> bool ``` # validation Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-schema_tools-validation # `prefect.utilities.schema_tools.validation` ## Functions ### `is_valid_schema` ```python theme={null} is_valid_schema(schema: ObjectSchema, preprocess: bool = True) -> None ``` ### `validate` ```python theme={null} validate(obj: dict[str, Any], schema: ObjectSchema, raise_on_error: bool = False, preprocess: bool = True, ignore_required: bool = False, allow_none_with_default: bool = False) -> list[JSONSchemaValidationError] ``` ### `is_valid` ```python theme={null} is_valid(obj: dict[str, Any], schema: ObjectSchema) -> bool ``` ### `prioritize_placeholder_errors` ```python
theme={null} prioritize_placeholder_errors(errors: list[JSONSchemaValidationError]) -> list[JSONSchemaValidationError] ``` ### `build_error_obj` ```python theme={null} build_error_obj(errors: list[JSONSchemaValidationError]) -> dict[str, Any] ``` ### `process_properties` ```python theme={null} process_properties(properties: dict[str, dict[str, Any]], required_fields: list[str], allow_none_with_default: bool = False) -> None ``` ### `preprocess_schema` ```python theme={null} preprocess_schema(schema: ObjectSchema, allow_none_with_default: bool = False) -> ObjectSchema ``` ## Classes ### `CircularSchemaRefError` ### `ValidationError` # services Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-services # `prefect.utilities.services` ## Functions ### `critical_service_loop` ```python theme={null} critical_service_loop(workload: Callable[..., Coroutine[Any, Any, Any]], interval: float, memory: int = 10, consecutive: int = 3, backoff: int = 1, printer: Callable[..., None] = print, run_once: bool = False, jitter_range: Optional[float] = None) -> None ``` Runs the given `workload` function on the specified `interval`, while being forgiving of intermittent issues like temporary HTTP errors. If more than a certain number of `consecutive` errors occur, print a summary of up to `memory` recent exceptions to `printer`, then begin backoff. The loop will exit after reaching the consecutive error limit `backoff` times. On each backoff, the interval will be doubled. On a successful loop, the backoff will be reset. 
**Args:** * `workload`: the function to call * `interval`: how frequently to call it * `memory`: how many recent errors to remember * `consecutive`: how many consecutive errors must occur before backoff begins * `backoff`: how many times we should allow consecutive errors before exiting * `printer`: a `print`-like function where errors will be reported * `run_once`: if set, the loop will only run once then return * `jitter_range`: if set, the interval will be a random variable (rv) drawn from a clamped Poisson distribution where lambda = interval and the rv is bounded by `interval * (1 - range) < rv < interval * (1 + range)` ### `start_client_metrics_server` ```python theme={null} start_client_metrics_server() -> None ``` Start the process-wide Prometheus metrics server for client metrics (if enabled with `PREFECT_CLIENT_METRICS_ENABLED`) on the port `PREFECT_CLIENT_METRICS_PORT`. ### `stop_client_metrics_server` ```python theme={null} stop_client_metrics_server() -> None ``` Stop the process-wide Prometheus metrics server for client metrics, if it has previously been started. # slugify Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-slugify # `prefect.utilities.slugify` *This module is empty or contains only private/internal implementations.* # templating Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-templating # `prefect.utilities.templating` ## Functions ### `determine_placeholder_type` ```python theme={null} determine_placeholder_type(name: str) -> PlaceholderType ``` Determines the type of a placeholder based on its name. **Args:** * `name`: The name of the placeholder **Returns:** * The type of the placeholder ### `find_placeholders` ```python theme={null} find_placeholders(template: T) -> set[Placeholder] ``` Finds all placeholders in a template.
**Args:** * `template`: template to discover placeholders in **Returns:** * A set of all placeholders in the template ### `apply_values` ```python theme={null} apply_values(template: T, values: dict[str, Any], remove_notset: bool = True, warn_on_notset: bool = False, skip_prefixes: Optional[list[str]] = None) -> Union[T, type[NotSet]] ``` Replaces placeholders in a template with values from a supplied dictionary. Will recursively replace placeholders in dictionaries and lists. If a value has no placeholders, it will be returned unchanged. If a template contains only a single placeholder, the placeholder will be fully replaced with the value. If a template contains text before or after a placeholder or there are multiple placeholders, the placeholders will be replaced with the corresponding variable values. If a template contains a placeholder that is not in `values`, NotSet will be returned to signify that no placeholder replacement occurred. If `template` is a dictionary that contains a key with a value of NotSet, the key will be removed in the return value unless `remove_notset` is set to False. **Args:** * `template`: template to discover and replace values in * `values`: The values to apply to placeholders in the template * `remove_notset`: If True, remove keys with an unset value * `warn_on_notset`: If True, warn when a placeholder is not found in `values` * `skip_prefixes`: If provided, placeholders whose names start with any of these prefixes will be left untouched in the template. **Returns:** * The template with the values applied ### `resolve_block_document_references` ```python theme={null} resolve_block_document_references(template: T, client: Optional['PrefectClient'] = None, value_transformer: Optional[Callable[[str, Any], Any]] = None) -> Union[T, dict[str, Any]] ``` Resolve block document references in a template by replacing each reference with its value or the return value of the transformer function if provided. 
Recursively searches for block document references in dictionaries and lists. Identifies block document references as dictionaries with the following structure: ``` { "$ref": { "block_document_id": <block_document_id> } } ``` where `<block_document_id>` is the ID of the block document to resolve. Once the block document is retrieved from the API, the data of the block document is used to replace the reference. ## Accessing Values: To access different values in a block document, use dot notation combined with the block document's prefix, slug, and block name. For a block document with the structure: ```json theme={null} { "value": { "key": { "nested-key": "nested-value" }, "list": [ {"list-key": "list-value"}, 1, 2 ] } } ``` examples of value resolution are as follows: 1. Accessing a nested dictionary: Format: `prefect.blocks.<block_type_slug>.<block_document_name>.value.key` Example: Returns `{"nested-key": "nested-value"}` 2. Accessing a specific nested value: Format: `prefect.blocks.<block_type_slug>.<block_document_name>.value.key.nested-key` Example: Returns `"nested-value"` 3. Accessing a list element's key-value: Format: `prefect.blocks.<block_type_slug>.<block_document_name>.value.list[0].list-key` Example: Returns `"list-value"` ## Default Resolution for System Blocks: For system blocks, which only contain a `value` attribute, this attribute is resolved by default. **Args:** * `template`: The template to resolve block documents in * `value_transformer`: A function that takes the block placeholder and the block value and returns replacement text for the template **Returns:** * The template with block documents resolved ### `resolve_variables` ```python theme={null} resolve_variables(template: T, client: Optional['PrefectClient'] = None) -> T ``` Resolve variables in a template by replacing each variable placeholder with the value of the variable. Recursively searches for variable placeholders in dictionaries and lists. Strips variable placeholders if the variable is not found.
**Args:** * `template`: The template to resolve variables in **Returns:** * The template with variables resolved ## Classes ### `PlaceholderType` ### `Placeholder` # text Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-text # `prefect.utilities.text` ## Functions ### `truncated_to` ```python theme={null} truncated_to(length: int, value: Optional[str]) -> str ``` ### `fuzzy_match_string` ```python theme={null} fuzzy_match_string(word: str, possibilities: Iterable[str]) -> Optional[str] ``` # timeout Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-timeout # `prefect.utilities.timeout` ## Functions ### `fail_if_not_timeout_error` ```python theme={null} fail_if_not_timeout_error(timeout_exc_type: type[Exception]) -> None ``` ### `timeout_async` ```python theme={null} timeout_async(seconds: Optional[float] = None, timeout_exc_type: type[TimeoutError] = TimeoutError) ``` ### `timeout` ```python theme={null} timeout(seconds: Optional[float] = None, timeout_exc_type: type[TimeoutError] = TimeoutError) ``` # urls Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-urls # `prefect.utilities.urls` ## Functions ### `validate_restricted_url` ```python theme={null} validate_restricted_url(url: str) -> None ``` Validate that the provided URL is safe for outbound requests. This prevents attacks like SSRF (Server Side Request Forgery), where an attacker can make requests to internal services (like the GCP metadata service, localhost addresses, or in-cluster Kubernetes services). **Args:** * `url`: The URL to validate. **Raises:** * `ValueError`: If the URL is a restricted URL.
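The kind of check `validate_restricted_url` performs can be sketched with the standard library's `ipaddress` and `urllib.parse`. This is an illustration, not Prefect's implementation; production SSRF defenses must also account for DNS rebinding, redirects, and multi-address resolution:

```python
import ipaddress
import socket
from urllib.parse import urlparse


def validate_restricted_url(url: str) -> None:
    # Reject URLs whose host resolves to a private, loopback, or link-local
    # address (e.g. the cloud metadata service at 169.254.169.254)
    host = urlparse(url).hostname
    if host is None:
        raise ValueError(f"{url!r} has no host component")
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError) as exc:
        raise ValueError(f"could not resolve host {host!r}") from exc
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        raise ValueError(f"{url!r} points at a restricted address")
```

Note that `is_private` already covers the IPv4 loopback and link-local ranges; the explicit checks simply make the intent obvious.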
### `convert_class_to_name` ```python theme={null} convert_class_to_name(obj: Any) -> str ``` Convert CamelCase class name to dash-separated lowercase name ### `url_for` ```python theme={null} url_for(obj: Union['PrefectFuture[Any]', 'Block', 'Variable', 'Automation', 'Resource', 'ReceivedEvent', BaseModel, str], obj_id: Optional[Union[str, UUID]] = None, url_type: URLType = 'ui', default_base_url: Optional[str] = None, **additional_format_kwargs: Any) -> Optional[str] ``` Returns the URL for a Prefect object. Pass in a supported object directly or provide an object name and ID. **Args:** * `obj`: A Prefect object to get the URL for, or its URL name and ID. * `obj_id`: The UUID of the object. * `url_type`: Whether to return the URL for the UI (default) or API. * `default_base_url`: The default base URL to use if no URL is configured. * `additional_format_kwargs`: Additional keyword arguments to pass to the URL format. **Returns:** * Optional\[str]: The URL for the given object or None if the object is not supported. 
**Examples:**

```python theme={null}
url_for(my_flow_run)
url_for(obj=my_flow_run)
url_for("flow-run", obj_id="123e4567-e89b-12d3-a456-426614174000")
```

# visualization Source: https://docs.prefect.io/v3/api-ref/python/prefect-utilities-visualization # `prefect.utilities.visualization` Utilities for working with `Flow.visualize()` ## Functions ### `get_task_viz_tracker` ```python theme={null} get_task_viz_tracker() -> Optional['TaskVizTracker'] ``` ### `track_viz_task` ```python theme={null} track_viz_task(is_async: bool, task_name: str, parameters: dict[str, Any], viz_return_value: Optional[Any] = None) -> Union[Coroutine[Any, Any, Any], Any] ``` Return a result if sync, otherwise return a coroutine that returns the result. ### `build_task_dependencies` ```python theme={null} build_task_dependencies(task_run_tracker: TaskVizTracker) -> graphviz.Digraph ``` Constructs a Graphviz directed graph object that represents the dependencies between tasks in the given TaskVizTracker. **Args:** * `task_run_tracker`: An object containing tasks and their dependencies. **Returns:** * A `graphviz.Digraph` depicting the relationships and dependencies between tasks. **Raises:** * `GraphvizImportError`: If there's an ImportError related to graphviz. * `FlowVisualizationError`: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a `viz_return_value`. ### `visualize_task_dependencies` ```python theme={null} visualize_task_dependencies(graph: graphviz.Digraph, flow_run_name: str) -> None ``` Renders and displays a Graphviz directed graph representing task dependencies. The graph is rendered in PNG format and saved with the name specified by `flow_run_name`. After rendering, the visualization is opened and displayed. **Args:** * `graph`: The directed graph object to visualize. * `flow_run_name`: The name to use when saving the rendered graph image.
**Raises:** * `GraphvizExecutableNotFoundError`: If Graphviz isn't found on the system. * `FlowVisualizationError`: If there's any other error during the visualization process or if return values of tasks are directly accessed without specifying a `viz_return_value`. ## Classes ### `FlowVisualizationError` ### `VisualizationUnsupportedError` ### `TaskVizTrackerState` ### `GraphvizImportError` ### `GraphvizExecutableNotFoundError` ### `VizTask` ### `TaskVizTracker` **Methods:** #### `add_task` ```python theme={null} add_task(self, task: VizTask) -> None ``` #### `link_viz_return_value_to_viz_task` ```python theme={null} link_viz_return_value_to_viz_task(self, viz_return_value: Any, viz_task: VizTask) -> None ``` We cannot track booleans, Ellipsis, None, NotImplemented, or the integers from -5 to 256 because they are singletons. # variables Source: https://docs.prefect.io/v3/api-ref/python/prefect-variables # `prefect.variables` ## Classes ### `Variable` Variables are named, mutable JSON values that can be shared across tasks and flows. **Args:** * `name`: A string identifying the variable. * `value`: The JSON-compatible value of the variable. * `tags`: An optional list of strings to associate with the variable. **Methods:** #### `aget` ```python theme={null} aget(cls, name: str) -> T | None ``` #### `aget` ```python theme={null} aget(cls, name: str, default: T) -> T ``` #### `aget` ```python theme={null} aget(cls, name: str, default: StrictVariableValue | None = None) -> StrictVariableValue | None ``` Asynchronously get a variable's value by name. If the variable does not exist, return the default value. **Args:** * `name`: The name of the variable value to get. * `default`: The default value to return if the variable does not exist. #### `aset` ```python theme={null} aset(cls, name: str, value: StrictVariableValue, tags: Optional[list[str]] = None, overwrite: bool = False) -> 'Variable' ``` Asynchronously sets a new variable.
If one exists with the same name, `overwrite=True` must be passed. Returns the newly set variable object. **Args:** * `name`: The name of the variable to set. * `value`: The value of the variable to set. * `tags`: An optional list of strings to associate with the variable. * `overwrite`: Whether to overwrite the variable if it already exists. #### `aunset` ```python theme={null} aunset(cls, name: str) -> bool ``` Asynchronously unset a variable by name. **Args:** * `name`: The name of the variable to unset. Returns `True` if the variable was deleted, `False` if the variable did not exist. #### `get` ```python theme={null} get(cls, name: str) -> T | None ``` #### `get` ```python theme={null} get(cls, name: str, default: T) -> T ``` #### `get` ```python theme={null} get(cls, name: str, default: StrictVariableValue | None = None) -> StrictVariableValue | None ``` Get a variable's value by name. If the variable does not exist, return the default value. **Args:** * `name`: The name of the variable value to get. * `default`: The default value to return if the variable does not exist. #### `set` ```python theme={null} set(cls, name: str, value: StrictVariableValue, tags: Optional[list[str]] = None, overwrite: bool = False) -> 'Variable' ``` Sets a new variable. If one exists with the same name, `overwrite=True` must be passed. Returns the newly set variable object. **Args:** * `name`: The name of the variable to set. * `value`: The value of the variable to set. * `tags`: An optional list of strings to associate with the variable. * `overwrite`: Whether to overwrite the variable if it already exists. #### `unset` ```python theme={null} unset(cls, name: str) -> bool ``` Unset a variable by name. **Args:** * `name`: The name of the variable to unset. Returns `True` if the variable was deleted, `False` if the variable did not exist.
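The `get`/`set`/`unset` semantics described above can be illustrated with an in-memory stand-in. The real `Variable` persists values via the Prefect API; this sketch only mirrors the documented behavior (overwrite protection, default fallback, boolean unset result):

```python
from typing import Any

_MISSING = object()  # sentinel so a stored value of None is still "present"


class InMemoryVariable:
    """Toy stand-in mirroring the documented Variable semantics."""

    _store: dict[str, Any] = {}

    @classmethod
    def set(cls, name: str, value: Any, overwrite: bool = False) -> Any:
        # Setting an existing name requires overwrite=True
        if name in cls._store and not overwrite:
            raise ValueError(f"Variable {name!r} already exists; pass overwrite=True")
        cls._store[name] = value
        return value

    @classmethod
    def get(cls, name: str, default: Any = None) -> Any:
        # A missing variable returns the default instead of raising
        return cls._store.get(name, default)

    @classmethod
    def unset(cls, name: str) -> bool:
        # True if the variable existed and was deleted, False otherwise
        return cls._store.pop(name, _MISSING) is not _MISSING
```

The sentinel in `unset` is a small but deliberate choice: `pop(name, None)` would misreport deletion of a variable whose stored value is `None`.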
# __init__ Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-__init__ # `prefect.workers` *This module is empty or contains only private/internal implementations.* # base Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-base # `prefect.workers.base` ## Classes ### `BaseJobConfiguration` **Methods:** #### `from_template_and_values` ```python theme={null} from_template_and_values(cls, base_job_template: dict[str, Any], values: dict[str, Any], client: 'PrefectClient | None' = None) ``` Creates a valid worker configuration object from the provided base configuration and overrides. Important: this method expects that the base\_job\_template was already validated server-side. #### `is_using_a_runner` ```python theme={null} is_using_a_runner(self) -> bool ``` #### `json_template` ```python theme={null} json_template(cls) -> dict[str, Any] ``` Returns a dict with job configuration as keys and the corresponding templates as values. Defaults to using the job configuration parameter name as the template variable name. e.g. ```python theme={null} { key1: '{{ key1 }}', # default variable template key2: '{{ template2 }}', # `template2` specifically provided as the template } ``` #### `prepare_for_flow_run` ```python theme={null} prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None, worker_id: 'UUID | None' = None) -> None ``` Prepare the job configuration for a flow run. This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run. **Args:** * `flow_run`: The flow run to be executed. * `deployment`: The deployment that the flow run is associated with. * `flow`: The flow that the flow run is associated with. * `work_pool`: The work pool that the flow run is running in.
* `worker_name`: The name of the worker that is submitting the flow run. * `worker_id`: The backend ID of the worker that is submitting the flow run. ### `BaseVariables` **Methods:** #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: Type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead? ### `BaseWorkerResult` ### `BaseWorker` **Methods:** #### `client` ```python theme={null} client(self) -> PrefectClient ``` #### `get_all_available_worker_types` ```python theme={null} get_all_available_worker_types() -> list[str] ``` Returns all worker types available in the local registry. #### `get_and_submit_flow_runs` ```python theme={null} get_and_submit_flow_runs(self) -> list['FlowRun'] ``` #### `get_default_base_job_template` ```python theme={null} get_default_base_job_template(cls) -> dict[str, Any] ``` #### `get_description` ```python theme={null} get_description(cls) -> str ``` #### `get_documentation_url` ```python theme={null} get_documentation_url(cls) -> str ``` #### `get_flow_run_logger` ```python theme={null} get_flow_run_logger(self, flow_run: 'FlowRun') -> PrefectLogAdapter ``` #### `get_logo_url` ```python theme={null} get_logo_url(cls) -> str ``` #### `get_name_slug` ```python theme={null} get_name_slug(self) -> str ``` #### `get_status` ```python theme={null} get_status(self) -> dict[str, Any] ``` Retrieves the status of the current worker including its name, current worker pool, the work pool queues it is polling, and its local settings. #### `get_worker_class_from_type` ```python theme={null} get_worker_class_from_type(type: str) -> Optional[Type['BaseWorker[Any, Any, Any]']] ``` Returns the worker class for a given worker type. If the worker type is not recognized, returns None. 
#### `is_worker_still_polling` ```python theme={null} is_worker_still_polling(self, query_interval_seconds: float) -> bool ``` This method is invoked by a webserver healthcheck handler and returns a boolean indicating if the worker has recorded a scheduled flow run poll within a variable amount of time. The `query_interval_seconds` is the same value that is used by the loop services; the worker is considered healthy if `_last_polled_time` falls within that interval x 30 (so 10s -> 5m). The instance property `self._last_polled_time` is currently set/updated in `get_and_submit_flow_runs()`. #### `kill_infrastructure` ```python theme={null} kill_infrastructure(self, infrastructure_pid: str, configuration: C, grace_seconds: int = 30) -> None ``` Kill infrastructure for a flow run. Override this method in subclasses to implement infrastructure-specific termination logic. **Args:** * `infrastructure_pid`: The infrastructure identifier from the flow run. * `configuration`: The job configuration for connecting to infrastructure. * `grace_seconds`: Time to allow for graceful shutdown before force killing. **Raises:** * `NotImplementedError`: If the worker doesn't support killing infrastructure. * `InfrastructureNotFound`: If the infrastructure doesn't exist. * `InfrastructureNotAvailable`: If the infrastructure can't be killed by this worker. #### `limiter` ```python theme={null} limiter(self) -> anyio.CapacityLimiter ``` #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: C, task_status: Optional[anyio.abc.TaskStatus[int]] = None) -> R ``` Runs a given flow run on the current worker. #### `setup` ```python theme={null} setup(self) -> None ``` Prepares the worker to run. #### `start` ```python theme={null} start(self, run_once: bool = False, with_healthcheck: bool = False, printer: Callable[..., None] = print) -> None ``` Starts the worker and runs the main worker loops.
By default, the worker will run loops to poll for scheduled/cancelled flow runs and sync with the Prefect API server. If `run_once` is set, the worker will only run each loop once and then return. If `with_healthcheck` is set, the worker will start a healthcheck server which can be used to determine if the worker is still polling for flow runs and restart the worker if necessary. **Args:** * `run_once`: If set, the worker will only run each loop once then return. * `with_healthcheck`: If set, the worker will start a healthcheck server. * `printer`: A `print`-like function where logs will be reported. #### `submit` ```python theme={null} submit(self, flow: 'Flow[..., FR]', parameters: dict[str, Any] | None = None, job_variables: dict[str, Any] | None = None, flow_run: 'FlowRun | None' = None) -> 'PrefectFlowRunFuture[FR]' ``` EXPERIMENTAL: The interface for this method is subject to change. Submits a flow to run via the worker. **Args:** * `flow`: The flow to submit * `parameters`: The parameters to pass to the flow * `job_variables`: Job variables for infrastructure configuration * `flow_run`: Optional existing flow run to retry (reuses ID instead of creating new) **Returns:** * A flow run future #### `sync_with_backend` ```python theme={null} sync_with_backend(self) -> None ``` Updates the worker's local information about its current work pool and queues. Sends a worker heartbeat to the API. #### `teardown` ```python theme={null} teardown(self, *exc_info: Any) -> None ``` Cleans up resources after the worker is stopped. #### `work_pool` ```python theme={null} work_pool(self) -> WorkPool ``` # block Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-block # `prefect.workers.block` 2024-06-27: This surfaces an actionable error message for moved or removed objects in the Prefect 3.0 upgrade.
# cloud Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-cloud # `prefect.workers.cloud` 2024-06-27: This surfaces an actionable error message for moved or removed objects in the Prefect 3.0 upgrade. # process Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-process # `prefect.workers.process` Module containing the Process worker used for executing flow runs as subprocesses. To start a Process worker, run the following command: ```bash theme={null} prefect worker start --pool 'my-work-pool' --type process ``` Replace `my-work-pool` with the name of the work pool you want the worker to poll for flow runs. For more information about work pools and workers, check out the [Prefect docs](https://docs.prefect.io/v3/concepts/work-pools/). ## Classes ### `ProcessJobConfiguration` **Methods:** #### `from_template_and_values` ```python theme={null} from_template_and_values(cls, base_job_template: dict[str, Any], values: dict[str, Any], client: 'PrefectClient | None' = None) ``` Creates a valid worker configuration object from the provided base configuration and overrides. Important: this method expects that the base\_job\_template was already validated server-side. #### `is_using_a_runner` ```python theme={null} is_using_a_runner(self) -> bool ``` #### `json_template` ```python theme={null} json_template(cls) -> dict[str, Any] ``` Returns a dict with job configuration as keys and the corresponding templates as values. Defaults to using the job configuration parameter name as the template variable name. e.g.
```python theme={null}
{
    key1: '{{ key1 }}',       # default variable template
    key2: '{{ template2 }}',  # `template2` specifically provided as template
}
```

#### `prepare_for_flow_run` ```python theme={null} prepare_for_flow_run(self, flow_run: 'FlowRun', deployment: 'DeploymentResponse | None' = None, flow: 'APIFlow | None' = None, work_pool: 'WorkPool | None' = None, worker_name: str | None = None, worker_id: 'UUID | None' = None) -> None ``` Prepare the job configuration for a flow run. This method is called by the worker before starting a flow run. It should be used to set any configuration values that are dependent on the flow run. **Args:** * `flow_run`: The flow run to be executed. * `deployment`: The deployment that the flow run is associated with. * `flow`: The flow that the flow run is associated with. * `work_pool`: The work pool that the flow run is running in. * `worker_name`: The name of the worker that is submitting the flow run. * `worker_id`: The backend ID of the worker that is submitting the flow run. #### `validate_working_dir` ```python theme={null} validate_working_dir(cls, v: Path | str | None) -> Path | None ``` ### `ProcessVariables` **Methods:** #### `model_json_schema` ```python theme={null} model_json_schema(cls, by_alias: bool = True, ref_template: str = '#/definitions/{model}', schema_generator: Type[GenerateJsonSchema] = GenerateJsonSchema, mode: Literal['validation', 'serialization'] = 'validation') -> dict[str, Any] ``` TODO: stop overriding this method - use GenerateSchema in ConfigDict instead?
### `ProcessWorkerResult` Contains information about the final state of a completed process. ### `ProcessWorker` **Methods:** #### `run` ```python theme={null} run(self, flow_run: 'FlowRun', configuration: ProcessJobConfiguration, task_status: Optional[anyio.abc.TaskStatus[int]] = None) -> ProcessWorkerResult ``` #### `start` ```python theme={null} start(self, run_once: bool = False, with_healthcheck: bool = False, printer: Callable[..., None] = print) -> None ``` Starts the worker and runs the main worker loops. By default, the worker will run loops to poll for scheduled/cancelled flow runs and sync with the Prefect API server. If `run_once` is set, the worker will only run each loop once and then return. If `with_healthcheck` is set, the worker will start a healthcheck server which can be used to determine if the worker is still polling for flow runs and restart the worker if necessary. **Args:** * `run_once`: If set, the worker will only run each loop once and then return. * `with_healthcheck`: If set, the worker will start a healthcheck server. * `printer`: A `print`-like function where logs will be reported. # server Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-server # `prefect.workers.server` ## Functions ### `build_healthcheck_server` ```python theme={null} build_healthcheck_server(worker: BaseWorker[Any, Any, Any], query_interval_seconds: float, log_level: str = 'error') -> uvicorn.Server ``` Build a healthcheck FastAPI server for a worker. **Args:** * `worker`: the worker whose health we will check * `query_interval_seconds`: the interval at which the worker is expected to poll for work * `log_level`: the log level to use for the server ### `start_healthcheck_server` ```python theme={null} start_healthcheck_server(worker: BaseWorker[Any, Any, Any], query_interval_seconds: float, log_level: str = 'error') -> None ``` Run a healthcheck FastAPI server for a worker.
**Args:** * `worker`: the worker whose health we will check * `log_level`: the log level to use for the server # utilities Source: https://docs.prefect.io/v3/api-ref/python/prefect-workers-utilities # `prefect.workers.utilities` ## Functions ### `get_available_work_pool_types` ```python theme={null} get_available_work_pool_types() -> List[str] ``` ### `get_default_base_job_template_for_infrastructure_type` ```python theme={null} get_default_base_job_template_for_infrastructure_type(infra_type: str) -> Optional[Dict[str, Any]] ``` # Cloud API Overview Source: https://docs.prefect.io/v3/api-ref/rest-api/cloud/index The Prefect Cloud API enables you to interact programmatically with Prefect Cloud. The Prefect Cloud API is organized around REST. Explore the interactive [Prefect Cloud REST API reference](https://app.prefect.cloud/api/docs). # REST API overview Source: https://docs.prefect.io/v3/api-ref/rest-api/index Prefect REST API for interacting with Prefect Cloud & self-hosted Prefect server. The Prefect API is organized around REST. It is used for communicating data from clients to a self-hosted Prefect server instance so that orchestration can be performed. This API is consumed by clients such as the Prefect Python SDK or the server dashboard. Prefect Cloud and self-hosted Prefect server each provide a REST API. * Prefect Cloud: * [Interactive Prefect Cloud REST API documentation](https://app.prefect.cloud/api/docs) * [Finding your Prefect Cloud details](#finding-your-prefect-cloud-details) * Self-hosted Prefect server: * Interactive REST API documentation for self-hosted Prefect server is available under **Server API** on the sidebar navigation or at `http://localhost:4200/docs` or the `/docs` endpoint of the [PREFECT\_API\_URL](/v3/develop/settings-and-profiles/) you have configured to access the server. You must have the server running with `prefect server start` to access the interactive documentation. 
## Interact with the REST API You can interact with the Prefect REST API in several ways: * Create an instance of [`PrefectClient`](https://reference.prefect.io/prefect/client/orchestration/#prefect.client.orchestration.PrefectClient), which is part of the [Prefect Python SDK](/v3/api-ref/python/). * Use your favorite Python HTTP library such as [Requests](https://requests.readthedocs.io/en/latest/) or [HTTPX](https://www.python-httpx.org/) * Use an HTTP library in your language of choice * Use [curl](https://curl.se/) from the command line ### PrefectClient with self-hosted Prefect server This example uses `PrefectClient` with self-hosted Prefect server:

```python theme={null}
import asyncio

from prefect.client.orchestration import get_client


async def get_flows():
    async with get_client() as client:
        return await client.read_flows(limit=5)


if __name__ == "__main__":
    flows = asyncio.run(get_flows())
    for flow in flows:
        print(flow.name, flow.id)
```

Output:

```bash theme={null}
cat-facts 58ed68b1-0201-4f37-adef-0ea24bd2a022
dog-facts e7c0403d-44e7-45cf-a6c8-79117b7f3766
sloth-facts 771c0574-f5bf-4f59-a69d-3be3e061a62d
capybara-facts fbadaf8b-584f-48b9-b092-07d351edd424
lemur-facts 53f710e7-3b0f-4b2f-ab6b-44934111818c
```

### Requests with Prefect This example uses the Requests library with Prefect Cloud to return the five newest artifacts.
```python theme={null}
import requests

PREFECT_API_URL = "https://api.prefect.cloud/api/accounts/abc-my-cloud-account-id-is-here/workspaces/123-my-workspace-id-is-here"
PREFECT_API_KEY = "123abc_my_api_key_goes_here"

data = {
    "sort": "CREATED_DESC",
    "limit": 5,
    "artifacts": {
        "key": {
            "exists_": True
        }
    }
}

headers = {"Authorization": f"Bearer {PREFECT_API_KEY}"}
endpoint = f"{PREFECT_API_URL}/artifacts/filter"

response = requests.post(endpoint, headers=headers, json=data)
assert response.status_code == 200
for artifact in response.json():
    print(artifact)
```

### curl with Prefect Cloud This example uses curl with Prefect Cloud to create a flow run:

```bash theme={null}
ACCOUNT_ID="abc-my-cloud-account-id-goes-here"
WORKSPACE_ID="123-my-workspace-id-goes-here"
PREFECT_API_URL="https://api.prefect.cloud/api/accounts/$ACCOUNT_ID/workspaces/$WORKSPACE_ID"
PREFECT_API_KEY="123abc_my_api_key_goes_here"
DEPLOYMENT_ID="my_deployment_id"

curl --location --request POST "$PREFECT_API_URL/deployments/$DEPLOYMENT_ID/create_flow_run" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer $PREFECT_API_KEY" \
  --header "X-PREFECT-API-VERSION: 0.8.4" \
  --data-raw "{}"
```

Note that in this example `--data-raw "{}"` is required and is where you can specify other aspects of the flow run such as the state. Windows users should substitute `^` for `\` in multi-line commands. ## Finding your Prefect Cloud details When working with the Prefect Cloud REST API you will need your Account ID and often the Workspace ID for the [workspace](/v3/manage/cloud/workspaces/) you want to interact with. You can find both IDs for a [Prefect profile](/v3/develop/settings-and-profiles/) in the CLI with `prefect profile inspect my_profile`.
This command will also display your [Prefect API key](/v3/how-to-guides/cloud/manage-users/api-keys), as shown below:

```bash theme={null}
PREFECT_API_URL='https://api.prefect.cloud/api/accounts/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here'
PREFECT_API_KEY='123abc_my_api_key_is_here'
```

Alternatively, view your Account ID and Workspace ID in your browser URL. For example: `https://app.prefect.cloud/account/abc-my-account-id-is-here/workspaces/123-my-workspace-id-is-here`. ## REST guidelines The REST APIs adhere to the following guidelines: * Collection names are pluralized (for example, `/flows` or `/runs`). * We indicate variable placeholders with colons: `GET /flows/:id`. * We use snake case for route names: `GET /task_runs`. * We avoid nested resources unless there is no possibility of accessing the child resource outside the parent context. For example, we query `/task_runs` with a flow run filter instead of accessing `/flow_runs/:id/task_runs`. * The API is hosted with an `/api/:version` prefix that (optionally) allows versioning in the future. By convention, we treat that as part of the base URL and do not include that in API examples. * Filtering, sorting, and pagination parameters are provided in the request body of `POST` requests where applicable. * Pagination parameters are `limit` and `offset`. * Sorting is specified with a single `sort` parameter. * See more information on [filtering](#filtering) below. ### HTTP verbs * `GET`, `PUT` and `DELETE` requests are always idempotent. `POST` and `PATCH` are not guaranteed to be idempotent. * `GET` requests cannot receive information from the request body. * `POST` requests can receive information from the request body. * `POST /collection` creates a new member of the collection. * `GET /collection` lists all members of the collection. * `GET /collection/:id` gets a specific member of the collection by ID. * `DELETE /collection/:id` deletes a specific member of the collection.
* `PUT /collection/:id` creates or replaces a specific member of the collection. * `PATCH /collection/:id` partially updates a specific member of the collection. * `POST /collection/action` is how we implement non-CRUD actions. For example, to set a flow run's state, we use `POST /flow_runs/:id/set_state`. * `POST /collection/action` may also be used for read-only queries. This is to allow us to send complex arguments as body arguments (which often cannot be done via `GET`). Examples include `POST /flow_runs/filter`, `POST /flow_runs/count`, and `POST /flow_runs/history`. ## Filter results Objects can be filtered by providing filter criteria in the body of a `POST` request. When multiple criteria are specified, logical AND will be applied to the criteria. Filter criteria are structured as follows:

```json theme={null}
{
  "objects": {
    "object_field": {
      "field_operator_": <value>
    }
  }
}
```

In this example, `objects` is the name of the collection to filter over (for example, `flows`). The collection can be either the object being queried for (`flows` for `POST /flows/filter`) or a related object (`flow_runs` for `POST /flows/filter`). `object_field` is the name of the field over which to filter (`name` for `flows`). Note that some objects may have nested object fields, such as `{flow_run: {state: {type: {any_: []}}}}`. `field_operator_` is the operator to apply to a field when filtering. Common examples include: * `any_`: return objects where this field matches any of the following values. * `is_null_`: return objects where this field is or is not null. * `eq_`: return objects where this field is equal to the following value. * `all_`: return objects where this field matches all of the following values. * `before_`: return objects where this datetime field is less than or equal to the following value. * `after_`: return objects where this datetime field is greater than or equal to the following value.
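As an illustrative sketch, the datetime operators can be combined with the `sort`, `limit`, and `offset` parameters from the REST guidelines above into a single request body. The specific field and sort names here (`start_time`, `START_TIME_DESC` for a hypothetical `POST /flow_runs/filter` call) are assumptions for illustration; consult the interactive API reference for the exact schema:

```python theme={null}
import json
from datetime import datetime, timezone

# Sketch of a filter body for POST /flow_runs/filter: flow runs that
# started during 2024, newest first, second page of 50 results.
filter_body = {
    "flow_runs": {
        "start_time": {
            "after_": datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat(),
            "before_": datetime(2025, 1, 1, tzinfo=timezone.utc).isoformat(),
        }
    },
    "sort": "START_TIME_DESC",  # single sort parameter
    "limit": 50,                # pagination: page size
    "offset": 50,               # pagination: skip the first page
}

# The body is plain JSON, ready to send with any HTTP client.
print(json.dumps(filter_body, indent=2))
```

Because all criteria in one body are combined with logical AND, this selects only flow runs whose `start_time` falls inside the window.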
For example, to query for flows with the tag `"database"` and failed flow runs, `POST /flows/filter` with the following request body:

```json theme={null}
{
  "flows": {
    "tags": {
      "all_": ["database"]
    }
  },
  "flow_runs": {
    "state": {
      "type": {
        "any_": ["FAILED"]
      }
    }
  }
}
```

## OpenAPI The Prefect REST API can be fully described with an OpenAPI 3.0 compliant document. [OpenAPI](https://swagger.io/docs/specification/about/) is a standard specification for describing REST APIs. To generate self-hosted Prefect server's complete OpenAPI document, run the following commands in an interactive Python session:

```python theme={null}
from prefect.server.api.server import create_app

app = create_app()
openapi_doc = app.openapi()
```

This document allows you to generate your own API client, explore the API using an API inspection tool, or write tests to ensure API compliance. # Read Settings Source: https://docs.prefect.io/v3/api-ref/rest-api/server/admin/read-settings get /admin/settings Get the current Prefect REST API settings. Secret setting values will be obfuscated. # Read Version Source: https://docs.prefect.io/v3/api-ref/rest-api/server/admin/read-version get /admin/version Returns the Prefect version number # Count Artifacts Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/count-artifacts post /artifacts/count Count artifacts from the database. # Count Latest Artifacts Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/count-latest-artifacts post /artifacts/latest/count Count artifacts from the database. # Create Artifact Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/create-artifact post /artifacts/ Create an artifact. For more information, see https://docs.prefect.io/v3/concepts/artifacts. # Delete Artifact Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/delete-artifact delete /artifacts/{id} Delete an artifact from the database.
# Read Artifact Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/read-artifact get /artifacts/{id} Retrieve an artifact from the database. # Read Artifacts Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/read-artifacts post /artifacts/filter Retrieve artifacts from the database. # Read Latest Artifact Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/read-latest-artifact get /artifacts/{key}/latest Retrieve the latest artifact from the artifact table. # Read Latest Artifacts Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/read-latest-artifacts post /artifacts/latest/filter Retrieve artifacts from the database. # Update Artifact Source: https://docs.prefect.io/v3/api-ref/rest-api/server/artifacts/update-artifact patch /artifacts/{id} Update an artifact in the database. # Count Automations Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/count-automations post /automations/count # Create Automation Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/create-automation post /automations/ Create an automation. For more information, see https://docs.prefect.io/v3/concepts/automations. 
# Delete Automation Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/delete-automation delete /automations/{id} # Delete Automations Owned By Resource Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/delete-automations-owned-by-resource delete /automations/owned-by/{resource_id} # Patch Automation Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/patch-automation patch /automations/{id} # Read Automation Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/read-automation get /automations/{id} # Read Automations Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/read-automations post /automations/filter # Read Automations Related To Resource Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/read-automations-related-to-resource get /automations/related-to/{resource_id} # Update Automation Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/update-automation put /automations/{id} # Validate Template Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/validate-template post /templates/validate # Validate Template Source: https://docs.prefect.io/v3/api-ref/rest-api/server/automations/validate-template-1 post /automations/templates/validate # Read Available Block Capabilities Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-capabilities/read-available-block-capabilities get /block_capabilities/ Get available block capabilities. For more information, see https://docs.prefect.io/v3/concepts/blocks. # Count Block Documents Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-documents/count-block-documents post /block_documents/count Count block documents. # Create Block Document Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-documents/create-block-document post /block_documents/ Create a new block document. 
For more information, see https://docs.prefect.io/v3/concepts/blocks. # Delete Block Document Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-documents/delete-block-document delete /block_documents/{id} # Read Block Document By Id Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-documents/read-block-document-by-id get /block_documents/{id} # Read Block Documents Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-documents/read-block-documents post /block_documents/filter Query for block documents. # Update Block Document Data Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-documents/update-block-document-data patch /block_documents/{id} # Create Block Schema Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-schemas/create-block-schema post /block_schemas/ Create a block schema. For more information, see https://docs.prefect.io/v3/concepts/blocks. # Delete Block Schema Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-schemas/delete-block-schema delete /block_schemas/{id} Delete a block schema by id. # Read Block Schema By Checksum Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-schemas/read-block-schema-by-checksum get /block_schemas/checksum/{checksum} # Read Block Schema By Id Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-schemas/read-block-schema-by-id get /block_schemas/{id} Get a block schema by id. # Read Block Schemas Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-schemas/read-block-schemas post /block_schemas/filter Read all block schemas, optionally filtered by type # Create Block Type Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/create-block-type post /block_types/ Create a new block type. For more information, see https://docs.prefect.io/v3/concepts/blocks. 
# Delete Block Type Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/delete-block-type delete /block_types/{id} # Install System Block Types Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/install-system-block-types post /block_types/install_system_block_types # Read Block Document By Name For Block Type Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-document-by-name-for-block-type get /block_types/slug/{slug}/block_documents/name/{block_document_name} # Read Block Documents For Block Type Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-documents-for-block-type get /block_types/slug/{slug}/block_documents # Read Block Type By Id Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-type-by-id get /block_types/{id} Get a block type by ID. # Read Block Type By Slug Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-type-by-slug get /block_types/slug/{slug} Get a block type by slug. # Read Block Types Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/read-block-types post /block_types/filter Gets all block types. Optionally limit return with limit and offset. # Update Block Type Source: https://docs.prefect.io/v3/api-ref/rest-api/server/block-types/update-block-type patch /block_types/{id} Update a block type. # Read View Content Source: https://docs.prefect.io/v3/api-ref/rest-api/server/collections/read-view-content get /collections/views/{view} Reads the content of a view from the prefect-collection-registry.
# Bulk Decrement Active Slots Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-decrement-active-slots post /v2/concurrency_limits/decrement # Bulk Decrement Active Slots With Lease Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-decrement-active-slots-with-lease post /v2/concurrency_limits/decrement-with-lease # Bulk Increment Active Slots Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-increment-active-slots post /v2/concurrency_limits/increment # Bulk Increment Active Slots With Lease Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/bulk-increment-active-slots-with-lease post /v2/concurrency_limits/increment-with-lease # Create Concurrency Limit V2 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/create-concurrency-limit-v2 post /v2/concurrency_limits/ Create a task run concurrency limit. For more information, see https://docs.prefect.io/v3/how-to-guides/workflows/global-concurrency-limits. 
# Delete Concurrency Limit V2 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/delete-concurrency-limit-v2 delete /v2/concurrency_limits/{id_or_name} # Read All Concurrency Limits V2 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/read-all-concurrency-limits-v2 post /v2/concurrency_limits/filter # Read Concurrency Limit V2 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/read-concurrency-limit-v2 get /v2/concurrency_limits/{id_or_name} # Renew Concurrency Lease Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/renew-concurrency-lease post /v2/concurrency_limits/leases/{lease_id}/renew # Update Concurrency Limit V2 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits-v2/update-concurrency-limit-v2 patch /v2/concurrency_limits/{id_or_name} # Create Concurrency Limit Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/create-concurrency-limit post /concurrency_limits/ Create a task run concurrency limit. For more information, see https://docs.prefect.io/v3/concepts/tag-based-concurrency-limits. # Decrement Concurrency Limits V1 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/decrement-concurrency-limits-v1 post /concurrency_limits/decrement Decrement concurrency limits for the given tags. Finds and revokes the lease for V2 limits or decrements V1 active slots. Returns the list of limits that were decremented. 
# Delete Concurrency Limit Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/delete-concurrency-limit delete /concurrency_limits/{id} # Delete Concurrency Limit By Tag Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/delete-concurrency-limit-by-tag delete /concurrency_limits/tag/{tag} # Increment Concurrency Limits V1 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/increment-concurrency-limits-v1 post /concurrency_limits/increment Increment concurrency limits for the given tags. During migration, this handles both V1 and V2 limits to support mixed states. Post-migration, it only uses V2 with lease-based concurrency. # Read Concurrency Limit Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/read-concurrency-limit get /concurrency_limits/{id} Get a concurrency limit by id. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. # Read Concurrency Limit By Tag Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/read-concurrency-limit-by-tag get /concurrency_limits/tag/{tag} Get a concurrency limit by tag. The `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. # Read Concurrency Limits Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/read-concurrency-limits post /concurrency_limits/filter Query for concurrency limits. For each concurrency limit the `active slots` field contains a list of TaskRun IDs currently using a concurrency slot for the specified tag. 
# Reset Concurrency Limit By Tag Source: https://docs.prefect.io/v3/api-ref/rest-api/server/concurrency-limits/reset-concurrency-limit-by-tag post /concurrency_limits/tag/{tag}/reset # Create Csrf Token Source: https://docs.prefect.io/v3/api-ref/rest-api/server/create-csrf-token get /csrf-token Create or update a CSRF token for a client # Bulk Create Flow Runs From Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/bulk-create-flow-runs-from-deployment post /deployments/{id}/create_flow_run/bulk Create multiple flow runs from a deployment. Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used. If no state is provided, the flow runs will be created in a SCHEDULED state. # Bulk Delete Deployments Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/bulk-delete-deployments post /deployments/bulk_delete Bulk delete deployments matching the specified filter criteria. Returns the IDs of deployments that were deleted. # Count Deployments Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/count-deployments post /deployments/count Count deployments. # Create Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/create-deployment post /deployments/ Creates a new deployment from the provided schema. If a deployment with the same name and flow_id already exists, the deployment is updated. If the deployment has an active schedule, flow runs will be scheduled. When upserting, any scheduled runs from the existing deployment will be deleted. For more information, see https://docs.prefect.io/v3/concepts/deployments. 
# Create Deployment Schedules Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/create-deployment-schedules post /deployments/{id}/schedules # Create Flow Run From Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/create-flow-run-from-deployment post /deployments/{id}/create_flow_run Create a flow run from a deployment. Any parameters not provided will be inferred from the deployment's parameters. If tags are not provided, the deployment's tags will be used. If no state is provided, the flow run will be created in a SCHEDULED state. # Delete Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/delete-deployment delete /deployments/{id} Delete a deployment by id. # Delete Deployment Schedule Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/delete-deployment-schedule delete /deployments/{id}/schedules/{schedule_id} # Get Scheduled Flow Runs For Deployments Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/get-scheduled-flow-runs-for-deployments post /deployments/get_scheduled_flow_runs Get scheduled runs for a set of deployments. Used by a runner to poll for work. # Paginate Deployments Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/paginate-deployments post /deployments/paginate Pagination query for deployments. # Pause Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/pause-deployment post /deployments/{id}/pause_deployment Set a deployment schedule to inactive. Any auto-scheduled runs still in a Scheduled state will be deleted. # Read Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployment get /deployments/{id} Get a deployment by id.
# Read Deployment By Name Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployment-by-name get /deployments/name/{flow_name}/{deployment_name} Get a deployment using the name of the flow and the deployment. # Read Deployment Schedules Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployment-schedules get /deployments/{id}/schedules # Read Deployments Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/read-deployments post /deployments/filter Query for deployments. # Resume Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/resume-deployment post /deployments/{id}/resume_deployment Set a deployment schedule to active. Runs will be scheduled immediately. # Schedule Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/schedule-deployment post /deployments/{id}/schedule Schedule runs for a deployment. For backfills, provide start/end times in the past. This function will generate the minimum number of runs that satisfy the min and max times, and the min and max counts. Specifically, the following order will be respected. 
- Runs will be generated starting on or after the `start_time` - No more than `max_runs` runs will be generated - No runs will be generated after `end_time` is reached - At least `min_runs` runs will be generated - Runs will be generated until at least `start_time + min_time` is reached # Update Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/update-deployment patch /deployments/{id} # Update Deployment Schedule Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/update-deployment-schedule patch /deployments/{id}/schedules/{schedule_id} # Work Queue Check For Deployment Source: https://docs.prefect.io/v3/api-ref/rest-api/server/deployments/work-queue-check-for-deployment get /deployments/{id}/work_queue_check Get list of work-queues that are able to pick up the specified deployment. This endpoint is intended to be used by the UI to provide users warnings about deployments that are unable to be executed because there are no work queues that will pick up their runs, based on existing filter criteria. It may be deprecated in the future because there is not a strict relationship between work queues and deployments. # Count Account Events Source: https://docs.prefect.io/v3/api-ref/rest-api/server/events/count-account-events post /events/count-by/{countable} Returns distinct objects and the count of events associated with them. Objects that can be counted include the day the event occurred, the type of event, or the IDs of the resources associated with the event. # Create Events Source: https://docs.prefect.io/v3/api-ref/rest-api/server/events/create-events post /events Record a batch of Events. For more information, see https://docs.prefect.io/v3/concepts/events. 
# Read Account Events Page Source: https://docs.prefect.io/v3/api-ref/rest-api/server/events/read-account-events-page get /events/filter/next Returns the next page of Events for a previous query against the given Account, and the URL to request the next page (if there are more results). # Read Events Source: https://docs.prefect.io/v3/api-ref/rest-api/server/events/read-events post /events/filter Queries for Events matching the given filter criteria in the given Account. Returns the first page of results, and the URL to request the next page (if there are more results). # Read Flow Run State Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-run-states/read-flow-run-state get /flow_run_states/{id} Get a flow run state by id. For more information, see https://docs.prefect.io/v3/concepts/flows#final-state-determination. # Read Flow Run States Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-run-states/read-flow-run-states get /flow_run_states/ Get states associated with a flow run. # Average Flow Run Lateness Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/average-flow-run-lateness post /flow_runs/lateness Query for average flow-run lateness in seconds. # Bulk Delete Flow Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/bulk-delete-flow-runs post /flow_runs/bulk_delete Bulk delete flow runs matching the specified filter criteria. Returns the IDs of flow runs that were deleted. # Bulk Set Flow Run State Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/bulk-set-flow-run-state post /flow_runs/bulk_set_state Bulk set state for flow runs matching the specified filter criteria. Returns the orchestration results for each flow run. # Count Flow Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/count-flow-runs post /flow_runs/count Query for flow runs. 
# Create Flow Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/create-flow-run post /flow_runs/ Create a flow run. If a flow run with the same flow_id and idempotency key already exists, the existing flow run will be returned. If no state is provided, the flow run will be created in a PENDING state. For more information, see https://docs.prefect.io/v3/concepts/flows. # Create Flow Run Input Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/create-flow-run-input post /flow_runs/{id}/input Create a key/value input for a flow run. # Delete Flow Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/delete-flow-run delete /flow_runs/{id} Delete a flow run by id. # Delete Flow Run Input Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/delete-flow-run-input delete /flow_runs/{id}/input/{key} Delete a flow run input. # Download Logs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/download-logs get /flow_runs/{id}/logs/download Download all flow run logs as a CSV file, collecting all logs until there are no more logs to retrieve. # Filter Flow Run Input Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/filter-flow-run-input post /flow_runs/{id}/input/filter Filter flow run inputs by key prefix. # Flow Run History Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/flow-run-history post /flow_runs/history Query for flow run history data across a given range and interval. # Paginate Flow Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/paginate-flow-runs post /flow_runs/paginate Pagination query for flow runs. # Read Flow Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run get /flow_runs/{id} Get a flow run by id.
# Read Flow Run Graph V1 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run-graph-v1 get /flow_runs/{id}/graph Get a task run dependency map for a given flow run. # Read Flow Run Graph V2 Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run-graph-v2 get /flow_runs/{id}/graph-v2 Get a graph of the tasks and subflow runs for the given flow run. # Read Flow Run Input Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-run-input get /flow_runs/{id}/input/{key} Get a value from a flow run input. # Read Flow Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/read-flow-runs post /flow_runs/filter Query for flow runs. # Resume Flow Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/resume-flow-run post /flow_runs/{id}/resume Resume a paused flow run. # Set Flow Run State Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/set-flow-run-state post /flow_runs/{id}/set_state Set a flow run state, invoking any orchestration rules. # Update Flow Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/update-flow-run patch /flow_runs/{id} Updates a flow run. # Update Flow Run Labels Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flow-runs/update-flow-run-labels patch /flow_runs/{id}/labels Update the labels of a flow run. # Bulk Delete Flows Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/bulk-delete-flows post /flows/bulk_delete Bulk delete flows matching the specified filter criteria. This also deletes all associated deployments. Returns the IDs of flows that were deleted. # Count Flows Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/count-flows post /flows/count Count flows. # Create Flow Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/create-flow post /flows/ Creates a new flow from the provided schema.
If a flow with the same name already exists, the existing flow is returned. For more information, see https://docs.prefect.io/v3/concepts/flows. # Delete Flow Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/delete-flow delete /flows/{id} Delete a flow by id. # Paginate Flows Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/paginate-flows post /flows/paginate Pagination query for flows. # Read Flow Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/read-flow get /flows/{id} Get a flow by id. # Read Flow By Name Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/read-flow-by-name get /flows/name/{name} Get a flow by name. # Read Flows Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/read-flows post /flows/filter Query for flows. # Update Flow Source: https://docs.prefect.io/v3/api-ref/rest-api/server/flows/update-flow patch /flows/{id} Updates a flow. # Server API Overview Source: https://docs.prefect.io/v3/api-ref/rest-api/server/index The Prefect server API enables you to interact programmatically with self-hosted Prefect server. The self-hosted Prefect server API is organized around REST. Select links in the left navigation menu to explore. Learn about [self-hosting Prefect server](/v3/how-to-guides/self-hosted/server-cli). # Create Logs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/logs/create-logs post /logs/ Create new logs from the provided schema. For more information, see https://docs.prefect.io/v3/how-to-guides/workflows/add-logging. # Read Logs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/logs/read-logs post /logs/filter Query for logs. # Health Check Source: https://docs.prefect.io/v3/api-ref/rest-api/server/root/health-check get /health # Hello Source: https://docs.prefect.io/v3/api-ref/rest-api/server/root/hello get /hello Say hello! 
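For a quick liveness probe against a self-hosted server, the root endpoints above can be called directly over HTTP. The sketch below only builds the requests without sending them; the base URL is the conventional local default and an assumption here:

```python
from urllib.request import Request

BASE_URL = "http://127.0.0.1:4200/api"  # assumed local Prefect server address

def server_request(path: str, method: str = "GET") -> Request:
    """Build a request for a root endpoint such as /health, /hello, or /version."""
    return Request(BASE_URL + path, method=method)

health = server_request("/health")
```

With a server running, send it with `urllib.request.urlopen(health)` and check for a 200 response.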
# Perform Readiness Check Source: https://docs.prefect.io/v3/api-ref/rest-api/server/root/perform-readiness-check get /ready # Server Version Source: https://docs.prefect.io/v3/api-ref/rest-api/server/root/server-version get /version # Create Saved Search Source: https://docs.prefect.io/v3/api-ref/rest-api/server/savedsearches/create-saved-search put /saved_searches/ Creates a new saved search from the provided schema. If a saved search with the same name already exists, the saved search's fields are replaced. # Delete Saved Search Source: https://docs.prefect.io/v3/api-ref/rest-api/server/savedsearches/delete-saved-search delete /saved_searches/{id} Delete a saved search by id. # Read Saved Search Source: https://docs.prefect.io/v3/api-ref/rest-api/server/savedsearches/read-saved-search get /saved_searches/{id} Get a saved search by id. # Read Saved Searches Source: https://docs.prefect.io/v3/api-ref/rest-api/server/savedsearches/read-saved-searches post /saved_searches/filter Query for saved searches. # Read Task Run State Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-run-states/read-task-run-state get /task_run_states/{id} Get a task run state by id. For more information, see https://docs.prefect.io/v3/concepts/tasks. # Read Task Run States Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-run-states/read-task-run-states get /task_run_states/ Get states associated with a task run. # Count Task Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/count-task-runs post /task_runs/count Count task runs. # Create Task Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/create-task-run post /task_runs/ Create a task run. If a task run with the same flow_run_id, task_key, and dynamic_key already exists, the existing task run will be returned. If no state is provided, the task run will be created in a PENDING state. For more information, see https://docs.prefect.io/v3/concepts/tasks. 
# Delete Task Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/delete-task-run delete /task_runs/{id} Delete a task run by id. # Paginate Task Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/paginate-task-runs post /task_runs/paginate Pagination query for task runs. # Read Task Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/read-task-run get /task_runs/{id} Get a task run by id. # Read Task Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/read-task-runs post /task_runs/filter Query for task runs. # Set Task Run State Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/set-task-run-state post /task_runs/{id}/set_state Set a task run state, invoking any orchestration rules. # Task Run History Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/task-run-history post /task_runs/history Query for task run history data across a given range and interval. # Update Task Run Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-runs/update-task-run patch /task_runs/{id} Updates a task run. # Read Task Workers Source: https://docs.prefect.io/v3/api-ref/rest-api/server/task-workers/read-task-workers post /task_workers/filter Read active task workers. Optionally filter by task keys. For more information, see https://docs.prefect.io/v3/how-to-guides/workflows/run-background-tasks. # Count Variables Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/count-variables post /variables/count # Create Variable Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/create-variable post /variables/ Create a variable. For more information, see https://docs.prefect.io/v3/concepts/variables. 
# Delete Variable Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/delete-variable delete /variables/{id} # Delete Variable By Name Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/delete-variable-by-name delete /variables/name/{name} # Read Variable Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/read-variable get /variables/{id} # Read Variable By Name Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/read-variable-by-name get /variables/name/{name} # Read Variables Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/read-variables post /variables/filter # Update Variable Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/update-variable patch /variables/{id} # Update Variable By Name Source: https://docs.prefect.io/v3/api-ref/rest-api/server/variables/update-variable-by-name patch /variables/name/{name} # Count Work Pools Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/count-work-pools post /work_pools/count Count work pools # Create Work Pool Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/create-work-pool post /work_pools/ Creates a new work pool. If a work pool with the same name already exists, an error will be raised. For more information, see https://docs.prefect.io/v3/concepts/work-pools. # Create Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/create-work-queue post /work_pools/{work_pool_name}/queues Creates a new work pool queue. If a work pool queue with the same name already exists, an error will be raised. For more information, see https://docs.prefect.io/v3/concepts/work-pools#work-queues. 
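Note the contrast with flow and flow run creation earlier in this reference: creating a work pool (or work queue) whose name already exists raises an error rather than idempotently returning the existing object. A toy model of that contract (not Prefect's code; the pool fields are hypothetical):

```python
_pools: dict[str, dict] = {}

def create_work_pool(name: str, pool_type: str = "process") -> dict:
    """Duplicate names are an error, not a no-op, per the documented semantics."""
    if name in _pools:
        raise ValueError(f"Work pool {name!r} already exists")
    _pools[name] = {"name": name, "type": pool_type}
    return _pools[name]

create_work_pool("my-pool")      # succeeds
# create_work_pool("my-pool")    # would raise ValueError
```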
# Delete Work Pool Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/delete-work-pool delete /work_pools/{name} Delete a work pool # Delete Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/delete-work-queue delete /work_pools/{work_pool_name}/queues/{name} Delete a work pool queue # Delete Worker Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/delete-worker delete /work_pools/{work_pool_name}/workers/{name} Delete a work pool's worker # Get Scheduled Flow Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/get-scheduled-flow-runs post /work_pools/{name}/get_scheduled_flow_runs Load scheduled runs for a worker # Read Work Pool Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-pool get /work_pools/{name} Read a work pool by name # Read Work Pool Concurrency Status Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-pool-concurrency-status post /work_pools/{name}/concurrency_status Read concurrency status for a work pool, including per-queue breakdown with flow run summaries. Queues are paginated; flow runs per queue are capped by flow_run_limit. 
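The pagination and capping behavior of the concurrency-status endpoint above can be sketched as follows. Only `flow_run_limit` is named by the endpoint; `page`, `page_size`, and the response shape are illustrative assumptions:

```python
def concurrency_status(queues: dict[str, list[str]], page: int = 1,
                       page_size: int = 2, flow_run_limit: int = 3) -> list[dict]:
    """Return one page of queues, capping the flow-run summaries per queue."""
    names = sorted(queues)
    start = (page - 1) * page_size
    return [
        {"queue": name, "flow_runs": queues[name][:flow_run_limit]}
        for name in names[start:start + page_size]
    ]

queues = {"default": ["r1", "r2", "r3", "r4"], "priority": ["r5"]}
first_page = concurrency_status(queues, page=1, page_size=1)
```

Here `first_page` holds only the `default` queue, with its four runs capped to three summaries.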
# Read Work Pools Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-pools post /work_pools/filter Read multiple work pools # Read Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-queue get /work_pools/{work_pool_name}/queues/{name} Read a work pool queue # Read Work Queues Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/read-work-queues post /work_pools/{work_pool_name}/queues/filter Read all work pool queues # Read Workers Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/read-workers post /work_pools/{work_pool_name}/workers/filter Read all worker processes # Update Work Pool Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/update-work-pool patch /work_pools/{name} Update a work pool # Update Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/update-work-queue patch /work_pools/{work_pool_name}/queues/{name} Update a work pool queue # Worker Heartbeat Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-pools/worker-heartbeat post /work_pools/{work_pool_name}/workers/heartbeat # Create Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/create-work-queue post /work_queues/ Creates a new work queue. If a work queue with the same name already exists, an error will be raised. For more information, see https://docs.prefect.io/v3/concepts/work-pools#work-queues. # Delete Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/delete-work-queue delete /work_queues/{id} Delete a work queue by id. # Read Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue get /work_queues/{id} Get a work queue by id. # Read Work Queue By Name Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-by-name get /work_queues/name/{name} Get a work queue by name.
# Read Work Queue Concurrency Status Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-concurrency-status post /work_queues/{id}/concurrency_status Read concurrency status for a work queue, including paginated flow run summaries. active_slots always reflects the total count. # Read Work Queue Runs Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-runs post /work_queues/{id}/get_runs Get flow runs from the work queue. # Read Work Queue Status Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queue-status get /work_queues/{id}/status Get the status of a work queue. # Read Work Queues Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/read-work-queues post /work_queues/filter Query for work queues. # Update Work Queue Source: https://docs.prefect.io/v3/api-ref/rest-api/server/work-queues/update-work-queue patch /work_queues/{id} Updates an existing work queue. # Settings reference Source: https://docs.prefect.io/v3/api-ref/settings-ref Reference for all available settings for Prefect. To use `prefect.toml` or `pyproject.toml` for configuration, `prefect>=3.1` must be installed. ## Root Settings ### `home` The path to the Prefect home directory. Defaults to \~/.prefect **Type**: `string` **Default**: `~/.prefect` **TOML dotted key path**: `home` **Supported environment variables**: `PREFECT_HOME` ### `profiles_path` The path to a profiles configuration file. Supports \$PREFECT\_HOME templating. Defaults to \$PREFECT\_HOME/profiles.toml. **Type**: `string` **TOML dotted key path**: `profiles_path` **Supported environment variables**: `PREFECT_PROFILES_PATH` ### `debug_mode` If True, enables debug mode which may provide additional logging and debugging features. 
**Type**: `boolean` **Default**: `False` **TOML dotted key path**: `debug_mode` **Supported environment variables**: `PREFECT_DEBUG_MODE` ### `api` **Type**: [APISettings](#apisettings) **TOML dotted key path**: `api` ### `cli` **Type**: [CLISettings](#clisettings) **TOML dotted key path**: `cli` ### `client` **Type**: [ClientSettings](#clientsettings) **TOML dotted key path**: `client` ### `cloud` **Type**: [CloudSettings](#cloudsettings) **TOML dotted key path**: `cloud` ### `deployments` **Type**: [DeploymentsSettings](#deploymentssettings) **TOML dotted key path**: `deployments` ### `events` **Type**: [EventsSettings](#eventssettings) **TOML dotted key path**: `events` ### `experiments` Settings for controlling experimental features **Type**: [ExperimentsSettings](#experimentssettings) **TOML dotted key path**: `experiments` ### `flows` **Type**: [FlowsSettings](#flowssettings) **TOML dotted key path**: `flows` ### `internal` Settings for internal Prefect machinery **Type**: [InternalSettings](#internalsettings) **TOML dotted key path**: `internal` ### `logging` **Type**: [LoggingSettings](#loggingsettings) **TOML dotted key path**: `logging` ### `results` **Type**: [ResultsSettings](#resultssettings) **TOML dotted key path**: `results` ### `runner` **Type**: [RunnerSettings](#runnersettings) **TOML dotted key path**: `runner` ### `server` **Type**: [ServerSettings](#serversettings) **TOML dotted key path**: `server` ### `tasks` Settings for controlling task behavior **Type**: [TasksSettings](#taskssettings) **TOML dotted key path**: `tasks` ### `telemetry` Settings for configuring telemetry collection **Type**: [TelemetrySettings](#telemetrysettings) **TOML dotted key path**: `telemetry` ### `testing` Settings used during testing **Type**: [TestingSettings](#testingsettings) **TOML dotted key path**: `testing` ### `worker` Settings for controlling worker behavior **Type**: [WorkerSettings](#workersettings) **TOML dotted key path**: `worker` ### `ui_url` The 
URL of the Prefect UI. If not set, the client will attempt to infer it. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `ui_url` **Supported environment variables**: `PREFECT_UI_URL` ### `silence_api_url_misconfiguration` If `True`, silences the warning shown when `PREFECT_API_URL` appears to be misconfigured. This is useful when `PREFECT_API_URL` is intentionally set to a custom URL, for example one behind a reverse proxy. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `silence_api_url_misconfiguration` **Supported environment variables**: `PREFECT_SILENCE_API_URL_MISCONFIGURATION` *** ## APISettings Settings for interacting with the Prefect API ### `url` The URL of the Prefect API. If not set, the client will attempt to infer it. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.url` **Supported environment variables**: `PREFECT_API_URL` ### `auth_string` The auth string used for basic authentication with a self-hosted Prefect API. Should be kept secret. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.auth_string` **Supported environment variables**: `PREFECT_API_AUTH_STRING` ### `key` The API key used for authentication with the Prefect API. Should be kept secret. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.key` **Supported environment variables**: `PREFECT_API_KEY` ### `tls_insecure_skip_verify` If `True`, disables SSL verification to allow insecure requests. Setting this to `True` is recommended only during development, for example when using self-signed certificates. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `api.tls_insecure_skip_verify` **Supported environment variables**: `PREFECT_API_TLS_INSECURE_SKIP_VERIFY` ### `ssl_cert_file` The path to an SSL certificate file.
**Type**: `string | None` **Default**: `None` **TOML dotted key path**: `api.ssl_cert_file` **Supported environment variables**: `PREFECT_API_SSL_CERT_FILE` ### `enable_http2` If `True`, enables support for HTTP/2 when communicating with the API. If the API does not support HTTP/2, this setting has no effect and connections are made over HTTP/1.1. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `api.enable_http2` **Supported environment variables**: `PREFECT_API_ENABLE_HTTP2` ### `request_timeout` The default timeout for requests to the API. **Type**: `number` **Default**: `60.0` **TOML dotted key path**: `api.request_timeout` **Supported environment variables**: `PREFECT_API_REQUEST_TIMEOUT` *** ## CLISettings Settings for controlling CLI behavior ### `colors` If `True`, use colors in CLI output. If `False`, output will not include color codes. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `cli.colors` **Supported environment variables**: `PREFECT_CLI_COLORS` ### `prompt` If `True`, use interactive prompts in CLI commands. If `False`, no interactive prompts will be used. If `None`, the value will be dynamically determined based on the presence of an interactive-enabled terminal. **Type**: `boolean | None` **Default**: `None` **TOML dotted key path**: `cli.prompt` **Supported environment variables**: `PREFECT_CLI_PROMPT` ### `wrap_lines` If `True`, wrap text by inserting new lines in long lines in CLI output. If `False`, output will not be wrapped. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `cli.wrap_lines` **Supported environment variables**: `PREFECT_CLI_WRAP_LINES` *** ## ClientMetricsSettings Settings for controlling metrics reporting from the client ### `enabled` Whether or not to enable Prometheus metrics in the client.
**Type**: `boolean` **Default**: `False` **TOML dotted key path**: `client.metrics.enabled` **Supported environment variables**: `PREFECT_CLIENT_METRICS_ENABLED`, `PREFECT_CLIENT_ENABLE_METRICS` ### `port` The port to expose the client Prometheus metrics on. **Type**: `integer` **Default**: `4201` **TOML dotted key path**: `client.metrics.port` **Supported environment variables**: `PREFECT_CLIENT_METRICS_PORT` *** ## ClientSettings Settings for controlling API client behavior ### `max_retries` The maximum number of retries to perform on failed HTTP requests. Defaults to 5. Set to 0 to disable retries. See `PREFECT_CLIENT_RETRY_EXTRA_CODES` for details on which HTTP status codes are retried. **Type**: `integer` **Default**: `5` **Constraints**: * Minimum: 0 **TOML dotted key path**: `client.max_retries` **Supported environment variables**: `PREFECT_CLIENT_MAX_RETRIES` ### `retry_jitter_factor` A value greater than or equal to zero to control the amount of jitter added to retried client requests. Higher values introduce larger amounts of jitter. Set to 0 to disable jitter. See `clamped_poisson_interval` for details on how jitter can affect retry lengths. **Type**: `number` **Default**: `0.2` **Constraints**: * Minimum: 0.0 **TOML dotted key path**: `client.retry_jitter_factor` **Supported environment variables**: `PREFECT_CLIENT_RETRY_JITTER_FACTOR` ### `retry_extra_codes` A list of extra HTTP status codes to retry on. Defaults to an empty list. 429, 502, and 503 are always retried. Please note that not all routes are idempotent and retrying may result in unexpected behavior. **Type**: `string | integer | array | None` **Examples**: * `"404,429,503"` * `"429"` * `[404, 429, 503]` **TOML dotted key path**: `client.retry_extra_codes` **Supported environment variables**: `PREFECT_CLIENT_RETRY_EXTRA_CODES` ### `csrf_support_enabled` Determines if CSRF token handling is active in the Prefect client for API requests.
When enabled (`True`), the client automatically manages CSRF tokens by retrieving, storing, and including them in applicable state-changing requests. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `client.csrf_support_enabled` **Supported environment variables**: `PREFECT_CLIENT_CSRF_SUPPORT_ENABLED` ### `custom_headers` Custom HTTP headers to include with every API request to the Prefect server. Headers are specified as key-value pairs. Note that headers like 'User-Agent' and CSRF-related headers are managed by Prefect and cannot be overridden. **Type**: `object` **Examples**: * `{'X-Custom-Header': 'value'}` * `{'Authorization': 'Bearer token'}` **TOML dotted key path**: `client.custom_headers` **Supported environment variables**: `PREFECT_CLIENT_CUSTOM_HEADERS` ### `server_version_check_enabled` Whether the client should check the server's API version on startup. When disabled, the client will skip the call to /admin/version that normally runs once per client context entry. This is useful for worker subprocesses that inherit a known-compatible server configuration and do not need to repeat the version handshake. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `client.server_version_check_enabled` **Supported environment variables**: `PREFECT_CLIENT_SERVER_VERSION_CHECK_ENABLED` ### `metrics` **Type**: [ClientMetricsSettings](#clientmetricssettings) **TOML dotted key path**: `client.metrics` *** ## CloudSettings Settings for interacting with Prefect Cloud ### `api_url` API URL for Prefect Cloud. Used for authentication with Prefect Cloud. **Type**: `string` **Default**: `https://api.prefect.cloud/api` **TOML dotted key path**: `cloud.api_url` **Supported environment variables**: `PREFECT_CLOUD_API_URL` ### `enable_orchestration_telemetry` Whether or not to enable orchestration telemetry.
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `cloud.enable_orchestration_telemetry` **Supported environment variables**: `PREFECT_CLOUD_ENABLE_ORCHESTRATION_TELEMETRY` ### `max_log_size` Maximum size in characters for a single log when sending logs to Prefect Cloud. **Type**: `integer` **Default**: `25000` **TOML dotted key path**: `cloud.max_log_size` **Supported environment variables**: `PREFECT_CLOUD_MAX_LOG_SIZE` ### `ui_url` The URL of the Prefect Cloud UI. If not set, the client will attempt to infer it. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `cloud.ui_url` **Supported environment variables**: `PREFECT_CLOUD_UI_URL` *** ## DeploymentsSettings Settings for configuring deployments defaults ### `default_work_pool_name` The default work pool to use when creating deployments. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `deployments.default_work_pool_name` **Supported environment variables**: `PREFECT_DEPLOYMENTS_DEFAULT_WORK_POOL_NAME`, `PREFECT_DEFAULT_WORK_POOL_NAME` ### `default_docker_build_namespace` The default Docker namespace to use when building images. **Type**: `string | None` **Default**: `None` **Examples**: * `"my-dockerhub-registry"` * `"4999999999999.dkr.ecr.us-east-2.amazonaws.com/my-ecr-repo"` **TOML dotted key path**: `deployments.default_docker_build_namespace` **Supported environment variables**: `PREFECT_DEPLOYMENTS_DEFAULT_DOCKER_BUILD_NAMESPACE`, `PREFECT_DEFAULT_DOCKER_BUILD_NAMESPACE` *** ## EventsSettings Settings for controlling events behavior ### `worker_max_queue_size` Maximum number of events that can be queued for delivery to the Prefect server. When the queue is full, new events are dropped with a warning. Set to 0 for unbounded (the default). Warning: setting this value too low may result in data loss as events will be silently dropped when the queue is full. 
**Type**: `integer` **Default**: `0` **Constraints**: * Minimum: 0 **TOML dotted key path**: `events.worker_max_queue_size` **Supported environment variables**: `PREFECT_EVENTS_WORKER_MAX_QUEUE_SIZE` *** ## ExperimentsSettings Settings for configuring experimental features ### `warn` If `True`, warn on usage of experimental features. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `experiments.warn` **Supported environment variables**: `PREFECT_EXPERIMENTS_WARN`, `PREFECT_EXPERIMENTAL_WARN` ### `plugins` Settings for the experimental plugin system **Type**: [PluginsSettings](#pluginssettings) **TOML dotted key path**: `experiments.plugins` *** ## FlowsSettings Settings for controlling flow behavior ### `heartbeat_frequency` Number of seconds between flow run heartbeats. Heartbeats are used to detect crashed flow runs. **Type**: `integer | None` **Default**: `180` **TOML dotted key path**: `flows.heartbeat_frequency` **Supported environment variables**: `PREFECT_FLOWS_HEARTBEAT_FREQUENCY`, `PREFECT_RUNNER_HEARTBEAT_FREQUENCY` ### `default_retries` This value sets the default number of retries for all flows. **Type**: `integer` **Default**: `0` **Constraints**: * Minimum: 0 **TOML dotted key path**: `flows.default_retries` **Supported environment variables**: `PREFECT_FLOWS_DEFAULT_RETRIES`, `PREFECT_FLOW_DEFAULT_RETRIES` ### `default_retry_delay_seconds` This value sets the default retry delay seconds for all flows. **Type**: `integer | number | array` **Default**: `0` **TOML dotted key path**: `flows.default_retry_delay_seconds` **Supported environment variables**: `PREFECT_FLOWS_DEFAULT_RETRY_DELAY_SECONDS`, `PREFECT_FLOW_DEFAULT_RETRY_DELAY_SECONDS` *** ## InternalSettings ### `logging_level` The default logging level for Prefect's internal machinery loggers. 
**Type**: `string` **Default**: `ERROR` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `internal.logging_level` **Supported environment variables**: `PREFECT_INTERNAL_LOGGING_LEVEL`, `PREFECT_LOGGING_INTERNAL_LEVEL` *** ## LoggingSettings Settings for controlling logging behavior ### `level` The default logging level for Prefect loggers. **Type**: `string` **Default**: `INFO` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `logging.level` **Supported environment variables**: `PREFECT_LOGGING_LEVEL` ### `config_path` A path to a logging configuration file. Defaults to \$PREFECT\_HOME/logging.yml **Type**: `string` **TOML dotted key path**: `logging.config_path` **Supported environment variables**: `PREFECT_LOGGING_CONFIG_PATH`, `PREFECT_LOGGING_SETTINGS_PATH` ### `extra_loggers` Additional loggers to attach to Prefect logging at runtime. **Type**: `string | array | None` **Default**: `None` **TOML dotted key path**: `logging.extra_loggers` **Supported environment variables**: `PREFECT_LOGGING_EXTRA_LOGGERS` ### `log_prints` If `True`, `print` statements in flows and tasks will be redirected to the Prefect logger for the given run. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `logging.log_prints` **Supported environment variables**: `PREFECT_LOGGING_LOG_PRINTS` ### `colors` If `True`, use colors in CLI output. If `False`, output will not include color codes. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `logging.colors` **Supported environment variables**: `PREFECT_LOGGING_COLORS` ### `markup` Whether to interpret strings wrapped in square brackets as a style. This allows styles to be conveniently added to log messages, e.g. `[red]This is a red message.[/red]`. However, the downside is, if enabled, strings that contain square brackets may be inaccurately interpreted and lead to incomplete output, e.g.
`[red]This is a red message.[/red]` may be rendered as `This is a red message.`, with the bracketed text stripped as style markup. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `logging.markup` **Supported environment variables**: `PREFECT_LOGGING_MARKUP` ### `to_api` **Type**: [LoggingToAPISettings](#loggingtoapisettings) **TOML dotted key path**: `logging.to_api` *** ## LoggingToAPISettings Settings for controlling logging to the API ### `enabled` If `True`, logs will be sent to the API. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `logging.to_api.enabled` **Supported environment variables**: `PREFECT_LOGGING_TO_API_ENABLED` ### `batch_interval` The number of seconds between batched writes of logs to the API. **Type**: `number` **Default**: `2.0` **TOML dotted key path**: `logging.to_api.batch_interval` **Supported environment variables**: `PREFECT_LOGGING_TO_API_BATCH_INTERVAL` ### `batch_size` The number of logs to batch before sending to the API. **Type**: `integer` **Default**: `4000000` **TOML dotted key path**: `logging.to_api.batch_size` **Supported environment variables**: `PREFECT_LOGGING_TO_API_BATCH_SIZE` ### `max_log_size` The maximum size in characters for a single log. When connected to Prefect Cloud, this value is capped at `PREFECT_CLOUD_MAX_LOG_SIZE` (default 25,000). **Type**: `integer` **Default**: `1000000` **TOML dotted key path**: `logging.to_api.max_log_size` **Supported environment variables**: `PREFECT_LOGGING_TO_API_MAX_LOG_SIZE` ### `when_missing_flow` Controls the behavior when loggers attempt to send logs to the API handler from outside of a flow. All logs sent to the API must be associated with a flow run. The API log handler can only be used outside of a flow by manually providing a flow run identifier. Logs that are not associated with a flow run will not be sent to the API. This setting can be used to determine if a warning or error is displayed when the identifier is missing.
The following options are available: * "warn": Log a warning message. * "error": Raise an error. * "ignore": Do not log a warning message or raise an error. **Type**: `string` **Default**: `warn` **Constraints**: * Allowed values: 'warn', 'error', 'ignore' **TOML dotted key path**: `logging.to_api.when_missing_flow` **Supported environment variables**: `PREFECT_LOGGING_TO_API_WHEN_MISSING_FLOW` *** ## PluginsSettings Settings for configuring the experimental plugin system ### `enabled` Enable the experimental plugin system. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `experiments.plugins.enabled` **Supported environment variables**: `PREFECT_EXPERIMENTS_PLUGINS_ENABLED` ### `allow` Comma-separated list of plugin names to allow. If set, only these plugins will be loaded. **Type**: `array | None` **Default**: `None` **TOML dotted key path**: `experiments.plugins.allow` **Supported environment variables**: `PREFECT_EXPERIMENTS_PLUGINS_ALLOW` ### `deny` Comma-separated list of plugin names to deny. These plugins will not be loaded. **Type**: `array | None` **Default**: `None` **TOML dotted key path**: `experiments.plugins.deny` **Supported environment variables**: `PREFECT_EXPERIMENTS_PLUGINS_DENY` ### `setup_timeout_seconds` Maximum time in seconds for all plugins to complete their setup hooks. **Type**: `number` **Default**: `20.0` **TOML dotted key path**: `experiments.plugins.setup_timeout_seconds` **Supported environment variables**: `PREFECT_EXPERIMENTS_PLUGINS_SETUP_TIMEOUT_SECONDS` ### `strict` If `True`, exit if a required plugin fails during setup. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `experiments.plugins.strict` **Supported environment variables**: `PREFECT_EXPERIMENTS_PLUGINS_STRICT` ### `safe_mode` If `True`, load plugins but do not execute their hooks. Useful for testing. 
**Type**: `boolean` **Default**: `False` **TOML dotted key path**: `experiments.plugins.safe_mode` **Supported environment variables**: `PREFECT_EXPERIMENTS_PLUGINS_SAFE_MODE` *** ## ResultsSettings Settings for controlling result storage behavior ### `default_serializer` The default serializer to use when not otherwise specified. **Type**: `string` **Default**: `pickle` **TOML dotted key path**: `results.default_serializer` **Supported environment variables**: `PREFECT_RESULTS_DEFAULT_SERIALIZER` ### `persist_by_default` The default setting for persisting results when not otherwise specified. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `results.persist_by_default` **Supported environment variables**: `PREFECT_RESULTS_PERSIST_BY_DEFAULT` ### `default_storage_block` The `block-type/block-document` slug of a block to use as the default result storage. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `results.default_storage_block` **Supported environment variables**: `PREFECT_RESULTS_DEFAULT_STORAGE_BLOCK`, `PREFECT_DEFAULT_RESULT_STORAGE_BLOCK` ### `local_storage_path` The default location for locally persisted results. Defaults to \$PREFECT\_HOME/storage. **Type**: `string` **TOML dotted key path**: `results.local_storage_path` **Supported environment variables**: `PREFECT_RESULTS_LOCAL_STORAGE_PATH`, `PREFECT_LOCAL_STORAGE_PATH` *** ## RunnerServerSettings Settings for controlling runner server behavior ### `enable` Whether or not to enable the runner's webserver. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `runner.server.enable` **Supported environment variables**: `PREFECT_RUNNER_SERVER_ENABLE` ### `host` The host address the runner's webserver should bind to. **Type**: `string` **Default**: `localhost` **TOML dotted key path**: `runner.server.host` **Supported environment variables**: `PREFECT_RUNNER_SERVER_HOST` ### `port` The port the runner's webserver should bind to. 
**Type**: `integer` **Default**: `8080` **TOML dotted key path**: `runner.server.port` **Supported environment variables**: `PREFECT_RUNNER_SERVER_PORT` ### `log_level` The log level of the runner's webserver. **Type**: `string` **Default**: `ERROR` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `runner.server.log_level` **Supported environment variables**: `PREFECT_RUNNER_SERVER_LOG_LEVEL` ### `missed_polls_tolerance` Number of missed polls before a runner is considered unhealthy by its webserver. **Type**: `integer` **Default**: `2` **TOML dotted key path**: `runner.server.missed_polls_tolerance` **Supported environment variables**: `PREFECT_RUNNER_SERVER_MISSED_POLLS_TOLERANCE` *** ## RunnerSettings Settings for controlling runner behavior ### `process_limit` Maximum number of processes a runner will execute in parallel. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `runner.process_limit` **Supported environment variables**: `PREFECT_RUNNER_PROCESS_LIMIT` ### `poll_frequency` Number of seconds a runner should wait between queries for scheduled work. **Type**: `integer` **Default**: `10` **TOML dotted key path**: `runner.poll_frequency` **Supported environment variables**: `PREFECT_RUNNER_POLL_FREQUENCY` ### `crash_on_cancellation_failure` Whether to crash flow runs and shut down the runner when cancellation observing fails. When enabled, if both websocket and polling mechanisms for detecting cancellation events fail, all in-flight flow runs will be marked as crashed and the runner will shut down. When disabled (default), the runner will log an error but continue executing flow runs. 
**Type**: `boolean` **Default**: `False` **TOML dotted key path**: `runner.crash_on_cancellation_failure` **Supported environment variables**: `PREFECT_RUNNER_CRASH_ON_CANCELLATION_FAILURE` ### `server` **Type**: [RunnerServerSettings](#runnerserversettings) **TOML dotted key path**: `runner.server` *** ## SQLAlchemyConnectArgsSettings Settings for controlling SQLAlchemy connection behavior; note that these settings only take effect when using a PostgreSQL database. ### `application_name` Controls the application\_name field for connections opened from the connection pool when using a PostgreSQL database with the Prefect backend. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.application_name` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_APPLICATION_NAME` ### `search_path` PostgreSQL schema name to set in search\_path when using a PostgreSQL database with the Prefect backend. Note: The public schema should be included in the search path (e.g. 'myschema, public') to ensure that pg\_trgm and other extensions remain available. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.search_path` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_SEARCH_PATH` ### `statement_cache_size` Controls statement cache size for PostgreSQL connections. Setting this to 0 is required when using PgBouncer in transaction mode. Defaults to None. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.statement_cache_size` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_STATEMENT_CACHE_SIZE` ### `prepared_statement_cache_size` Controls the size of the statement cache for PostgreSQL connections. When set to 0, statement caching is disabled. Defaults to None to use SQLAlchemy's default behavior. 
**Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.prepared_statement_cache_size` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_PREPARED_STATEMENT_CACHE_SIZE` ### `tls` Settings for controlling SQLAlchemy mTLS behavior **Type**: [SQLAlchemyTLSSettings](#sqlalchemytlssettings) **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls` *** ## SQLAlchemySettings Settings for controlling SQLAlchemy behavior; note that these settings only take effect when using a PostgreSQL database. ### `connect_args` Settings for controlling SQLAlchemy connection behavior **Type**: [SQLAlchemyConnectArgsSettings](#sqlalchemyconnectargssettings) **TOML dotted key path**: `server.database.sqlalchemy.connect_args` ### `pool_size` Controls connection pool size of database connection pools from the Prefect backend. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.database.sqlalchemy.pool_size` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_SIZE`, `PREFECT_SQLALCHEMY_POOL_SIZE` ### `pool_recycle` This setting causes the pool to recycle connections after the given number of seconds has passed; set it to -1 to avoid recycling entirely. **Type**: `integer` **Default**: `3600` **TOML dotted key path**: `server.database.sqlalchemy.pool_recycle` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_RECYCLE` ### `pool_timeout` Number of seconds to wait before giving up on getting a connection from the pool. Defaults to 30 seconds. **Type**: `number | None` **Default**: `30.0` **TOML dotted key path**: `server.database.sqlalchemy.pool_timeout` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_POOL_TIMEOUT` ### `max_overflow` Controls maximum overflow of the connection pool. To prevent overflow, set to -1. 
**Type**: `integer` **Default**: `10` **TOML dotted key path**: `server.database.sqlalchemy.max_overflow` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_MAX_OVERFLOW`, `PREFECT_SQLALCHEMY_MAX_OVERFLOW` *** ## SQLAlchemyTLSSettings Settings for controlling the SQLAlchemy mTLS context when using a PostgreSQL database. ### `enabled` Controls whether to connect to an mTLS-enabled PostgreSQL database when using PostgreSQL with the Prefect backend. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.enabled` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_ENABLED` ### `ca_file` Specifies the path to the PostgreSQL client certificate authority file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.ca_file` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_CA_FILE` ### `cert_file` Specifies the path to the PostgreSQL client certificate file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.cert_file` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_CERT_FILE` ### `key_file` Specifies the path to the PostgreSQL client key file. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.key_file` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_KEY_FILE` ### `check_hostname` Specifies whether to verify the PostgreSQL server hostname. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.database.sqlalchemy.connect_args.tls.check_hostname` **Supported environment variables**: `PREFECT_SERVER_DATABASE_SQLALCHEMY_CONNECT_ARGS_TLS_CHECK_HOSTNAME` *** ## ServerAPISettings Settings for controlling API server behavior ### `auth_string` A string to use for basic authentication with the API in the form 'user:password'. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.api.auth_string` **Supported environment variables**: `PREFECT_SERVER_API_AUTH_STRING` ### `host` The API's host address (defaults to `127.0.0.1`). **Type**: `string` **Default**: `127.0.0.1` **TOML dotted key path**: `server.api.host` **Supported environment variables**: `PREFECT_SERVER_API_HOST` ### `port` The API's port (defaults to `4200`). **Type**: `integer` **Default**: `4200` **TOML dotted key path**: `server.api.port` **Supported environment variables**: `PREFECT_SERVER_API_PORT` ### `base_path` The base URL path to serve the API under. **Type**: `string | None` **Default**: `None` **Examples**: * `"/v2/api"` **TOML dotted key path**: `server.api.base_path` **Supported environment variables**: `PREFECT_SERVER_API_BASE_PATH` ### `default_limit` The default limit applied to queries that can return multiple objects, such as `POST /flow_runs/filter`. **Type**: `integer` **Default**: `200` **TOML dotted key path**: `server.api.default_limit` **Supported environment variables**: `PREFECT_SERVER_API_DEFAULT_LIMIT`, `PREFECT_API_DEFAULT_LIMIT` ### `keepalive_timeout` The API's keep-alive timeout in seconds (defaults to `5`). Refer to [https://www.uvicorn.org/settings/#timeouts](https://www.uvicorn.org/settings/#timeouts) for details. When the API is hosted behind a load balancer, you may want to set this to a value greater than the load balancer's idle timeout. 
Note that this setting only applies when calling `prefect server start`; if you host the API with another tool, you will need to configure the timeout there instead. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.api.keepalive_timeout` **Supported environment variables**: `PREFECT_SERVER_API_KEEPALIVE_TIMEOUT` ### `csrf_protection_enabled` Controls the activation of CSRF protection for the Prefect server API. When enabled (`True`), the server enforces CSRF validation checks on incoming state-changing requests (POST, PUT, PATCH, DELETE), requiring a valid CSRF token to be included in the request headers or body. This adds a layer of security by preventing unauthorized or malicious sites from making requests on behalf of authenticated users. It is recommended to enable this setting in production environments where the API is exposed to web clients to safeguard against CSRF attacks. Note: Enabling this setting requires corresponding support in the client for CSRF token management. See PREFECT\_CLIENT\_CSRF\_SUPPORT\_ENABLED for more. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.api.csrf_protection_enabled` **Supported environment variables**: `PREFECT_SERVER_API_CSRF_PROTECTION_ENABLED`, `PREFECT_SERVER_CSRF_PROTECTION_ENABLED` ### `csrf_token_expiration` Specifies the duration for which a CSRF token remains valid after being issued by the server. The default expiration time is set to 1 hour, which offers a reasonable compromise between security and usability. Adjust this setting based on your specific security requirements and usage patterns. **Type**: `string` **Default**: `PT1H` **TOML dotted key path**: `server.api.csrf_token_expiration` **Supported environment variables**: `PREFECT_SERVER_API_CSRF_TOKEN_EXPIRATION`, `PREFECT_SERVER_CSRF_TOKEN_EXPIRATION` ### `cors_allowed_origins` A comma-separated list of origins that are authorized to make cross-origin requests to the API. By default, this is set to `*`, which allows requests from all origins. 
**Type**: `string` **Default**: `*` **TOML dotted key path**: `server.api.cors_allowed_origins` **Supported environment variables**: `PREFECT_SERVER_API_CORS_ALLOWED_ORIGINS`, `PREFECT_SERVER_CORS_ALLOWED_ORIGINS` ### `cors_allowed_methods` A comma-separated list of HTTP methods allowed in cross-origin requests to the API. By default, this is set to `*`, which allows all methods. **Type**: `string` **Default**: `*` **TOML dotted key path**: `server.api.cors_allowed_methods` **Supported environment variables**: `PREFECT_SERVER_API_CORS_ALLOWED_METHODS`, `PREFECT_SERVER_CORS_ALLOWED_METHODS` ### `cors_allowed_headers` A comma-separated list of headers allowed in cross-origin requests to the API. By default, this is set to `*`, which allows all headers. **Type**: `string` **Default**: `*` **TOML dotted key path**: `server.api.cors_allowed_headers` **Supported environment variables**: `PREFECT_SERVER_API_CORS_ALLOWED_HEADERS`, `PREFECT_SERVER_CORS_ALLOWED_HEADERS` ### `max_parameter_size` The maximum size of parameters (in bytes, JSON-serialized) that can be stored on a flow run or deployment. Set to 0 to disable the limit. **Type**: `integer` **Default**: `524288` **Constraints**: * Minimum: 0 **TOML dotted key path**: `server.api.max_parameter_size` **Supported environment variables**: `PREFECT_SERVER_API_MAX_PARAMETER_SIZE` *** ## ServerConcurrencySettings ### `lease_storage` The module to use for storing concurrency limit leases. **Type**: `string` **Default**: `prefect.server.concurrency.lease_storage.memory` **TOML dotted key path**: `server.concurrency.lease_storage` **Supported environment variables**: `PREFECT_SERVER_CONCURRENCY_LEASE_STORAGE` ### `initial_deployment_lease_duration` The initial duration, in seconds, for deployment concurrency leases. 
**Type**: `number` **Default**: `300.0` **Constraints**: * Minimum: 30.0 * Maximum: 3600.0 **TOML dotted key path**: `server.concurrency.initial_deployment_lease_duration` **Supported environment variables**: `PREFECT_SERVER_CONCURRENCY_INITIAL_DEPLOYMENT_LEASE_DURATION` ### `maximum_concurrency_slot_wait_seconds` The maximum number of seconds to wait before retrying when a concurrency slot cannot be acquired. **Type**: `number` **Default**: `30` **Constraints**: * Minimum: 0 **TOML dotted key path**: `server.concurrency.maximum_concurrency_slot_wait_seconds` **Supported environment variables**: `PREFECT_SERVER_CONCURRENCY_MAXIMUM_CONCURRENCY_SLOT_WAIT_SECONDS` *** ## ServerDatabaseSettings Settings for controlling server database behavior ### `sqlalchemy` Settings for controlling SQLAlchemy behavior **Type**: [SQLAlchemySettings](#sqlalchemysettings) **TOML dotted key path**: `server.database.sqlalchemy` ### `connection_url` A database connection URL in a SQLAlchemy-compatible format. Prefect currently supports SQLite and Postgres. Note that all Prefect database engines must use an async driver - for SQLite, use `sqlite+aiosqlite` and for Postgres use `postgresql+asyncpg`. SQLite in-memory databases can be used by providing the url `sqlite+aiosqlite:///file::memory:?cache=shared&uri=true&check_same_thread=false`, which will allow the database to be accessed by multiple threads. Note that in-memory databases can not be accessed from multiple processes and should only be used for simple tests. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.connection_url` **Supported environment variables**: `PREFECT_SERVER_DATABASE_CONNECTION_URL`, `PREFECT_API_DATABASE_CONNECTION_URL` ### `driver` The database driver to use when connecting to the database. If not set, the driver will be inferred from the connection URL. 
**Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.driver` **Supported environment variables**: `PREFECT_SERVER_DATABASE_DRIVER`, `PREFECT_API_DATABASE_DRIVER` ### `host` The database server host. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.host` **Supported environment variables**: `PREFECT_SERVER_DATABASE_HOST`, `PREFECT_API_DATABASE_HOST` ### `port` The database server port. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `server.database.port` **Supported environment variables**: `PREFECT_SERVER_DATABASE_PORT`, `PREFECT_API_DATABASE_PORT` ### `user` The user to use when connecting to the database. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.user` **Supported environment variables**: `PREFECT_SERVER_DATABASE_USER`, `PREFECT_API_DATABASE_USER` ### `name` The name of the Prefect database on the remote server, or the path to the database file for SQLite. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.name` **Supported environment variables**: `PREFECT_SERVER_DATABASE_NAME`, `PREFECT_API_DATABASE_NAME` ### `password` The password to use when connecting to the database. Should be kept secret. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.database.password` **Supported environment variables**: `PREFECT_SERVER_DATABASE_PASSWORD`, `PREFECT_API_DATABASE_PASSWORD` ### `echo` If `True`, SQLAlchemy will log all SQL issued to the database. Defaults to `False`. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.database.echo` **Supported environment variables**: `PREFECT_SERVER_DATABASE_ECHO`, `PREFECT_API_DATABASE_ECHO` ### `migrate_on_start` If `True`, the database will be migrated on application startup. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.database.migrate_on_start` **Supported environment variables**: `PREFECT_SERVER_DATABASE_MIGRATE_ON_START`, `PREFECT_API_DATABASE_MIGRATE_ON_START` ### `timeout` A statement timeout, in seconds, applied to all database interactions made by the Prefect backend. Defaults to 10 seconds. **Type**: `number | None` **Default**: `10.0` **TOML dotted key path**: `server.database.timeout` **Supported environment variables**: `PREFECT_SERVER_DATABASE_TIMEOUT`, `PREFECT_API_DATABASE_TIMEOUT` ### `connection_timeout` A connection timeout, in seconds, applied to database connections. Defaults to `5`. **Type**: `number | None` **Default**: `5.0` **TOML dotted key path**: `server.database.connection_timeout` **Supported environment variables**: `PREFECT_SERVER_DATABASE_CONNECTION_TIMEOUT`, `PREFECT_API_DATABASE_CONNECTION_TIMEOUT` *** ## ServerDeploymentsSettings ### `concurrency_slot_wait_seconds` The number of seconds to wait before retrying when a deployment flow run cannot secure a concurrency slot from the server. **Type**: `number` **Default**: `30.0` **Constraints**: * Minimum: 0.0 **TOML dotted key path**: `server.deployments.concurrency_slot_wait_seconds` **Supported environment variables**: `PREFECT_SERVER_DEPLOYMENTS_CONCURRENCY_SLOT_WAIT_SECONDS`, `PREFECT_DEPLOYMENT_CONCURRENCY_SLOT_WAIT_SECONDS` *** ## ServerDocketSettings Settings for controlling Docket behavior ### `name` The name of the Docket instance. **Type**: `string` **Default**: `prefect-server` **TOML dotted key path**: `server.docket.name` **Supported environment variables**: `PREFECT_SERVER_DOCKET_NAME` ### `url` The URL of the Redis server to use for Docket. 
**Type**: `string` **Default**: `memory://` **TOML dotted key path**: `server.docket.url` **Supported environment variables**: `PREFECT_SERVER_DOCKET_URL` *** ## ServerEphemeralSettings Settings for controlling ephemeral server behavior ### `enabled` Controls whether or not a subprocess server can be started when no API URL is provided. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.ephemeral.enabled` **Supported environment variables**: `PREFECT_SERVER_EPHEMERAL_ENABLED`, `PREFECT_SERVER_ALLOW_EPHEMERAL_MODE` ### `startup_timeout_seconds` The number of seconds to wait for the server to start when ephemeral mode is enabled. Defaults to `20`. **Type**: `integer` **Default**: `20` **TOML dotted key path**: `server.ephemeral.startup_timeout_seconds` **Supported environment variables**: `PREFECT_SERVER_EPHEMERAL_STARTUP_TIMEOUT_SECONDS` *** ## ServerEventsSettings Settings for controlling behavior of the events subsystem ### `stream_out_enabled` Whether or not to stream events out to the API via websockets. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.events.stream_out_enabled` **Supported environment variables**: `PREFECT_SERVER_EVENTS_STREAM_OUT_ENABLED`, `PREFECT_API_EVENTS_STREAM_OUT_ENABLED` ### `related_resource_cache_ttl` The number of seconds to cache related resources for in the API. **Type**: `string` **Default**: `PT5M` **TOML dotted key path**: `server.events.related_resource_cache_ttl` **Supported environment variables**: `PREFECT_SERVER_EVENTS_RELATED_RESOURCE_CACHE_TTL`, `PREFECT_API_EVENTS_RELATED_RESOURCE_CACHE_TTL` ### `maximum_labels_per_resource` The maximum number of labels a resource may have. 
**Type**: `integer` **Default**: `500` **TOML dotted key path**: `server.events.maximum_labels_per_resource` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_LABELS_PER_RESOURCE`, `PREFECT_EVENTS_MAXIMUM_LABELS_PER_RESOURCE` ### `maximum_related_resources` The maximum number of related resources an Event may have. **Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.events.maximum_related_resources` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_RELATED_RESOURCES`, `PREFECT_EVENTS_MAXIMUM_RELATED_RESOURCES` ### `maximum_size_bytes` The maximum size of an Event when serialized to JSON **Type**: `integer` **Default**: `1500000` **TOML dotted key path**: `server.events.maximum_size_bytes` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_SIZE_BYTES`, `PREFECT_EVENTS_MAXIMUM_SIZE_BYTES` ### `expired_bucket_buffer` The amount of time to retain expired automation buckets **Type**: `string` **Default**: `PT1M` **TOML dotted key path**: `server.events.expired_bucket_buffer` **Supported environment variables**: `PREFECT_SERVER_EVENTS_EXPIRED_BUCKET_BUFFER`, `PREFECT_EVENTS_EXPIRED_BUCKET_BUFFER` ### `proactive_granularity` How frequently proactive automations are evaluated **Type**: `string` **Default**: `PT5S` **TOML dotted key path**: `server.events.proactive_granularity` **Supported environment variables**: `PREFECT_SERVER_EVENTS_PROACTIVE_GRANULARITY`, `PREFECT_EVENTS_PROACTIVE_GRANULARITY` ### `retention_period` The amount of time to retain events in the database. **Type**: `string` **Default**: `P7D` **TOML dotted key path**: `server.events.retention_period` **Supported environment variables**: `PREFECT_SERVER_EVENTS_RETENTION_PERIOD`, `PREFECT_EVENTS_RETENTION_PERIOD` ### `maximum_websocket_backfill` The maximum range to look back for backfilling events for a websocket subscriber. 
**Type**: `string` **Default**: `PT15M` **TOML dotted key path**: `server.events.maximum_websocket_backfill` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL`, `PREFECT_EVENTS_MAXIMUM_WEBSOCKET_BACKFILL` ### `websocket_backfill_page_size` The page size for the queries to backfill events for websocket subscribers. **Type**: `integer` **Default**: `250` **TOML dotted key path**: `server.events.websocket_backfill_page_size` **Supported environment variables**: `PREFECT_SERVER_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE`, `PREFECT_EVENTS_WEBSOCKET_BACKFILL_PAGE_SIZE` ### `messaging_broker` Which message broker implementation to use for the messaging system. Should point to a module that exports a Publisher and a Consumer class. **Type**: `string` **Default**: `prefect.server.utilities.messaging.memory` **TOML dotted key path**: `server.events.messaging_broker` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MESSAGING_BROKER`, `PREFECT_MESSAGING_BROKER` ### `messaging_cache` Which cache implementation to use for the events system. Should point to a module that exports a Cache class. **Type**: `string` **Default**: `prefect.server.utilities.messaging.memory` **TOML dotted key path**: `server.events.messaging_cache` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MESSAGING_CACHE`, `PREFECT_MESSAGING_CACHE` ### `causal_ordering` Which causal ordering implementation to use for the events system. Should point to a module that exports a CausalOrdering class. **Type**: `string` **Default**: `prefect.server.events.ordering.memory` **TOML dotted key path**: `server.events.causal_ordering` **Supported environment variables**: `PREFECT_SERVER_EVENTS_CAUSAL_ORDERING` ### `maximum_event_name_length` The maximum length of an event name. 
**Type**: `integer` **Default**: `1024` **TOML dotted key path**: `server.events.maximum_event_name_length` **Supported environment variables**: `PREFECT_SERVER_EVENTS_MAXIMUM_EVENT_NAME_LENGTH` *** ## ServerFlowRunGraphSettings Settings for controlling behavior of the flow run graph ### `max_nodes` The maximum size of a flow run graph on the v2 API **Type**: `integer` **Default**: `10000` **TOML dotted key path**: `server.flow_run_graph.max_nodes` **Supported environment variables**: `PREFECT_SERVER_FLOW_RUN_GRAPH_MAX_NODES`, `PREFECT_API_MAX_FLOW_RUN_GRAPH_NODES` ### `max_artifacts` The maximum number of artifacts to show on a flow run graph on the v2 API **Type**: `integer` **Default**: `10000` **TOML dotted key path**: `server.flow_run_graph.max_artifacts` **Supported environment variables**: `PREFECT_SERVER_FLOW_RUN_GRAPH_MAX_ARTIFACTS`, `PREFECT_API_MAX_FLOW_RUN_GRAPH_ARTIFACTS` *** ## ServerLogsSettings Settings for controlling behavior of the logs subsystem ### `stream_out_enabled` Whether or not to stream logs out to the API via websockets. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.logs.stream_out_enabled` **Supported environment variables**: `PREFECT_SERVER_LOGS_STREAM_OUT_ENABLED` ### `stream_publishing_enabled` Whether or not to publish logs to the streaming system. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.logs.stream_publishing_enabled` **Supported environment variables**: `PREFECT_SERVER_LOGS_STREAM_PUBLISHING_ENABLED` *** ## ServerServicesCancellationCleanupSettings Settings for controlling the cancellation cleanup service ### `enabled` Whether or not to start the cancellation cleanup service in the server application. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.cancellation_cleanup.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_CANCELLATION_CLEANUP_ENABLED`, `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_ENABLED` ### `loop_seconds` The cancellation cleanup service will look for non-terminal tasks and subflows this often. Defaults to `20`. **Type**: `number` **Default**: `20` **TOML dotted key path**: `server.services.cancellation_cleanup.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS`, `PREFECT_API_SERVICES_CANCELLATION_CLEANUP_LOOP_SECONDS` *** ## ServerServicesDBVacuumSettings Settings for controlling the database vacuum service ### `enabled` Comma-separated set of vacuum types to enable. Valid values: 'events', 'flow\_runs'. Defaults to 'events'. For backward compatibility, 'true' maps to 'events,flow\_runs' and 'false' maps to 'events'. Event vacuum also requires event\_persister.enabled (the default). **Type**: `array | boolean | None` **Default**: `['events']` **TOML dotted key path**: `server.services.db_vacuum.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_DB_VACUUM_ENABLED` ### `loop_seconds` The database vacuum service will run this often, in seconds. Defaults to `3600` (1 hour). **Type**: `number` **Default**: `3600` **TOML dotted key path**: `server.services.db_vacuum.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_DB_VACUUM_LOOP_SECONDS` ### `retention_period` How old a flow run must be (based on end\_time) before it is eligible for deletion. Accepts seconds. Minimum 1 hour. Defaults to 90 days. **Type**: `string` **Default**: `P90D` **TOML dotted key path**: `server.services.db_vacuum.retention_period` **Supported environment variables**: `PREFECT_SERVER_SERVICES_DB_VACUUM_RETENTION_PERIOD` ### `batch_size` The number of records to delete per database transaction. Defaults to `200`. 
**Type**: `integer` **Default**: `200` **TOML dotted key path**: `server.services.db_vacuum.batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_DB_VACUUM_BATCH_SIZE` ### `event_retention_overrides` Per-event-type retention period overrides. Keys are event type strings (e.g. 'prefect.flow-run.heartbeat'), values are retention periods in seconds. Event types not listed fall back to server.events.retention\_period. Each override is capped by the global events retention period. **Type**: `object` **Default**: `{'prefect.flow-run.heartbeat': 'P7D'}` **TOML dotted key path**: `server.services.db_vacuum.event_retention_overrides` **Supported environment variables**: `PREFECT_SERVER_SERVICES_DB_VACUUM_EVENT_RETENTION_OVERRIDES` *** ## ServerServicesEventLoggerSettings Settings for controlling the event logger service ### `enabled` Whether or not to start the event logger service in the server application. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.services.event_logger.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_LOGGER_ENABLED`, `PREFECT_API_SERVICES_EVENT_LOGGER_ENABLED` *** ## ServerServicesEventPersisterSettings Settings for controlling the event persister service ### `enabled` Whether or not to start the event persister service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.event_persister.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_ENABLED`, `PREFECT_API_SERVICES_EVENT_PERSISTER_ENABLED` ### `batch_size` The number of events the event persister will attempt to insert in one batch. 
**Type**: `integer` **Default**: `20` **TOML dotted key path**: `server.services.event_persister.batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_BATCH_SIZE`, `PREFECT_API_SERVICES_EVENT_PERSISTER_BATCH_SIZE` ### `read_batch_size` The number of events the event persister will attempt to read from the message broker in one batch. **Type**: `integer` **Default**: `1` **TOML dotted key path**: `server.services.event_persister.read_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_READ_BATCH_SIZE`, `PREFECT_API_SERVICES_EVENT_PERSISTER_READ_BATCH_SIZE` ### `flush_interval` The maximum number of seconds between flushes of the event persister. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.event_persister.flush_interval` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL`, `PREFECT_API_SERVICES_EVENT_PERSISTER_FLUSH_INTERVAL` ### `queue_max_size` The maximum number of events that can be queued in memory for persistence. When the queue is full, new events will be dropped. **Type**: `integer` **Default**: `50000` **TOML dotted key path**: `server.services.event_persister.queue_max_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_QUEUE_MAX_SIZE` ### `max_flush_retries` The maximum number of consecutive flush failures before events are dropped instead of being re-queued. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.services.event_persister.max_flush_retries` **Supported environment variables**: `PREFECT_SERVER_SERVICES_EVENT_PERSISTER_MAX_FLUSH_RETRIES` *** ## ServerServicesForemanSettings Settings for controlling the foreman service ### `enabled` Whether or not to start the foreman service in the server application. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.foreman.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_ENABLED`, `PREFECT_API_SERVICES_FOREMAN_ENABLED` ### `loop_seconds` The foreman service will check for offline workers this often. Defaults to `15`. **Type**: `number` **Default**: `15` **TOML dotted key path**: `server.services.foreman.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_LOOP_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_LOOP_SECONDS` ### `inactivity_heartbeat_multiple` The number of heartbeats that must be missed before a worker is marked as offline. Defaults to `3`. **Type**: `integer` **Default**: `3` **TOML dotted key path**: `server.services.foreman.inactivity_heartbeat_multiple` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE`, `PREFECT_API_SERVICES_FOREMAN_INACTIVITY_HEARTBEAT_MULTIPLE` ### `fallback_heartbeat_interval_seconds` The number of seconds to use for online/offline evaluation if a worker's heartbeat interval is not set. Defaults to `30`. **Type**: `integer` **Default**: `30` **TOML dotted key path**: `server.services.foreman.fallback_heartbeat_interval_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_FALLBACK_HEARTBEAT_INTERVAL_SECONDS` ### `deployment_last_polled_timeout_seconds` The number of seconds before a deployment is marked as not ready if it has not been polled. Defaults to `60`. 
**Type**: `integer` **Default**: `60` **TOML dotted key path**: `server.services.foreman.deployment_last_polled_timeout_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_DEPLOYMENT_LAST_POLLED_TIMEOUT_SECONDS` ### `work_queue_last_polled_timeout_seconds` The number of seconds before a work queue is marked as not ready if it has not been polled. Defaults to `60`. **Type**: `integer` **Default**: `60` **TOML dotted key path**: `server.services.foreman.work_queue_last_polled_timeout_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS`, `PREFECT_API_SERVICES_FOREMAN_WORK_QUEUE_LAST_POLLED_TIMEOUT_SECONDS` *** ## ServerServicesLateRunsSettings Settings for controlling the late runs service ### `enabled` Whether or not to start the late runs service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.late_runs.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_LATE_RUNS_ENABLED`, `PREFECT_API_SERVICES_LATE_RUNS_ENABLED` ### `loop_seconds` The late runs service will look for runs to mark as late this often. Defaults to `5`. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.late_runs.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_LATE_RUNS_LOOP_SECONDS`, `PREFECT_API_SERVICES_LATE_RUNS_LOOP_SECONDS` ### `after_seconds` The late runs service will mark runs as late after they have exceeded their scheduled start time by this many seconds. Defaults to `15` seconds (`PT15S`). 
**Type**: `string` **Default**: `PT15S` **TOML dotted key path**: `server.services.late_runs.after_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_LATE_RUNS_AFTER_SECONDS`, `PREFECT_API_SERVICES_LATE_RUNS_AFTER_SECONDS` *** ## ServerServicesPauseExpirationsSettings Settings for controlling the pause expiration service ### `enabled` Whether or not to start the paused flow run expiration service in the server application. If disabled, paused flows that have timed out will remain in a Paused state until a resume attempt. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.pause_expirations.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_PAUSE_EXPIRATIONS_ENABLED`, `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_ENABLED` ### `loop_seconds` The pause expiration service will look for runs to mark as failed this often. Defaults to `5`. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.pause_expirations.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS`, `PREFECT_API_SERVICES_PAUSE_EXPIRATIONS_LOOP_SECONDS` *** ## ServerServicesRepossessorSettings Settings for controlling the repossessor service ### `enabled` Whether or not to start the repossessor service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.repossessor.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_REPOSSESSOR_ENABLED` ### `loop_seconds` The repossessor service will look for expired leases this often. Defaults to `15`. 
**Type**: `number` **Default**: `15` **TOML dotted key path**: `server.services.repossessor.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_REPOSSESSOR_LOOP_SECONDS` *** ## ServerServicesSchedulerSettings Settings for controlling the scheduler service ### `enabled` Whether or not to start the scheduler service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.scheduler.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_ENABLED`, `PREFECT_API_SERVICES_SCHEDULER_ENABLED` ### `loop_seconds` The scheduler loop interval, in seconds. This determines how often the scheduler will attempt to schedule new flow runs, but has no impact on how quickly either flow runs or task runs are actually executed. Defaults to `60`. **Type**: `number` **Default**: `60` **TOML dotted key path**: `server.services.scheduler.loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_LOOP_SECONDS`, `PREFECT_API_SERVICES_SCHEDULER_LOOP_SECONDS` ### `deployment_batch_size` The number of deployments the scheduler will attempt to schedule in a single batch. If there are more deployments than the batch size, the scheduler immediately attempts to schedule the next batch; it does not sleep for `scheduler_loop_seconds` until it has visited every deployment once. Defaults to `100`. **Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.services.scheduler.deployment_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE`, `PREFECT_API_SERVICES_SCHEDULER_DEPLOYMENT_BATCH_SIZE` ### `max_runs` The scheduler will attempt to schedule up to this many auto-scheduled runs in the future. Note that runs may have fewer than this many scheduled runs, depending on the value of `scheduler_max_scheduled_time`. Defaults to `100`. 
**Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.services.scheduler.max_runs` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MAX_RUNS`, `PREFECT_API_SERVICES_SCHEDULER_MAX_RUNS` ### `min_runs` The scheduler will attempt to schedule at least this many auto-scheduled runs in the future. Note that runs may have more than this many scheduled runs, depending on the value of `scheduler_min_scheduled_time`. Defaults to `3`. **Type**: `integer` **Default**: `3` **TOML dotted key path**: `server.services.scheduler.min_runs` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MIN_RUNS`, `PREFECT_API_SERVICES_SCHEDULER_MIN_RUNS` ### `max_scheduled_time` The scheduler will create new runs up to this far in the future. Note that this setting will take precedence over `scheduler_max_runs`: if a flow runs once a month and `scheduler_max_scheduled_time` is three months, then only three runs will be scheduled. Defaults to 100 days (`8640000` seconds). **Type**: `string` **Default**: `P100D` **TOML dotted key path**: `server.services.scheduler.max_scheduled_time` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME`, `PREFECT_API_SERVICES_SCHEDULER_MAX_SCHEDULED_TIME` ### `min_scheduled_time` The scheduler will create new runs at least this far in the future. Note that this setting will take precedence over `scheduler_min_runs`: if a flow runs every hour and `scheduler_min_scheduled_time` is three hours, then three runs will be scheduled even if `scheduler_min_runs` is 1. Defaults to 1 hour (`PT1H`). **Type**: `string` **Default**: `PT1H` **TOML dotted key path**: `server.services.scheduler.min_scheduled_time` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME`, `PREFECT_API_SERVICES_SCHEDULER_MIN_SCHEDULED_TIME` ### `insert_batch_size` The number of runs the scheduler will attempt to insert in a single batch. Defaults to `500`. 
**Type**: `integer` **Default**: `500` **TOML dotted key path**: `server.services.scheduler.insert_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_INSERT_BATCH_SIZE`, `PREFECT_API_SERVICES_SCHEDULER_INSERT_BATCH_SIZE` ### `recent_deployments_loop_seconds` The number of seconds the recent deployments scheduler will wait between checking for recently updated deployments. Defaults to `5`. **Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.scheduler.recent_deployments_loop_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_SCHEDULER_RECENT_DEPLOYMENTS_LOOP_SECONDS` *** ## ServerServicesSettings Settings for controlling server services ### `cancellation_cleanup` **Type**: [ServerServicesCancellationCleanupSettings](#serverservicescancellationcleanupsettings) **TOML dotted key path**: `server.services.cancellation_cleanup` ### `db_vacuum` **Type**: [ServerServicesDBVacuumSettings](#serverservicesdbvacuumsettings) **TOML dotted key path**: `server.services.db_vacuum` ### `event_persister` **Type**: [ServerServicesEventPersisterSettings](#serverserviceseventpersistersettings) **TOML dotted key path**: `server.services.event_persister` ### `event_logger` **Type**: [ServerServicesEventLoggerSettings](#serverserviceseventloggersettings) **TOML dotted key path**: `server.services.event_logger` ### `foreman` **Type**: [ServerServicesForemanSettings](#serverservicesforemansettings) **TOML dotted key path**: `server.services.foreman` ### `late_runs` **Type**: [ServerServicesLateRunsSettings](#serverserviceslaterunssettings) **TOML dotted key path**: `server.services.late_runs` ### `scheduler` **Type**: [ServerServicesSchedulerSettings](#serverservicesschedulersettings) **TOML dotted key path**: `server.services.scheduler` ### `pause_expirations` **Type**: [ServerServicesPauseExpirationsSettings](#serverservicespauseexpirationssettings) **TOML dotted key path**: `server.services.pause_expirations` 
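Any of the service settings in this reference can be configured either through its TOML dotted key path in `prefect.toml` or through one of its supported environment variables. A short shell sketch (the specific values below are illustrative, not recommendations):

```bash theme={null}
# Illustrative values only: slow the scheduler loop and cap auto-scheduled runs.
export PREFECT_SERVER_SERVICES_SCHEDULER_LOOP_SECONDS=120
export PREFECT_SERVER_SERVICES_SCHEDULER_MAX_RUNS=25
# A service can be disabled entirely via its `enabled` flag.
export PREFECT_SERVER_SERVICES_PAUSE_EXPIRATIONS_ENABLED=false
echo "scheduler loop: ${PREFECT_SERVER_SERVICES_SCHEDULER_LOOP_SECONDS}s"
```

Environment variables take effect the next time the server process starts; the same keys can instead be written as, for example, `server.services.scheduler.loop_seconds = 120` in `prefect.toml`.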
### `repossessor` **Type**: [ServerServicesRepossessorSettings](#serverservicesrepossessorsettings) **TOML dotted key path**: `server.services.repossessor` ### `task_run_recorder` **Type**: [ServerServicesTaskRunRecorderSettings](#serverservicestaskrunrecordersettings) **TOML dotted key path**: `server.services.task_run_recorder` ### `triggers` **Type**: [ServerServicesTriggersSettings](#serverservicestriggerssettings) **TOML dotted key path**: `server.services.triggers` *** ## ServerServicesTaskRunRecorderSettings Settings for controlling the task run recorder service ### `enabled` Whether or not to start the task run recorder service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.task_run_recorder.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TASK_RUN_RECORDER_ENABLED`, `PREFECT_API_SERVICES_TASK_RUN_RECORDER_ENABLED` ### `read_batch_size` The number of task runs the task run recorder will attempt to read from the message broker in one batch. **Type**: `integer` **Default**: `1` **TOML dotted key path**: `server.services.task_run_recorder.read_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TASK_RUN_RECORDER_READ_BATCH_SIZE` ### `batch_size` The number of task runs the task run recorder will attempt to insert in one batch. **Type**: `integer` **Default**: `1` **TOML dotted key path**: `server.services.task_run_recorder.batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TASK_RUN_RECORDER_BATCH_SIZE` ### `flush_interval` The maximum number of seconds between flushes of the task run recorder. 
**Type**: `number` **Default**: `5` **TOML dotted key path**: `server.services.task_run_recorder.flush_interval` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TASK_RUN_RECORDER_FLUSH_INTERVAL` *** ## ServerServicesTriggersSettings Settings for controlling the triggers service ### `enabled` Whether or not to start the triggers service in the server application. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.services.triggers.enabled` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_ENABLED`, `PREFECT_API_SERVICES_TRIGGERS_ENABLED` ### `read_batch_size` The number of events the triggers service will attempt to read from the message broker in one batch. **Type**: `integer` **Default**: `1` **TOML dotted key path**: `server.services.triggers.read_batch_size` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_READ_BATCH_SIZE` ### `pg_notify_reconnect_interval_seconds` The number of seconds to wait before reconnecting to the PostgreSQL NOTIFY/LISTEN connection after an error. Only used when using PostgreSQL as the database. Defaults to `10`. **Type**: `integer` **Default**: `10` **TOML dotted key path**: `server.services.triggers.pg_notify_reconnect_interval_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_PG_NOTIFY_RECONNECT_INTERVAL_SECONDS` ### `pg_notify_heartbeat_interval_seconds` The number of seconds between heartbeat checks for the PostgreSQL NOTIFY/LISTEN connection to ensure it's still alive. Only used when using PostgreSQL as the database. Defaults to `5`. **Type**: `integer` **Default**: `5` **TOML dotted key path**: `server.services.triggers.pg_notify_heartbeat_interval_seconds` **Supported environment variables**: `PREFECT_SERVER_SERVICES_TRIGGERS_PG_NOTIFY_HEARTBEAT_INTERVAL_SECONDS` *** ## ServerSettings Settings for controlling server behavior ### `logging_level` The default logging level for the Prefect API server. 
**Type**: `string` **Default**: `WARNING` **Constraints**: * Allowed values: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' **TOML dotted key path**: `server.logging_level` **Supported environment variables**: `PREFECT_SERVER_LOGGING_LEVEL`, `PREFECT_LOGGING_SERVER_LEVEL` ### `analytics_enabled` When enabled, Prefect sends anonymous data (e.g. count of flow runs, package version) on server startup to help us improve our product. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.analytics_enabled` **Supported environment variables**: `PREFECT_SERVER_ANALYTICS_ENABLED` ### `metrics_enabled` Whether or not to enable Prometheus metrics in the API. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.metrics_enabled` **Supported environment variables**: `PREFECT_SERVER_METRICS_ENABLED`, `PREFECT_API_ENABLE_METRICS` ### `log_retryable_errors` If `True`, log retryable errors in the API and its services. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.log_retryable_errors` **Supported environment variables**: `PREFECT_SERVER_LOG_RETRYABLE_ERRORS`, `PREFECT_API_LOG_RETRYABLE_ERRORS` ### `register_blocks_on_start` If set, any block types that have been imported will be registered with the backend on application startup. If not set, block types must be manually registered. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.register_blocks_on_start` **Supported environment variables**: `PREFECT_SERVER_REGISTER_BLOCKS_ON_START`, `PREFECT_API_BLOCKS_REGISTER_ON_START` ### `memoize_block_auto_registration` Controls whether or not block auto-registration on application start is memoized. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.memoize_block_auto_registration` **Supported environment variables**: `PREFECT_SERVER_MEMOIZE_BLOCK_AUTO_REGISTRATION`, `PREFECT_MEMOIZE_BLOCK_AUTO_REGISTRATION` ### `memo_store_path` Path to the memo store file. 
Defaults to \$PREFECT\_HOME/memo\_store.toml **Type**: `string` **TOML dotted key path**: `server.memo_store_path` **Supported environment variables**: `PREFECT_SERVER_MEMO_STORE_PATH`, `PREFECT_MEMO_STORE_PATH` ### `deployment_schedule_max_scheduled_runs` The maximum number of scheduled runs to create for a deployment. **Type**: `integer` **Default**: `50` **TOML dotted key path**: `server.deployment_schedule_max_scheduled_runs` **Supported environment variables**: `PREFECT_SERVER_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS`, `PREFECT_DEPLOYMENT_SCHEDULE_MAX_SCHEDULED_RUNS` ### `api` **Type**: [ServerAPISettings](#serverapisettings) **TOML dotted key path**: `server.api` ### `concurrency` Settings for controlling server-side concurrency limit handling **Type**: [ServerConcurrencySettings](#serverconcurrencysettings) **TOML dotted key path**: `server.concurrency` ### `database` **Type**: [ServerDatabaseSettings](#serverdatabasesettings) **TOML dotted key path**: `server.database` ### `deployments` Settings for controlling server deployments behavior **Type**: [ServerDeploymentsSettings](#serverdeploymentssettings) **TOML dotted key path**: `server.deployments` ### `docket` Settings for controlling server Docket behavior **Type**: [ServerDocketSettings](#serverdocketsettings) **TOML dotted key path**: `server.docket` ### `ephemeral` **Type**: [ServerEphemeralSettings](#serverephemeralsettings) **TOML dotted key path**: `server.ephemeral` ### `events` Settings for controlling server events behavior **Type**: [ServerEventsSettings](#servereventssettings) **TOML dotted key path**: `server.events` ### `flow_run_graph` Settings for controlling flow run graph behavior **Type**: [ServerFlowRunGraphSettings](#serverflowrungraphsettings) **TOML dotted key path**: `server.flow_run_graph` ### `logs` Settings for controlling server logs behavior **Type**: [ServerLogsSettings](#serverlogssettings) **TOML dotted key path**: `server.logs` ### `services` Settings for controlling server 
services behavior **Type**: [ServerServicesSettings](#serverservicessettings) **TOML dotted key path**: `server.services` ### `tasks` Settings for controlling server tasks behavior **Type**: [ServerTasksSettings](#servertaskssettings) **TOML dotted key path**: `server.tasks` ### `ui` Settings for controlling server UI behavior **Type**: [ServerUISettings](#serveruisettings) **TOML dotted key path**: `server.ui` *** ## ServerTasksSchedulingSettings Settings for controlling server-side behavior related to task scheduling ### `max_scheduled_queue_size` The maximum number of scheduled tasks to queue for submission. **Type**: `integer` **Default**: `1000` **TOML dotted key path**: `server.tasks.scheduling.max_scheduled_queue_size` **Supported environment variables**: `PREFECT_SERVER_TASKS_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE`, `PREFECT_TASK_SCHEDULING_MAX_SCHEDULED_QUEUE_SIZE` ### `max_retry_queue_size` The maximum number of retries to queue for submission. **Type**: `integer` **Default**: `100` **TOML dotted key path**: `server.tasks.scheduling.max_retry_queue_size` **Supported environment variables**: `PREFECT_SERVER_TASKS_SCHEDULING_MAX_RETRY_QUEUE_SIZE`, `PREFECT_TASK_SCHEDULING_MAX_RETRY_QUEUE_SIZE` ### `pending_task_timeout` How long before a PENDING task is made available to another task worker. **Type**: `string` **Default**: `PT0S` **TOML dotted key path**: `server.tasks.scheduling.pending_task_timeout` **Supported environment variables**: `PREFECT_SERVER_TASKS_SCHEDULING_PENDING_TASK_TIMEOUT`, `PREFECT_TASK_SCHEDULING_PENDING_TASK_TIMEOUT` *** ## ServerTasksSettings Settings for controlling server-side behavior related to tasks ### `tag_concurrency_slot_wait_seconds` The number of seconds to wait before retrying when a task run cannot secure a concurrency slot from the server. 
**Type**: `number` **Default**: `10` **Constraints**: * Minimum: 0 **TOML dotted key path**: `server.tasks.tag_concurrency_slot_wait_seconds` **Supported environment variables**: `PREFECT_SERVER_TASKS_TAG_CONCURRENCY_SLOT_WAIT_SECONDS`, `PREFECT_TASK_RUN_TAG_CONCURRENCY_SLOT_WAIT_SECONDS` ### `max_cache_key_length` The maximum number of characters allowed for a task run cache key. **Type**: `integer` **Default**: `2000` **TOML dotted key path**: `server.tasks.max_cache_key_length` **Supported environment variables**: `PREFECT_SERVER_TASKS_MAX_CACHE_KEY_LENGTH`, `PREFECT_API_TASK_CACHE_KEY_MAX_LENGTH` ### `scheduling` **Type**: [ServerTasksSchedulingSettings](#servertasksschedulingsettings) **TOML dotted key path**: `server.tasks.scheduling` *** ## ServerUISettings ### `enabled` Whether or not to serve the Prefect UI. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.ui.enabled` **Supported environment variables**: `PREFECT_SERVER_UI_ENABLED`, `PREFECT_UI_ENABLED` ### `v2_enabled` Whether to serve the experimental V2 UI instead of the default V1 UI. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `server.ui.v2_enabled` **Supported environment variables**: `PREFECT_SERVER_UI_V2_ENABLED` ### `api_url` The connection url for communication from the UI to the API. Defaults to `PREFECT_API_URL` if set. Otherwise, the default URL is generated from `PREFECT_SERVER_API_HOST` and `PREFECT_SERVER_API_PORT`. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.ui.api_url` **Supported environment variables**: `PREFECT_SERVER_UI_API_URL`, `PREFECT_UI_API_URL` ### `serve_base` The base URL path to serve the Prefect UI from. **Type**: `string` **Default**: `/` **TOML dotted key path**: `server.ui.serve_base` **Supported environment variables**: `PREFECT_SERVER_UI_SERVE_BASE`, `PREFECT_UI_SERVE_BASE` ### `static_directory` The directory to serve static files from. 
This should be used when running into permissions issues when attempting to serve the UI from the default directory (for example when running in a Docker container). **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `server.ui.static_directory` **Supported environment variables**: `PREFECT_SERVER_UI_STATIC_DIRECTORY`, `PREFECT_UI_STATIC_DIRECTORY` ### `show_promotional_content` Whether or not to display promotional content in the UI, including upgrade prompts and marketing banners. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `server.ui.show_promotional_content` **Supported environment variables**: `PREFECT_SERVER_UI_SHOW_PROMOTIONAL_CONTENT` *** ## TasksRunnerSettings ### `thread_pool_max_workers` The maximum number of workers for ThreadPoolTaskRunner. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `tasks.runner.thread_pool_max_workers` **Supported environment variables**: `PREFECT_TASKS_RUNNER_THREAD_POOL_MAX_WORKERS`, `PREFECT_TASK_RUNNER_THREAD_POOL_MAX_WORKERS` ### `process_pool_max_workers` The maximum number of workers for ProcessPoolTaskRunner. **Type**: `integer | None` **Default**: `None` **TOML dotted key path**: `tasks.runner.process_pool_max_workers` **Supported environment variables**: `PREFECT_TASKS_RUNNER_PROCESS_POOL_MAX_WORKERS` *** ## TasksSchedulingSettings ### `default_storage_block` The `block-type/block-document` slug of a block to use as the default storage for autonomous tasks. **Type**: `string | None` **Default**: `None` **TOML dotted key path**: `tasks.scheduling.default_storage_block` **Supported environment variables**: `PREFECT_TASKS_SCHEDULING_DEFAULT_STORAGE_BLOCK`, `PREFECT_TASK_SCHEDULING_DEFAULT_STORAGE_BLOCK` ### `delete_failed_submissions` Whether or not to delete failed task submissions from the database. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `tasks.scheduling.delete_failed_submissions` **Supported environment variables**: `PREFECT_TASKS_SCHEDULING_DELETE_FAILED_SUBMISSIONS`, `PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS` *** ## TasksSettings ### `refresh_cache` If `True`, enables a refresh of cached results: re-executing the task will refresh the cached results. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `tasks.refresh_cache` **Supported environment variables**: `PREFECT_TASKS_REFRESH_CACHE` ### `default_no_cache` If `True`, sets the default cache policy on all tasks to `NO_CACHE`. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `tasks.default_no_cache` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_NO_CACHE` ### `disable_caching` If `True`, disables caching on all tasks regardless of cache policy. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `tasks.disable_caching` **Supported environment variables**: `PREFECT_TASKS_DISABLE_CACHING` ### `default_retries` This value sets the default number of retries for all tasks. **Type**: `integer` **Default**: `0` **Constraints**: * Minimum: 0 **TOML dotted key path**: `tasks.default_retries` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_RETRIES`, `PREFECT_TASK_DEFAULT_RETRIES` ### `default_retry_delay_seconds` This value sets the default retry delay seconds for all tasks. **Type**: `string | integer | number | array | None` **Default**: `0` **TOML dotted key path**: `tasks.default_retry_delay_seconds` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_RETRY_DELAY_SECONDS`, `PREFECT_TASK_DEFAULT_RETRY_DELAY_SECONDS` ### `default_persist_result` If `True`, results will be persisted by default for all tasks. Set to `False` to disable persistence by default. Note that setting to `False` will override the behavior set by a parent flow or task. 
**Type**: `boolean | None` **Default**: `None` **TOML dotted key path**: `tasks.default_persist_result` **Supported environment variables**: `PREFECT_TASKS_DEFAULT_PERSIST_RESULT` ### `runner` Settings for controlling task runner behavior **Type**: [TasksRunnerSettings](#tasksrunnersettings) **TOML dotted key path**: `tasks.runner` ### `scheduling` Settings for controlling client-side task scheduling behavior **Type**: [TasksSchedulingSettings](#tasksschedulingsettings) **TOML dotted key path**: `tasks.scheduling` *** ## TelemetrySettings Settings for configuring Prefect telemetry ### `enable_resource_metrics` Whether to enable OS-level resource metric collection in flow run subprocesses. **Type**: `boolean` **Default**: `True` **TOML dotted key path**: `telemetry.enable_resource_metrics` **Supported environment variables**: `PREFECT_TELEMETRY_ENABLE_RESOURCE_METRICS` ### `resource_metrics_interval_seconds` Interval in seconds between resource metric collections. **Type**: `integer` **Default**: `10` **Constraints**: * Minimum: 1 **TOML dotted key path**: `telemetry.resource_metrics_interval_seconds` **Supported environment variables**: `PREFECT_TELEMETRY_RESOURCE_METRICS_INTERVAL_SECONDS` *** ## TestingSettings ### `test_mode` If `True`, places the API in test mode. This may modify behavior to facilitate testing. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `testing.test_mode` **Supported environment variables**: `PREFECT_TESTING_TEST_MODE`, `PREFECT_TEST_MODE` ### `unit_test_mode` This setting only exists to facilitate unit testing. If `True`, code is executing in a unit test context. Defaults to `False`. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `testing.unit_test_mode` **Supported environment variables**: `PREFECT_TESTING_UNIT_TEST_MODE`, `PREFECT_UNIT_TEST_MODE` ### `unit_test_loop_debug` If `True` turns on debug mode for the unit testing event loop. 
**Type**: `boolean` **Default**: `True` **TOML dotted key path**: `testing.unit_test_loop_debug` **Supported environment variables**: `PREFECT_TESTING_UNIT_TEST_LOOP_DEBUG`, `PREFECT_UNIT_TEST_LOOP_DEBUG` ### `test_setting` This setting only exists to facilitate unit testing. If in test mode, this setting will return its value. Otherwise, it returns `None`. **Type**: `None` **Default**: `FOO` **TOML dotted key path**: `testing.test_setting` **Supported environment variables**: `PREFECT_TESTING_TEST_SETTING`, `PREFECT_TEST_SETTING` *** ## WorkerSettings ### `debug_mode` If True, enables debug mode for the worker only. Unlike PREFECT\_DEBUG\_MODE, this setting does not propagate to flow runs executed by the worker. **Type**: `boolean` **Default**: `False` **TOML dotted key path**: `worker.debug_mode` **Supported environment variables**: `PREFECT_WORKER_DEBUG_MODE` ### `heartbeat_seconds` Number of seconds a worker should wait between sending a heartbeat. **Type**: `number` **Default**: `30` **TOML dotted key path**: `worker.heartbeat_seconds` **Supported environment variables**: `PREFECT_WORKER_HEARTBEAT_SECONDS` ### `query_seconds` Number of seconds a worker should wait between queries for scheduled work. **Type**: `number` **Default**: `10` **TOML dotted key path**: `worker.query_seconds` **Supported environment variables**: `PREFECT_WORKER_QUERY_SECONDS` ### `prefetch_seconds` The number of seconds into the future a worker should query for scheduled work. **Type**: `number` **Default**: `10` **TOML dotted key path**: `worker.prefetch_seconds` **Supported environment variables**: `PREFECT_WORKER_PREFETCH_SECONDS` ### `enable_cancellation` Enable worker-side flow run cancellation for pending flow runs. When enabled, the worker will terminate infrastructure for flow runs that are cancelled while still in PENDING state (before the runner starts). 
**Type**: `boolean` **Default**: `False` **TOML dotted key path**: `worker.enable_cancellation` **Supported environment variables**: `PREFECT_WORKER_ENABLE_CANCELLATION` ### `cancellation_poll_seconds` Number of seconds between polls for cancelling flow runs. Used as a fallback when the WebSocket connection for real-time cancellation events is unavailable. **Type**: `number` **Default**: `120` **TOML dotted key path**: `worker.cancellation_poll_seconds` **Supported environment variables**: `PREFECT_WORKER_CANCELLATION_POLL_SECONDS` ### `webserver` Settings for a worker's webserver **Type**: [WorkerWebserverSettings](#workerwebserversettings) **TOML dotted key path**: `worker.webserver` *** ## WorkerWebserverSettings ### `host` The host address the worker's webserver should bind to. **Type**: `string` **Default**: `0.0.0.0` **TOML dotted key path**: `worker.webserver.host` **Supported environment variables**: `PREFECT_WORKER_WEBSERVER_HOST` ### `port` The port the worker's webserver should bind to. **Type**: `integer` **Default**: `8080` **TOML dotted key path**: `worker.webserver.port` **Supported environment variables**: `PREFECT_WORKER_WEBSERVER_PORT` *** # Artifacts Source: https://docs.prefect.io/v3/concepts/artifacts Artifacts are persisted outputs designed for human consumption and available in the UI. Prefect artifacts: * are visually rich annotations on flow and task runs * are human-readable visual metadata defined in code * come in standardized formats such as tables, progress indicators, images, Markdown, and links * are stored in Prefect Cloud or Prefect server and rendered in the Prefect UI * make it easy to visualize outputs or side effects that your runs produce, and capture updates over time *(Screenshot: a Markdown artifact rendering a sales report in the UI)* Common use cases for artifacts include: * **Progress** indicators: Publish progress indicators for long-running tasks. This helps monitor the progress of your tasks and flows and ensure they are running as expected. 
* **Debugging**: Publish data that you care about in the UI to easily see when and where your results were written. If an artifact doesn't look the way you expect, you can find out which flow run last updated it, and you can click through a link in the artifact to a storage location (such as an S3 bucket). * **Data quality checks**: Publish data quality checks from in-progress tasks to ensure that data quality is maintained throughout a pipeline. Artifacts make for great performance graphs. For example, you can visualize a long-running machine learning model training run. You can also track artifact versions, making it easier to identify changes in your data. * **Documentation**: Publish documentation and sample data to help you keep track of your work and share information. For example, add a description to signify why a piece of data is important. ## Artifact types There are five artifact types: * links * Markdown * progress * images * tables Each artifact created within a task is displayed individually in the Prefect UI. This means that each call to `create_link_artifact()` or `create_markdown_artifact()` generates a distinct artifact. Unlike the Python `print()` function (where you can concatenate multiple calls to include additional items in a report), these artifact creation functions must be called multiple times, if necessary. To create artifacts such as reports or summaries using `create_markdown_artifact()`, define your message string and then pass it to `create_markdown_artifact()` to create the artifact. For more information on how to create and use artifacts, see the [how to produce workflow artifacts](/v3/how-to-guides/workflows/artifacts/) guide. # Assets Source: https://docs.prefect.io/v3/concepts/assets Assets represent objects your workflows produce. Assets in Prefect represent any outcome or output of your Prefect workflows. 
They provide an interface to model all forms of data and model lineage, track dependencies between data transformations, and monitor the health of pipelines at the asset level rather than just the compute level. ## Core concepts An asset is fundamentally defined by its **key**, a URI that uniquely identifies an asset, often specifying an external storage system in which that asset lives. Asset keys serve as both identifiers and organizational structures—assets are automatically grouped by their URI scheme (e.g., `s3://`, `postgres://`, `snowflake://`) and can be hierarchically organized based on their path structure. Assets exist in three primary states within Prefect: * **Materialized**: The asset has been created, updated, or overwritten by a Prefect workflow * **Referenced**: The asset is consumed as input by a workflow but not produced by it * **External**: The asset exists outside the Prefect ecosystem but is referenced as a dependency ## Asset lifecycle ### Materializations A **materialization** occurs when a workflow mutates an asset through creation, updating, or overwriting. Materializations are declared using the `@materialize` decorator, which functions as a specialized task decorator that tracks asset creation intent. The materialization process operates on an "intent to materialize" model: when a function decorated with `@materialize` executes, Prefect records the materialization attempt. Success or failure of the materialization is determined by the underlying task's execution state. ```python theme={null} from prefect.assets import materialize @materialize("s3://data-lake/processed/customer-data.csv") def process_customer_data(): # Asset materialization logic pass ``` ### References A **reference** occurs when an asset appears as an upstream dependency in another asset's materialization. 
References are automatically inferred from the task execution graph—when the output of one materialization flows as input to another, the dependency relationship is captured. References can also be explicitly declared through the `asset_deps` parameter, which is particularly useful for modeling dependencies on external systems or when the task graph alone doesn't fully capture the data dependencies. ### Metadata Asset definitions include optional metadata about that asset. These asset properties should have one source of truth to avoid conflicts. When you materialize an asset with properties, those properties perform a complete overwrite of all metadata fields for that asset. Updates to asset metadata occur at runtime from any workflow that specifies metadata fields. ## Dependency modeling Asset dependencies are determined through two complementary mechanisms: **Task graph inference**: When materialized assets flow through task parameters, Prefect automatically constructs the dependency graph. Each materialization acts as a dependency accumulation point, gathering all upstream assets and serving as the foundation for downstream materializations. **Explicit declaration**: The `asset_deps` parameter allows direct specification of asset dependencies, enabling modeling of relationships that aren't captured in the task execution flow. ```python theme={null} from prefect.assets import materialize @materialize( "s3://warehouse/enriched-data.csv", asset_deps=["postgres://db/reference-tables", "s3://external/vendor-data.csv"] ) def enrich_data(): # Explicitly depends on external database and vendor data pass ``` The backend will track these dependencies *across workflow boundaries*, exposing a global view of asset dependencies within your workspace. 
## Asset metadata and properties Assets support rich metadata through the `AssetProperties` class, which provides organizational context and improves discoverability: * **Name**: Human-readable identifier for the asset * **Description**: Detailed documentation supporting Markdown formatting * **Owners**: Responsible parties, with special UI treatment for Prefect users and teams * **URL**: Web location for accessing or viewing the asset Additionally, assets support dynamic metadata through the `add_asset_metadata()` function, allowing runtime information like row counts, processing times, and data quality metrics to be attached to materialization events. ## Asset health monitoring Currently asset health provides a *visual* indicator of the operational status of data artifacts based on their most recent materialization attempt: * **Green**: Last materialization succeeded * **Red**: Last materialization failed * **Gray**: No materialization recorded, or asset has only been referenced This health model enables data teams to quickly identify problematic data pipelines at the artifact level, complementing traditional task-level monitoring with data-centric observability. Soon these statuses will be backed by a corresponding event. ## Event emission and integration Assets integrate deeply with Prefect's event system, automatically emitting structured events that enable downstream automation and monitoring: ### Event types * **Materialization events**: These events look like `prefect.asset.materialization.{succeeded|failed}` and are emitted when assets are referenced by the `@materialize` decorator, with status determined by the underlying task execution state. * **Reference events**: These events look like `prefect.asset.referenced` and are emitted for all upstream assets when a materialization occurs, independent of success or failure. 
### Event emission rules Asset events follow specific emission patterns based on task execution state: * **Completed states**: Emit `prefect.asset.materialization.succeeded` for downstream assets and `prefect.asset.referenced` for upstream assets * **Failed states**: Emit `prefect.asset.materialization.failed` for downstream assets and `prefect.asset.referenced` for upstream assets * **Cached states**: No asset events are emitted, as cached executions don't represent new asset state changes Reference events are always emitted for upstream assets regardless of materialization success, enabling comprehensive dependency tracking even when downstream processes fail. ### Event payloads Materialization events include any metadata added during task execution through `add_asset_metadata()`, while reference events contain basic asset identification information. This enables rich event-driven automation based on both asset state changes and associated metadata. ## Asset organization and discovery Assets are automatically organized in the Prefect UI based on their URI structure: * **Grouping by scheme**: Assets with the same URI scheme (e.g., `s3://`, `postgres://`) are grouped together * **Hierarchical organization**: URI paths create nested organization structures * **Search and filtering**: Asset metadata enables discovery through names, descriptions, and ownership information ## Further Reading * [How to use assets to track workflow outputs](/v3/how-to-guides/workflows/assets) * [How to customize asset metadata](/v3/advanced/assets) # Automations Source: https://docs.prefect.io/v3/concepts/automations Learn how to automatically take action in response to events. Automations enable you to configure [actions](#actions) that execute automatically based on [trigger](#triggers) conditions. Potential triggers include the occurrence of events from changes in a flow run's state—or the absence of such events. 
You can define your own custom trigger to fire based on a custom [event](/v3/concepts/event-triggers/) defined in Python code. With Prefect Cloud you can even create [webhooks](/v3/automate/events/webhook-triggers/) that can receive data for use in actions. Actions you can take upon a trigger include: * creating flow runs from existing deployments * pausing and resuming schedules or work pools * sending custom notifications ### Triggers Triggers specify the conditions under which your action should be performed. The Prefect UI includes templates for many common conditions, such as: * Flow run state change (Flow Run Tags are only evaluated with `OR` criteria) * Work pool status * Work queue status * Deployment status * Metric thresholds, such as average duration, lateness, or completion percentage * Custom event triggers Importantly, you can configure the triggers not only in reaction to events, but also proactively: in the absence of an expected event. Configuring a trigger for an automation in Prefect Cloud. For example, in the case of flow run state change triggers, you might expect production flows to finish in no longer than thirty minutes. But transient infrastructure or network issues could cause your flow to get “stuck” in a running state. A trigger could kick off an action if the flow stays in a running state for more than 30 minutes. This action could be taken on the flow itself, such as cancelling or restarting it. Or the action could take the form of a notification for someone to take manual remediation steps. Or you could set both actions to take place when the trigger occurs. ### Actions Actions specify what your automation does when its trigger criteria are met. 
Current action types include: | Action | Type | | -------------------------------------------------------------- | ----------------------- | | Cancel a flow run | `cancel-flow-run` | | Change the state of a flow run | `change-flow-run-state` | | Suspend a flow run | `suspend-flow-run` | | Resume a flow run | `resume-flow-run` | | Run a deployment | `run-deployment` | | Pause a deployment schedule | `pause-deployment` | | Resume a deployment schedule | `resume-deployment` | | Pause a work pool | `pause-work-pool` | | Resume a work pool | `resume-work-pool` | | Pause a work queue | `pause-work-queue` | | Resume a work queue | `resume-work-queue` | | Pause an automation | `pause-automation` | | Resume an automation | `resume-automation` | | Send a [notification](#sending-notifications-with-automations) | `send-notification` | | Call a webhook | `call-webhook` | Configuring an action for an automation in Prefect Cloud. ### Selected and inferred action targets Some actions require you to either select the target of the action, or specify that the target of the action should be inferred. Selected targets are simple and useful for when you know exactly what object your action should act on. For example, the case of a cleanup flow you want to run or a specific notification you want to send. Inferred targets are deduced from the trigger itself. For example, if a trigger fires on a flow run that is stuck in a running state, and the action is to cancel an inferred flow run—the flow run that caused the trigger to fire. Similarly, if a trigger fires on a work queue event and the corresponding action is to pause an inferred work queue, the inferred work queue is the one that emitted the event. Prefect infers the relevant event whenever possible, but sometimes one does not exist. Specify a name and, optionally, a description for the automation. 
### Tracing automation actions When an automation fires, it emits events that you can use to trace what happened: * `prefect.automation.triggered` or `prefect.automation.resolved` - emitted when the trigger condition is met * `prefect.automation.action.triggered` - emitted when an action starts * `prefect.automation.action.executed` or `prefect.automation.action.failed` - emitted when an action completes The action events include related resources that link back to their source events: | Related resource role | Description | | ---------------------------- | ---------------------------------------------------------------------------------- | | `triggering-event` | The original event that caused the automation to fire | | `automation-triggered-event` | The `automation.triggered` or `automation.resolved` event that prompted the action | These links help you trace from an action failure back to the specific trigger and original event that caused it. ## Sending notifications with automations Automations support sending notifications through any predefined block that is capable of and configured to send a message, including: * Slack message to a channel * Microsoft Teams message to a channel * Email to an email address Configuring notifications for an automation in Prefect Cloud. For custom notification payloads, see the [custom notifications guide](/v3/how-to-guides/automations/custom-notifications). ## Templating with Jinja You can access templated variables with automation actions through [Jinja](https://palletsprojects.com/p/jinja/) syntax. Templated variables enable you to dynamically include details from an automation trigger, such as a flow or pool name. Jinja templated variable syntax wraps the variable name in double curly brackets, like this: `{{ variable }}`. 
You can access properties of the underlying flow run objects including: * [flow\_run](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.FlowRun) * [flow](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.Flow) * [deployment](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.Deployment) * [work\_queue](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.WorkQueue) * [work\_pool](https://reference.prefect.io/prefect/server/schemas/core/#prefect.server.schemas.core.WorkPool) In addition to its native properties, each object includes an `id` along with `created` and `updated` timestamps. The `flow_run|ui_url` token returns the URL to view the flow run in the UI. Here's an example relevant to a flow run state-based notification: ``` Flow run {{ flow_run.name }} entered state {{ flow_run.state.name }}. Timestamp: {{ flow_run.state.timestamp }} Flow ID: {{ flow_run.flow_id }} Flow Run ID: {{ flow_run.id }} State message: {{ flow_run.state.message }} ``` The resulting Slack webhook notification looks something like this: Configuring notifications for an automation in Prefect Cloud. You could include `flow` and `deployment` properties: ``` Flow run {{ flow_run.name }} for flow {{ flow.name }} entered state {{ flow_run.state.name }} with message {{ flow_run.state.message }} Flow tags: {{ flow_run.tags }} Deployment name: {{ deployment.name }} Deployment version: {{ deployment.version }} Deployment parameters: {{ deployment.parameters }} ``` An automation that reports on work pool status might include notifications using `work_pool` properties: ``` Work pool status alert! Name: {{ work_pool.name }} Last polled: {{ work_pool.last_polled }} ``` In addition to those shortcuts for flows, deployments, and work pools, you have access to the automation and the event that triggered the automation. 
See the [Automations API](https://app.prefect.cloud/api/docs#tag/Automations) for additional details. ``` Automation: {{ automation.name }} Description: {{ automation.description }} Event: {{ event.id }} Resource: {% for label, value in event.resource %} {{ label }}: {{ value }} {% endfor %} Related Resources: {% for related in event.related %} Role: {{ related.role }} {% for label, value in related %} {{ label }}: {{ value }} {% endfor %} {% endfor %} ``` Note that this example also illustrates the ability to use Jinja features such as iterator and for loop [control structures](https://jinja.palletsprojects.com/en/3.1.x/templates/#list-of-control-structures) when templating notifications. For more on the common use case of passing an upstream flow run's parameters to the flow run invoked by the automation, see the [Passing parameters to a flow run](/v3/how-to-guides/automations/access-parameters-in-templates/) guide. ## Further reading * To learn more about Prefect events, which can trigger automations, see the [events docs](/v3/concepts/events/). * See the [webhooks guide](/v3/how-to-guides/cloud/create-a-webhook/) to learn how to create webhooks and receive external events. # Blocks Source: https://docs.prefect.io/v3/concepts/blocks Prefect blocks allow you to manage configuration schemas, infrastructure, and secrets for use with deployments or flow scripts. Prefect blocks store typed configuration that can be used across workflows and deployments. The most common use case for blocks is storing credentials used to access external systems such as AWS or GCP. Prefect supports [a large number of common blocks](#pre-registered-blocks) and provides a Python SDK for creating your own. **Blocks and parameters** Blocks are useful for sharing configuration across flow runs and between flows. For configuration that will change between flow runs, we recommend using [parameters](/v3/develop/write-flows/#parameters). 
## How blocks work There are three layers to a block: its *type*, a *document*, and a Python *class*. ### Block type A block *type* is essentially a schema registered with the Prefect API. This schema can be inspected and discovered in the UI on the **Blocks** page. To see block types available for configuration, use `prefect block type ls` from the CLI or navigate to the **Blocks** page in the UI and click **+**. The block catalogue in the UI These types separate blocks from [Prefect variables](/v3/develop/variables/), which are unstructured JSON documents. In addition, block schemas allow for fields of `SecretStr` type which are stored with additional encryption and not displayed by default in the UI. Block types are identified by a *slug* that is not configurable. ```python theme={null} from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float # register the block type under the slug 'cube' Cube.register_type_and_schema() ``` Users should rarely need to register types in this way - saving a block document will also automatically register its type. ### Block document A block *document* is an instantiation of the schema, or block type. A document contains *specific* values for each field defined in the schema. All block types allow for the creation of as many documents as you wish. Building on our example above: ```python theme={null} from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float # instantiate the type with specific values rubiks_cube = Cube(edge_length_inches=2.25) # store those values in a block document # on the server for future use rubiks_cube.save("rubiks-cube") # instantiate and save another block document tiny_cube = Cube(edge_length_inches=0.001) tiny_cube.save("tiny") ``` Block documents can also be created and updated in the UI or API for easy change management. 
This allows you to work with slowly changing configuration without having to redeploy all workflows that rely on it; for example, you may use this to rotate credentials on a regular basis without touching your deployments. ### Block class A block *class* is the primary user-facing object; it is a Python class whose attributes are loaded from a block document. Most Prefect blocks encapsulate additional functionality built on top of the block document. For example, an `S3Bucket` block contains methods for downloading data from, or uploading data to, an S3 bucket; a `SnowflakeConnector` block contains methods for querying Snowflake databases. Returning to our `Cube` example from above: ```python theme={null} from prefect.blocks.core import Block class Cube(Block): edge_length_inches: float def get_volume(self): return self.edge_length_inches ** 3 def get_surface_area(self): return 6 * self.edge_length_inches ** 2 rubiks_cube = Cube.load("rubiks-cube") rubiks_cube.get_volume() # 11.390625 ``` The class itself is *not* stored server-side when registering block types and block documents. For this reason, we highly recommend loading block documents by first importing the block class and then calling its `load` method with the relevant document name. ## Pre-registered blocks ### Built-in blocks Commonly used block types come built-in with Prefect. You can create and use these block types through the UI without installing any additional packages. | Block | Slug | Description | | ----------------------- | -------------------- | ------------------------------------------------------------------------------------------------ | | Custom Webhook | `custom-webhook` | Call custom webhooks. | | Discord Webhook | `discord-webhook` | Call Discord webhooks. | | Local File System | `local-file-system` | Store data as a file on a local file system. | | Mattermost Webhook | `mattermost-webhook` | Send notifications through a provided Mattermost webhook. 
| | Microsoft Teams Webhook | `ms-teams-webhook` | Send notifications through a provided Microsoft Teams webhook. | | Opsgenie Webhook | `opsgenie-webhook` | Send notifications through a provided Opsgenie webhook. | | Pager Duty Webhook | `pager-duty-webhook` | Send notifications through a provided PagerDuty webhook. | | Remote File System | `remote-file-system` | Access files on a remote file system. | | Secret | `secret` | Store a secret value. The value will be obfuscated when this block is logged or shown in the UI. | | Sendgrid Email | `sendgrid-email` | Send notifications through Sendgrid email. | | Slack Webhook | `slack-webhook` | Send notifications through a provided Slack webhook. | | SMB | `smb` | Store data as a file on an SMB share. | | Twilio SMS | `twilio-sms` | Send notifications through Twilio SMS. | Built-in blocks should be registered the first time you start a Prefect server. If the auto-registration fails, you can manually register the blocks using `prefect block register`. For example, to register all built-in notification blocks, run `prefect block register -m prefect.blocks.notifications`. ### Blocks in Prefect integration libraries Some block types that appear in the UI can be created immediately, but the corresponding integration library must be installed before they can be used. For example, an AWS Secret block can be created, but not used until the [`prefect-aws` library](/integrations/prefect-aws/) is installed. Find available block types in many of the published [Prefect integrations libraries](/integrations/). If a block type is not available in the UI, you can [register it](#register-blocks) through the CLI. 
| Block | Slug | Integration | | ------------------------------------ | -------------------------------------- | ------------------------------------------------------- | | ECS Task | `ecs-task` | [prefect-aws](/integrations/prefect-aws/) | | MinIO Credentials | `minio-credentials` | [prefect-aws](/integrations/prefect-aws/) | | S3 Bucket | `s3-bucket` | [prefect-aws](/integrations/prefect-aws/) | | Azure Blob Storage Credentials | `azure-blob-storage-credentials` | [prefect-azure](/integrations/prefect-azure/) | | Azure Container Instance Credentials | `azure-container-instance-credentials` | [prefect-azure](/integrations/prefect-azure/) | | Azure Container Instance Job | `azure-container-instance-job` | [prefect-azure](/integrations/prefect-azure/) | | Azure Cosmos DB Credentials | `azure-cosmos-db-credentials` | [prefect-azure](/integrations/prefect-azure/) | | AzureML Credentials | `azureml-credentials` | [prefect-azure](/integrations/prefect-azure/) | | BitBucket Credentials | `bitbucket-credentials` | [prefect-bitbucket](/integrations/prefect-bitbucket/) | | BitBucket Repository | `bitbucket-repository` | [prefect-bitbucket](/integrations/prefect-bitbucket/) | | Databricks Credentials | `databricks-credentials` | [prefect-databricks](/integrations/prefect-databricks/) | | dbt CLI BigQuery Target Configs | `dbt-cli-bigquery-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Profile | `dbt-cli-profile` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt Cloud Credentials | `dbt-cloud-credentials` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Global Configs | `dbt-cli-global-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Postgres Target Configs | `dbt-cli-postgres-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Snowflake Target Configs | `dbt-cli-snowflake-target-configs` | [prefect-dbt](/integrations/prefect-dbt/) | | dbt CLI Target Configs | `dbt-cli-target-configs` | 
[prefect-dbt](/integrations/prefect-dbt/) | | Docker Container | `docker-container` | [prefect-docker](/integrations/prefect-docker/) | | Docker Host | `docker-host` | [prefect-docker](/integrations/prefect-docker/) | | Docker Registry Credentials | `docker-registry-credentials` | [prefect-docker](/integrations/prefect-docker/) | | Email Server Credentials | `email-server-credentials` | [prefect-email](/integrations/prefect-email/) | | BigQuery Warehouse | `bigquery-warehouse` | [prefect-gcp](/integrations/prefect-gcp/) | | GCP Cloud Run Job | `cloud-run-job` | [prefect-gcp](/integrations/prefect-gcp/) | | GCP Credentials | `gcp-credentials` | [prefect-gcp](/integrations/prefect-gcp/) | | GcpSecret | `gcpsecret` | [prefect-gcp](/integrations/prefect-gcp/) | | GCS Bucket | `gcs-bucket` | [prefect-gcp](/integrations/prefect-gcp/) | | Vertex AI Custom Training Job | `vertex-ai-custom-training-job` | [prefect-gcp](/integrations/prefect-gcp/) | | GitHub Credentials | `github-credentials` | [prefect-github](/integrations/prefect-github/) | | GitHub Repository | `github-repository` | [prefect-github](/integrations/prefect-github/) | | GitLab Credentials | `gitlab-credentials` | [prefect-gitlab](/integrations/prefect-gitlab/) | | GitLab Repository | `gitlab-repository` | [prefect-gitlab](/integrations/prefect-gitlab/) | | Kubernetes Cluster Config | `kubernetes-cluster-config` | [prefect-kubernetes](/integrations/prefect-kubernetes/) | | Kubernetes Credentials | `kubernetes-credentials` | [prefect-kubernetes](/integrations/prefect-kubernetes/) | | Kubernetes Job | `kubernetes-job` | [prefect-kubernetes](/integrations/prefect-kubernetes/) | | Shell Operation | `shell-operation` | [prefect-shell](/integrations/prefect-shell/) | | Slack Credentials | `slack-credentials` | [prefect-slack](/integrations/prefect-slack/) | | Slack Incoming Webhook | `slack-incoming-webhook` | [prefect-slack](/integrations/prefect-slack/) | | Snowflake Connector | `snowflake-connector` | 
[prefect-snowflake](/integrations/prefect-snowflake/) | | Snowflake Credentials | `snowflake-credentials` | [prefect-snowflake](/integrations/prefect-snowflake/) | | Database Credentials | `database-credentials` | [prefect-sqlalchemy](/integrations/prefect-sqlalchemy/) | | SQLAlchemy Connector | `sqlalchemy-connector` | [prefect-sqlalchemy](/integrations/prefect-sqlalchemy/) | Anyone can create a custom block type and, optionally, share it with the community. ## Additional resources # Caching Source: https://docs.prefect.io/v3/concepts/caching Caching refers to the ability of a task run to enter a `Completed` state and return a predetermined value without actually running the code that defines the task. Caching allows you to efficiently reuse [results of tasks](/v3/develop/results/) that may be expensive to compute and ensure that your pipelines are idempotent when retrying them due to unexpected failure. By default Prefect's caching logic is based on the following attributes of a task invocation: * the inputs provided to the task * the code definition of the task * the prevailing flow run ID, or if executed autonomously, the prevailing task run ID These values are hashed to compute the task's *cache key*. This implies that, by default, calling the same task with the same inputs more than once within a flow will result in cached behavior for all calls after the first. This behavior can be configured - see [customizing the cache](/v3/develop/write-tasks#customizing-the-cache) below. **Caching requires result persistence** Caching requires result persistence, which is off by default. To turn on result persistence for all of your tasks use the `PREFECT_RESULTS_PERSIST_BY_DEFAULT` setting: ``` prefect config set PREFECT_RESULTS_PERSIST_BY_DEFAULT=true ``` See [managing results](/v3/develop/results/) for more details on managing your result configuration, and [settings](/v3/develop/settings-and-profiles) for more details on managing Prefect settings. 
## Cache keys To determine whether a task run should retrieve a cached state, Prefect uses the concept of a "cache key". A cache key is a computed string value that determines where the task's return value will be persisted within its configured result storage. When a task run begins, Prefect first computes its cache key and uses this key to look up a record in the task's result storage. If an unexpired record is found, this result is returned and the task does not run but instead enters a `Cached` state with the corresponding result value. Cache keys can be shared by the same task across different flows, and even among different tasks, so long as they all share a common result storage location. By default Prefect stores results locally in `~/.prefect/storage/`. The filenames in this directory will correspond exactly to computed cache keys from your task runs. **Relationship with result persistence** Task caching and result persistence are intimately related. Because task caching relies on loading a known result, task caching will only work when your task can persist its output to a fixed and known location. Therefore any configuration which explicitly avoids result persistence will result in your task never using a cache, for example setting `persist_result=False`. ## Cache policies Cache key computation can be configured through the use of *cache policies*. A cache policy is a recipe for computing cache keys for a given task. Prefect comes prepackaged with a few common cache policies: * `DEFAULT`: this cache policy uses the task's inputs, its code definition, as well as the prevailing flow run ID to compute the task's cache key. * `INPUTS`: this cache policy uses *only* the task's inputs to compute the cache key. * `TASK_SOURCE`: this cache policy only considers raw lines of code in the task (and not the source code of nested tasks) to compute the cache key. 
* `FLOW_PARAMETERS`: this cache policy uses *only* the parameter values provided to the parent flow run to compute the cache key. * `NO_CACHE`: this cache policy always returns `None` and therefore avoids caching and result persistence altogether. These policies can be set using the `cache_policy` keyword on the [task decorator](https://reference.prefect.io/prefect/tasks/#prefect.tasks.task). ## Customizing the cache Prefect allows you to configure task caching behavior in numerous ways. ### Cache expiration All cache keys can optionally be given an *expiration* through the `cache_expiration` keyword on the [task decorator](https://reference.prefect.io/prefect/tasks/#prefect.tasks.task). This keyword accepts a `datetime.timedelta` specifying a duration for which the cached value should be considered valid. Providing an expiration value results in Prefect persisting an expiration timestamp alongside the result record for the task. This expiration is then applied to *all* other tasks that may share this cache key. ### Cache policies Cache policies can be composed and altered using basic Python syntax to form more complex policies. For example, all task policies except for `NO_CACHE` can be *added* together to form new policies that combine the individual policies' logic into a larger cache key computation. Combining policies in this way results in caches that are *easier* to invalidate. For example: ```python theme={null} from prefect import task from prefect.cache_policies import TASK_SOURCE, INPUTS @task(cache_policy=TASK_SOURCE + INPUTS) def my_cached_task(x: int): return x + 42 ``` This task will rerun anytime you provide new values for `x`, *or* anytime you change the underlying code. 
The `INPUTS` policy is a special policy that allows you to *subtract* string values to ignore certain task inputs:

```python theme={null}
from prefect import task
from prefect.cache_policies import INPUTS

my_custom_policy = INPUTS - 'debug'


@task(cache_policy=my_custom_policy)
def my_cached_task(x: int, debug: bool = False):
    print('running...')
    return x + 42


my_cached_task(1)
my_cached_task(1, debug=True)  # still uses the cache
```

### Cache key functions

You can configure custom cache policy logic through the use of cache key functions. A cache key function is a function that accepts two positional arguments:

* The first argument corresponds to the `TaskRunContext`, which stores task run metadata. For example, this object has attributes `task_run_id`, `flow_run_id`, and `task`, all of which can be used in your custom logic.
* The second argument corresponds to a dictionary of input values to the task. For example, if your task has the signature `fn(x, y, z)` then the dictionary will have keys "x", "y", and "z" with corresponding values that can be used to compute your cache key.

This function can then be specified using the `cache_key_fn` argument on the [task decorator](https://reference.prefect.io/prefect/tasks/#prefect.tasks.task).

For example:

```python theme={null}
from prefect import task


def static_cache_key(context, parameters):
    # return a constant
    return "static cache key"


@task(cache_key_fn=static_cache_key)
def my_cached_task(x: int):
    return x + 1
```

### Cache storage

By default, cache records are collocated with task results, and files containing task results include the metadata used for caching. Configuring a cache policy with a `key_storage` argument allows cache records to be stored separately from task results. When cache key storage is configured, persisted task results will only include the return value of your task, and cache records can be deleted or modified without affecting your task results.
You can configure where cache records are stored by using the `.configure` method with a `key_storage` argument on a cache policy. The `key_storage` argument accepts either a path to a local directory or a storage block.

### Cache isolation

Cache isolation controls how concurrent task runs interact with cache records. Prefect supports two isolation levels: `READ_COMMITTED` and `SERIALIZABLE`.

By default, cache records operate with a `READ_COMMITTED` isolation level. This guarantees that reading a cache record will see the latest committed cache value, but allows multiple executions of the same task to occur simultaneously.

For stricter isolation, you can use the `SERIALIZABLE` isolation level. This ensures that only one execution of a task occurs at a time for a given cache record via a locking mechanism.

To configure the isolation level, use the `.configure` method with an `isolation_level` argument on a cache policy. When using `SERIALIZABLE`, you must also provide a `lock_manager` that implements locking logic for your system.

#### Recommended Lock Managers by Execution Context

We recommend using a locking implementation that matches how you are running your work concurrently.

| Execution Context  | Recommended Lock Manager | Notes                                                         |
| ------------------ | ------------------------ | ------------------------------------------------------------- |
| Threads/Coroutines | `MemoryLockManager`      | In-memory locking suitable for single-process execution       |
| Processes          | `FileSystemLockManager`  | File-based locking for multiple processes on the same machine |
| Multiple Machines  | `RedisLockManager`       | Distributed locking via Redis for cross-machine coordination  |

## Multi-task caching

There are some situations in which multiple tasks need to always run together or not at all. This can be achieved in Prefect by configuring these tasks to always write to their caches within a single [*transaction*](/v3/develop/transactions).
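Tying these concepts together, the lookup a task performs at the start of a run can be sketched in plain Python. This is only an illustration of the behavior described under "Cache keys" and "Cache expiration" (the in-memory record format here is an assumption), not Prefect's actual implementation:

```python theme={null}
from datetime import datetime, timedelta, timezone

# cache_key -> (expiration timestamp or None, stored result)
RESULT_STORAGE = {}


def run_with_cache(cache_key, compute, expiration=None):
    """Return (value, final_state) for a task run, consulting the cache first."""
    now = datetime.now(timezone.utc)
    record = RESULT_STORAGE.get(cache_key)
    if record is not None:
        expires_at, value = record
        if expires_at is None or expires_at > now:
            # Unexpired record found: the task does not run; it enters Cached
            return value, "Cached"
    # No usable record: run the task and persist the result under the key
    value = compute()
    expires_at = now + expiration if expiration else None
    RESULT_STORAGE[cache_key] = (expires_at, value)
    return value, "Completed"


first = run_with_cache("my-key", lambda: 42, expiration=timedelta(hours=1))
second = run_with_cache("my-key", lambda: 99, expiration=timedelta(hours=1))
# first runs the task; second hits the cache and ignores the new callable
```

A `SERIALIZABLE` cache policy would additionally hold a lock around this read-compute-write sequence so that only one run can execute it at a time for a given key.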
# Deployments

Source: https://docs.prefect.io/v3/concepts/deployments

Learn how to use deployments to trigger flow runs remotely.

Deployments allow you to run flows on a [schedule](/v3/concepts/schedules) and trigger runs based on [events](/v3/how-to-guides/automations/creating-deployment-triggers/).

Deployments are server-side representations of flows. They store the crucial metadata for remote orchestration, including when, where, and how a workflow should run.

In addition to manually triggering and managing flow runs, deploying a flow exposes an API and UI that allow you to:

* trigger new runs, [cancel active runs](/v3/how-to-guides/workflows/write-and-run#cancel-a-flow-run), pause scheduled runs, [customize parameters](/v3/concepts/flows#specify-flow-parameters), and more
* remotely configure [schedules](/v3/concepts/schedules) and [automation rules](/v3/how-to-guides/automations/creating-deployment-triggers)
* dynamically provision infrastructure with [work pools](/v3/deploy/infrastructure-concepts/work-pools) - optionally with templated guardrails for other users

In Prefect Cloud, deployment configuration is versioned, and a new [deployment version](/v3/how-to-guides/deployments/versioning) is created each time a deployment is updated.

### Work pools

[Work pools](/v3/concepts/work-pools) allow you to switch between different types of infrastructure and to create a template for deployments. Data platform teams find work pools especially useful for managing infrastructure configuration across teams of data professionals.

Common work pool types include [Docker](/v3/how-to-guides/deployment_infra/docker), [Kubernetes](/v3/how-to-guides/deployment_infra/kubernetes), and serverless options such as [AWS ECS](/integrations/prefect-aws/ecs-worker), [Azure ACI](/integrations/prefect-azure/aci_worker), [GCP Vertex AI](/integrations/prefect-gcp/index#run-flows-on-google-cloud-run-or-vertex-ai), or [GCP Google Cloud Run](/integrations/prefect-gcp/gcp-worker-guide).
### Work pool-based deployment requirements

Deployments created through the Python SDK that use a work pool require a `name`. This value becomes the deployment name. A `work_pool_name` is also required.

Your flow code location can be specified in a few ways:

1. Bake it into your Docker image (for work pools that use Docker images). As shown in the example above, Prefect facilitates this as the default method for deployments created with the Python SDK. This method requires that you specify the `image` argument in the `deploy` method.
2. Call `from_source` on a flow and specify one of the following:
   1. the git-based cloud provider location (for example, GitHub)
   2. the cloud provider storage location (for example, AWS S3)
   3. the local path (an option for Process work pools)

See the [Retrieve code from storage docs](/v3/how-to-guides/deployments/store-flow-code) for more information about flow code storage.

## Run a deployment

You can set a deployment to run manually, on a [schedule](/v3/how-to-guides/deployments/create-schedules), or [in response to an event](/v3/how-to-guides/automations/creating-deployment-triggers).

The deployment inherits the infrastructure configuration from the work pool; this configuration can be overridden at deployment creation time or at runtime.

### Work pools that require a worker

To run a deployment with a hybrid work pool type, such as Docker or Kubernetes, you must start a [worker](/v3/concepts/workers/).

A [Prefect worker](/v3/concepts/workers) is a client-side process that checks for scheduled flow runs in the work pool that it matches. When a scheduled run is found, the worker kicks off a flow run on the specified infrastructure and monitors the flow run until completion.
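For example, a worker that polls a Docker-type work pool can be started from the CLI (the pool name `my-docker-pool` is illustrative; this is a long-running process):

```bash theme={null}
prefect worker start --pool my-docker-pool
```

The worker runs until stopped, polling the named work pool for scheduled flow runs.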
### Work pools that don't require a worker

Prefect Cloud offers [push work pools](/v3/how-to-guides/deployment_infra/serverless#automatically-create-a-new-push-work-pool-and-provision-infrastructure) that run flows on cloud provider serverless infrastructure without a worker and that can be set up quickly. Prefect Cloud also provides the option to run workflows on Prefect's infrastructure through a [Prefect Managed work pool](/v3/how-to-guides/deployment_infra/managed).

These work pool types do not require a worker to run flows. However, they do require sharing a bit more information with Prefect, which can be a challenge depending upon the security posture of your organization.

## Static vs. dynamic infrastructure

You can deploy your flows on long-lived static infrastructure or on dynamic infrastructure that is able to scale horizontally. The best choice depends on your use case.

### Static infrastructure

When you have several flows running regularly, [the `serve` method](/v3/how-to-guides/deployment_infra/run-flows-in-local-processes#serve-a-flow) of the `Flow` object or [the `serve` utility](/v3/how-to-guides/deployment_infra/run-flows-in-local-processes#serve-multiple-flows-at-once) is a great option for managing multiple flows simultaneously.

Once you have authored your flow and decided on its deployment settings, run this long-running process in a location of your choosing. The process stays in communication with the Prefect API, monitoring for work and submitting each run within an individual subprocess. Because runs are submitted to subprocesses, any external infrastructure configuration must be set up beforehand and kept associated with this process.

Benefits to this approach include:

* Users are in complete control of their infrastructure, and anywhere the "serve" Python process can run is a suitable deployment environment.
* It is simple to reason about.
* Creating deployments requires a minimal set of decisions.
* Iteration speed is fast.
### Dynamic infrastructure

Consider running flows on dynamically provisioned infrastructure with work pools when you have any of the following:

* Flows that require expensive infrastructure, where keeping a long-running process alive would be wasteful.
* Flows with heterogeneous infrastructure needs across runs.
* Large volumes of deployments.
* An internal organizational structure in which deployment authors or runners are not members of the team that manages the infrastructure.

[Work pools](/v3/concepts/work-pools/) allow Prefect to exercise greater control of the infrastructure on which flows run. Options for [serverless work pools](/v3/how-to-guides/deployment_infra/serverless/) allow you to scale to zero when workflows aren't running. Prefect even provides you with the ability to [provision cloud infrastructure via a single CLI command](/v3/how-to-guides/deployment_infra/serverless/#automatically-create-a-new-push-work-pool-and-provision-infrastructure), if you use a Prefect Cloud push work pool option.

With work pools:

* You can configure and monitor infrastructure configuration within the Prefect UI.
* Infrastructure is ephemeral and dynamically provisioned.
* Prefect is more infrastructure-aware and collects more event data from your infrastructure by default.
* Highly decoupled setups are possible.

**You don't have to commit to one approach**

You can mix and match approaches based on the needs of each flow. You can also change the deployment approach for a particular flow as its needs evolve. For example, you might use workers for your expensive machine learning pipelines, but use the serve mechanics for smaller, more frequent file-processing pipelines.
## Deployment schema

```python theme={null}
class Deployment:
    """
    Structure of the schema defining a deployment
    """

    # required defining data
    name: str
    flow_id: UUID
    entrypoint: str
    path: str | None = None

    # workflow scheduling and parametrization
    parameters: dict[str, Any] | None = None
    parameter_openapi_schema: dict[str, Any] | None = None
    schedules: list[Schedule] | None = None
    paused: bool = False
    trigger: Trigger | None = None

    # concurrency limiting
    concurrency_limit: int | None = None
    concurrency_options: ConcurrencyOptions(
        collision_strategy=Literal['ENQUEUE', 'CANCEL_NEW'],
        grace_period_seconds=int  # 60-86400, default 300
    ) | None = None

    # metadata for bookkeeping
    version: str | None = None
    version_type: VersionType | None = None
    description: str | None = None
    tags: list | None = None

    # worker-specific fields
    work_pool_name: str | None = None
    work_queue_name: str | None = None
    job_variables: dict[str, Any] | None = None
    pull_steps: dict[str, Any] | None = None
```

All methods for creating Prefect deployments are interfaces for populating this schema.

### Required defining data

Deployments require a `name` and a reference to an underlying `Flow`.

The deployment name is not required to be unique across all deployments, but it is required to be unique for a given flow ID. This means you will often see references to the deployment's unique identifying name `{FLOW_NAME}/{DEPLOYMENT_NAME}`.

You can trigger deployment runs in multiple ways. For a complete guide, see [Run deployments](/v3/how-to-guides/deployments/run-deployments). Quick examples:

From the CLI:

```bash theme={null}
prefect deployment run my-first-flow/my-first-deployment
```

From Python:

```python theme={null}
from prefect.deployments import run_deployment

run_deployment(name="my-first-flow/my-first-deployment")
```

The other two fields are:

* **`path`**: think of the path as the runtime working directory for the flow.
  For example, if a deployment references a workflow defined within a Docker image, the `path` is the absolute path to the parent directory where that workflow will run anytime the deployment is triggered. This interpretation is more subtle in the case of flows defined in remote filesystems.
* **`entrypoint`**: the entrypoint of a deployment is a reference to a function decorated as a flow that exists on some filesystem. Entrypoints support two formats:
  * **File path format**: a path to the file and function name separated by a colon, relative to the `path` (for example, `path/to/file.py:function_name`).
  * **Module path format**: a fully-qualified Python module path to the flow function (for example, `my_module.my_flow.my_func`). When using module paths, the module must be importable in the execution environment.

The entrypoint must reference the same flow as the flow ID.

Prefect requires that deployments reference flows defined *within Python files*. Flows defined within interactive REPLs or notebooks cannot currently be deployed as such. They are still valid flows that will be monitored by the API and observable in the UI whenever they are run, but Prefect cannot trigger them.

**Deployments do not contain code definitions**

Deployment metadata references code that exists in potentially diverse locations within your environment. This separation means that your flow code stays within your storage and execution infrastructure. This is key to the Prefect hybrid model: there's a boundary between your proprietary assets, such as your flow code, and the Prefect backend (including [Prefect Cloud](/v3/how-to-guides/cloud/connect-to-cloud)).

### Workflow scheduling and parametrization

One of the primary motivations for creating deployments of flows is to remotely *schedule* and *trigger* them. Just as you can call flows as functions with different input values, deployments can be triggered or scheduled with different values through parameters.
These are the fields that capture the required metadata for those actions:

* **`schedules`**: a list of [schedule objects](/v3/concepts/schedules). Most of the convenient interfaces for creating deployments allow users to avoid creating this object themselves. For example, when [updating a deployment schedule in the UI](/v3/concepts/schedules), basic information such as a cron string or interval is all that's required.
* **`parameter_openapi_schema`**: an [OpenAPI compatible schema](https://swagger.io/specification/) that defines the types and defaults for the flow's parameters. This is used by the UI and the backend to expose options for creating manual runs as well as for type validation.
* **`parameters`**: default values of flow parameters that this deployment will pass on each run. These can be overwritten through a trigger or when manually creating a custom run.
* **`enforce_parameter_schema`**: a boolean flag that determines whether the API should validate the parameters passed to a flow run against the schema defined by `parameter_openapi_schema`.

**Scheduling is asynchronous and decoupled**

Pausing a schedule, updating your deployment, and other actions reset your auto-scheduled runs.

### Concurrency limiting

Prefect supports managing concurrency at the deployment level, to limit how many runs of a deployment can be active at once. To enable this behavior, deployments have the following fields:

* **`concurrency_limit`**: an integer that sets the maximum number of concurrent flow runs for the deployment.
* **`concurrency_options`**: an optional `ConcurrencyOptions` object to configure concurrency behavior:
  * **`collision_strategy`**: configures the behavior for new runs once the concurrency limit is reached. Falls back to `ENQUEUE` if unset.
    * `ENQUEUE`: new runs transition to `AwaitingConcurrencySlot` and execute as slots become available.
    * `CANCEL_NEW`: new runs are canceled until a slot becomes available.
  * **`grace_period_seconds`**: the time in seconds to allow infrastructure to start before the concurrency slot is released. This is useful for deployments with slow-starting infrastructure. Must be between 60 and 86400 seconds. If not set, falls back to the server setting (default 300 seconds / 5 minutes).

```sh prefect deploy theme={null}
prefect deploy ... --concurrency-limit 3 --collision-strategy ENQUEUE
```

```python flow.deploy() theme={null}
from prefect.client.schemas.objects import (
    ConcurrencyLimitConfig,
    ConcurrencyLimitStrategy
)

my_flow.deploy(..., concurrency_limit=3)

my_flow.deploy(
    ...,
    concurrency_limit=ConcurrencyLimitConfig(
        limit=3,
        collision_strategy=ConcurrencyLimitStrategy.CANCEL_NEW,
        grace_period_seconds=120,  # 2 minutes
    ),
)
```

```python flow.serve() theme={null}
from prefect.client.schemas.objects import (
    ConcurrencyLimitConfig,
    ConcurrencyLimitStrategy
)

my_flow.serve(..., global_limit=3)

my_flow.serve(
    ...,
    global_limit=ConcurrencyLimitConfig(
        limit=3,
        collision_strategy=ConcurrencyLimitStrategy.CANCEL_NEW,
        grace_period_seconds=120,  # 2 minutes
    ),
)
```

### Metadata for bookkeeping

Important information on the `version`, `description`, and `tags` fields:

* **`version`**: versions are always set by the client and can be any arbitrary string. We recommend tightly coupling this field on your deployments to your software development lifecycle and choosing human-readable version strings. If left unset, the version field is automatically populated in one of two ways:
  * If deploying from a directory inside a Git repository or from a CI environment on a supported version control provider, `version` will be the first eight characters of your commit hash.
  * In all other circumstances, `version` will be your flow's version, which, if not assigned in the flow decorator (`@flow(version="my-version")`), will be a hash of the file in which the flow is defined.
* **`version_type`**: when a deployment is created or updated, Prefect attempts to infer version information from your environment. Providing a `version_type` instructs Prefect to only attempt version information collection from an environment of that type. The following version types are available: `vcs:github`, `vcs:gitlab`, `vcs:bitbucket`, `vcs:azuredevops`, `vcs:git`, or `prefect:simple`. `vcs:git` offers similar versioning detail to the officially supported version control platforms, but does not support direct linking to commits from the Prefect Cloud UI. It is meant as a fallback option in case your version control platform is not supported. `prefect:simple` is for any deployment version created where no Git context is available. If left unset, Prefect will automatically select the appropriate `version_type` based on the detected environment.
* **`description`**: provide reference material such as intended use and parameter documentation. Markdown is accepted. The docstring of your flow function is the default value.
* **`tags`**: group related work together across a diverse set of objects. Tags set on a deployment are inherited by that deployment's flow runs. You can filter, customize views, and search by tag.

**Everything has a version**

Deployments have a version attached, and flows and tasks also have versions set through their respective decorators. These versions are sent to the API anytime the flow or task runs, allowing you to audit changes.

### Worker-specific fields

[Work pools](/v3/concepts/work-pools/) and [workers](/v3/concepts/workers/) are an advanced deployment pattern that allows you to dynamically provision infrastructure for each flow run. The work pool job template interface allows users to create and govern opinionated interfaces to their workflow infrastructure.

To do this, a deployment using workers needs the following fields:

* **`work_pool_name`**: the name of the work pool this deployment is associated with.
  Work pool types mirror infrastructure types, which means this field impacts the options available for the other fields.
* **`work_queue_name`**: if you are using work queues to either manage priority or concurrency, you can associate a deployment with a specific queue within a work pool using this field.
* **`job_variables`**: this field allows deployment authors to customize whatever infrastructure options have been exposed on this work pool. This field is often used for Docker image names, Kubernetes annotations and limits, and environment variables.
* **`pull_steps`**: a JSON description of steps that retrieve flow code or configuration, and prepare the runtime environment for workflow execution.

  Pull steps allow users to highly decouple their workflow architecture. For example, a common use of pull steps is to dynamically pull code from remote filesystems such as GitHub with each run of a deployment.

# Define event triggers

Source: https://docs.prefect.io/v3/concepts/event-triggers

Define a custom trigger to react to many kinds of events and metrics.

When you need a trigger beyond what the templates in the UI trigger builder provide, you can define a custom trigger in JSON. With custom triggers, you have access to the full capabilities of Prefect's automation system, allowing you to react to many kinds of events and metrics in your workspace.

Each automation has a single trigger that, when fired, causes all of its associated actions to run. That single trigger may be a reactive or proactive event trigger, a trigger monitoring the value of a metric, or a composite trigger that combines several underlying triggers.

### Event triggers

Event triggers are the most common type of trigger. They are intended to react to the presence or absence of an event. Event triggers are indicated with `{"type": "event"}`.
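For example, a minimal reactive event trigger that fires whenever a flow run fails might look like the following sketch (the event and resource label values are illustrative):

```json theme={null}
{
  "type": "event",
  "posture": "Reactive",
  "expect": ["prefect.flow-run.Failed"],
  "match": {
    "prefect.resource.id": "prefect.flow-run.*"
  }
}
```

Here `match` uses a trailing wildcard to target all flow-run resources, and `expect` names the event that causes the trigger to fire.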
*Viewing a custom trigger for automations in the UI*

This is the schema that defines an event trigger:

| Name               | Type                      | Supports wildcards and negative matching | Description |
| ------------------ | ------------------------- | ---------------------------------------- | ----------- |
| **match**          | object                    | ✅                                        | Labels for resources which this Automation will match. Supports trailing wildcards (`*`) and negative matching (`!`). |
| **match\_related** | object OR array of object | ✅                                        | Labels for related resources which this Automation will match. Supports trailing wildcards (`*`) and negative matching (`!`). |
| **posture**        | string enum               | N/A                                      | The posture of this Automation, either Reactive or Proactive. Reactive automations respond to the presence of the expected events, while Proactive automations respond to the absence of those expected events. |
| **after**          | array of strings          | ✅                                        | Event(s), one of which must have first been seen to start this automation. |
| **expect**         | array of strings          | ✅                                        | The event(s) this automation expects to see. If empty, this automation will evaluate any matched event. |
| **for\_each**      | array of strings          | ❌                                        | Evaluate the Automation separately for each distinct value of these labels on the resource. By default, labels refer to the primary resource of the triggering event. You may also refer to labels from related resources by specifying `related::