
Welcome to Prefect 2.0


Looking for Prefect 1.0 Core and Server?

Prefect 2.0 is now available for general use. See our Migration Guide if you're ready to move your Prefect 1.0 flows to Prefect 2.0.

If you're looking for the Prefect 1.0 Core and Server documentation, it remains available separately.

Prefect coordinates dataflows

Prefect is air traffic control for the modern data stack. Monitor, coordinate, and orchestrate dataflows between and across your applications. Build pipelines, deploy them anywhere, and configure them remotely. You might just love your workflows again.

Why Prefect?

If you move data, you probably need functionality such as:

- Scheduling
- Retries
- Logging
- Notifications
- Observability

Implementing all of these features for your dataflows is a huge pain that takes a lot of time, time that could be better used for functional code.

That's why Prefect 2.0 offers all this functionality and more!

Getting started with Prefect

Prefect 2.0 was designed for incremental adoption into your workflows. The documentation is organized to support your exploration. Here are a few sections you might find helpful:

Getting started

Begin by installing Prefect 2.0 on your machine, then follow one of our friendly tutorials to learn by example. See the Getting Started overview for more.

Even if you have used Prefect 1.0 ("Prefect Core") and are familiar with Prefect workflows, we still recommend reading through these first steps. Prefect 2.0 offers significant new functionality.


Concepts

Learn more about Prefect 2.0's features and design by reading our in-depth concept docs. The concept docs introduce the building blocks of Prefect, build up to orchestration and deployment, and finally cover some of the advanced use cases that Prefect makes possible.

Prefect UI & Prefect Cloud

See how Prefect's UI and cloud hosted functionality can make coordinating dataflows a joy.


Collections

Prefect integrates with the other tools of the modern data stack. In our collections docs, learn about our pre-built integrations and see how to add your own.

Frequently asked questions

Prefect 2.0 represents a fundamentally new way of building and orchestrating dataflows. You can find responses to common questions by reading our FAQ and checking out the Prefect Discourse.

API reference

Prefect 2.0 provides a number of programmatic workflow interfaces, each of which is documented in the API Reference. This section is where you can learn how a specific function works, or see the expected payload for a REST endpoint.


Community

Learn how you can get involved.

Prefect 2.0 is made possible by the fastest-growing community of data practitioners. The Prefect Slack community is a fantastic place to learn more, ask questions, or get help with workflow design.

The Prefect Discourse is an additional community-driven knowledge base to find answers to your Prefect-related questions.

Prefect highlights

Graceful failures

Inevitably dataflows will fail. Prefect helps your code automatically retry on failure.
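For contrast, here is a sketch of the kind of hand-rolled retry boilerplate that a `retries` setting takes off your hands. All names here are illustrative, not part of Prefect's API:

```python
import time

def with_retries(fn, retries=3, delay_seconds=1.0):
    """Run fn, retrying on failure -- the boilerplate a retries option replaces."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(delay_seconds)

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds -- simulates a transient outage
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky, delay_seconds=0.01))  # prints: ok
```

With Prefect, this entire wrapper collapses into a decorator argument such as `@task(retries=3)`.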


Notifications

You can easily set up email or Slack notifications so that the right people are notified when something doesn't go as planned.

Designed for performance

Prefect 2.0 has been designed from the ground up to handle the dynamic, scalable workloads that today's dataflows demand.

Integrates with other modern data tools

Prefect has integrations for all the major cloud providers and modern data tools such as Snowflake, Databricks, dbt, and Airbyte.

Simple concurrency

Prefect provides accessible concurrency. You can configure concurrent processing locally or send tasks to remote clusters with Dask and Ray integrations.

Async first

Prefect 2.0 is built on asynchronous Python and allows you to take advantage of async/await concurrency. Prefect allows you to write workflows mixing synchronous and asynchronous tasks without worrying about the complexity of managing event loops.
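Under the hood this builds on standard Python asyncio. As a Prefect-free illustration of the async/await concurrency model it leverages (names and delays here are made up for the demo):

```python
import asyncio

async def fetch(name, delay):
    # Simulate an I/O-bound call (e.g., an HTTP request)
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Run all coroutines concurrently; total time is roughly the slowest one
    return await asyncio.gather(*[fetch(n, 0.1) for n in ("a", "b", "c")])

print(asyncio.run(main()))  # prints: ['a done', 'b done', 'c done']
```

Prefect layers task and flow orchestration on top of this model, so async tasks can sit alongside synchronous ones without manual event-loop management.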

Works well with containers

Prefect is often used with Docker and Kubernetes. Prefect can even package your flow directly into a Docker image.

Security first

Prefect helps you keep your data and code secure. Prefect's patented hybrid execution model means your data can stay in your environment while Prefect Cloud orchestrates your flows. Prefect, the company, is SOC2 compliant and our enterprise product makes it easy for you to restrict access to the right people in your organization.

A user friendly, interactive dashboard for your dataflows

In the Prefect Orion UI you can quickly set up notifications, visualize run history, and schedule your dataflows.

Faster and easier than building from scratch

It's estimated that up to 80% of a data engineer's time is spent writing code to guard against edge cases and provide information when a dataflow inevitably fails. Building the functionality that Prefect 2.0 delivers by hand would require a significant investment of engineering time.


DAG-free workflows

Some workflow tools require you to define DAGs (directed acyclic graphs). DAGs impose a rigid framework that is overly constraining for modern, dynamic dataflows. Prefect 2.0 allows you to create dynamic dataflows in native Python - no DAGs required.

Incremental adoption

Prefect 2.0 is designed for incremental adoption. You can decorate as many of your dataflow functions as you like and get all the benefits of Prefect as you go!

Prefect in action

To dive right in and see what Prefect 2.0 can do, simply sprinkle in a few decorators and add a little configuration, like the example below.

Basic example

This code fetches data about GitHub stars for a few repositories. Add the three highlighted lines of code to your functions to use Prefect, and you're off to the races!

from prefect import flow, task
import httpx

@task(retries=3)
def get_stars(repo):
    url = f"https://api.github.com/repos/{repo}"
    count = httpx.get(url).json()["stargazers_count"]
    print(f"{repo} has {count} stars!")

@flow
def github_stars(repos):
    for repo in repos:
        get_stars(repo)

# call the flow!
github_stars(["PrefectHQ/Prefect", "PrefectHQ/prefect-aws", "PrefectHQ/prefect-dbt"])

Run the code, assuming you saved it as, for example, github_stars.py:

python github_stars.py

And see the logger's output in your terminal:

10:56:06.988 | INFO    | prefect.engine - Created flow run 'grinning-crab' for flow 'github-stars'
10:56:06.988 | INFO    | Flow run 'grinning-crab' - Using task runner 'ConcurrentTaskRunner'
10:56:06.996 | WARNING | Flow run 'grinning-crab' - No default storage is configured on the server. Results from this flow run will be stored in a temporary directory in its runtime environment.
10:56:07.027 | INFO    | Flow run 'grinning-crab' - Created task run 'get_stars-2ca9fbe1-0' for task 'get_stars'
PrefectHQ/Prefect has 9579 stars!
10:56:07.190 | INFO    | Task run 'get_stars-2ca9fbe1-0' - Finished in state Completed()
10:56:07.199 | INFO    | Flow run 'grinning-crab' - Created task run 'get_stars-2ca9fbe1-1' for task 'get_stars'
PrefectHQ/prefect-aws has 7 stars!
10:56:07.327 | INFO    | Task run 'get_stars-2ca9fbe1-1' - Finished in state Completed()
10:56:07.337 | INFO    | Flow run 'grinning-crab' - Created task run 'get_stars-2ca9fbe1-2' for task 'get_stars'
PrefectHQ/prefect-dbt has 12 stars!
10:56:07.464 | INFO    | Task run 'get_stars-2ca9fbe1-2' - Finished in state Completed()
10:56:07.477 | INFO    | Flow run 'grinning-crab' - Finished in state Completed('All states completed.')

By adding retries=3 to the @task decorator, the get_stars function automatically reruns up to three times on failure!

Observe your flow runs in the Prefect UI

Fire up the Prefect UI locally by entering this command in your terminal:

prefect orion start

Follow the link in your terminal to see the dashboard.

screenshot of prefect orion dashboard with flow runs in a scatter plot

Click on your flow name to see logs and other details.

screenshot of prefect orion dashboard with logs, radar plot, and flow info

The basic example above can be expanded to run its tasks concurrently.

Simple concurrency

By changing the task calls to use the .submit() method, the tasks are submitted to a task runner for execution. This allows multiple tasks to run at once! Prefect 2.0 comes with built-in threaded concurrency, and only this one-line change is needed to begin using it.

from prefect import flow, task
import httpx

@task
def get_stars(repo):
    url = f"https://api.github.com/repos/{repo}"
    count = httpx.get(url).json()["stargazers_count"]
    print(f"{repo} has {count} stars!")

@flow
def github_stars(repos):
    for repo in repos:
        get_stars.submit(repo)

# call the flow!
if __name__ == "__main__":
    github_stars(["PrefectHQ/Prefect", "PrefectHQ/prefect-aws", "PrefectHQ/prefect-dbt"])

Parallelization with Dask

The task runner that tasks are submitted to can be configured to support more advanced execution. By using the DaskTaskRunner, tasks can be submitted to run in parallel on a local or remote Dask.distributed cluster.

Install the prefect-dask collection package with:

pip install prefect-dask

Import the DaskTaskRunner and configure your flow to use it with the default options.

from prefect import flow, task
from prefect_dask import DaskTaskRunner
import httpx

@task
def get_stars(repo):
    url = f"https://api.github.com/repos/{repo}"
    count = httpx.get(url).json()["stargazers_count"]
    print(f"{repo} has {count} stars!")

@flow(task_runner=DaskTaskRunner())
def github_stars(repos):
    for repo in repos:
        get_stars.submit(repo)

# call the flow!
if __name__ == "__main__":
    github_stars(["PrefectHQ/Prefect", "PrefectHQ/prefect-aws", "PrefectHQ/prefect-dbt"])

You should see similar output to the first example, with additional information about your Dask cluster.

Async concurrency

Prefect 2.0 ships with native async support.

from prefect import flow, task
import httpx
import asyncio

@task
async def get_stars(repo):
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.github.com/repos/{repo}")
    count = response.json()["stargazers_count"]
    print(f"{repo} has {count} stars!")

@flow
async def github_stars(repos):
    # You can use asyncio to run the tasks concurrently
    await asyncio.gather(*[get_stars(repo) for repo in repos])
    # Or use Prefect submission for the same outcome:
    # for repo in repos:
    #     await get_stars.submit(repo)

# call the flow!
asyncio.run(github_stars(["PrefectHQ/Prefect", "PrefectHQ/prefect-aws", "PrefectHQ/prefect-dbt"]))

The above examples just scratch the surface of how Prefect can help you coordinate your dataflows.

Next steps

Follow the Getting Started docs and start building!

While you're at it, give Prefect a ⭐️ on GitHub and join the thousands of community members in our Slack community.

Thank you for joining our mission to coordinate the world's dataflows and, of course, happy engineering!