How to deploy a web application powered by background tasks
Learn how to offload heavy tasks from a web application to dedicated infrastructure.
This example demonstrates how to use background tasks in the context of a web application using Prefect for task submission, execution, monitoring, and result storage. We’ll build out an application using FastAPI to offer API endpoints to our clients, and task workers to execute the background tasks these endpoints defer.
Refer to the examples repository for the complete example’s source code.
This pattern is useful when you need to perform operations that take too long for a standard web request-response cycle, such as data processing, sending emails, or interacting with external APIs that might be slow.
Overview
This example will build out:
- `@prefect.task` definitions representing the work you want to run in the background
- A `fastapi` application providing API endpoints to:
  - Receive task parameters via `POST` request and submit the task to Prefect with `.delay()`
  - Allow polling for the task's status via a `GET` request using its `task_run_id`
- A `Dockerfile` to build a multi-stage image for the web app, Prefect server, and task worker(s)
- A `compose.yaml` to manage lifecycles of the web app, Prefect server, and task worker(s)
You can follow along by cloning the examples repository, or instead use `uv` to bootstrap your own new project (for example, `uv init --lib foo` creates a library project with the `src/foo` layout used here).
This example application is structured as a library with a `src/foo` directory for portability and organization.
This example does not require:
- Prefect Cloud
- creating a Prefect Deployment
- creating a work pool
Useful things to remember
- You can call any Python code from your task definitions (including other flows and tasks!)
- Prefect Results allow you to save/serialize the `return` value of your task definitions to your result storage (e.g. a local directory, S3, GCS, etc.), enabling caching and idempotency, as in the sketch below.
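For example, here is a minimal, self-contained sketch (not taken from the example repository, assuming Prefect 3.x) of a task whose `return` value is persisted and then reused via caching:

```python
# Minimal sketch (not part of the example repo) of result persistence and
# caching with Prefect 3.x.
from prefect import task
from prefect.cache_policies import INPUTS


@task(persist_result=True, cache_policy=INPUTS)
def summarize(text: str) -> str:
    # Any Python code can run here, including calls to other tasks and flows.
    return text[:100]


if __name__ == "__main__":
    summarize("hello world")  # runs the task and persists its return value
    summarize("hello world")  # same inputs, so the cached result is reused
```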
Defining the background task
The core of the background processing is a Python function decorated with `@prefect.task`. This marks the function as a unit of work that Prefect can manage (e.g. observe, cache, retry, etc.).
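A condensed sketch of `src/foo/task.py` is shown below. The real implementation in the examples repository uses marvin to produce structured output, so treat the function body and signature here as illustrative:

```python
# Condensed sketch of src/foo/task.py (illustrative; see the examples repo for
# the real implementation, which uses marvin to produce structured output).
from typing import Any

from prefect import task
from prefect.cache_policies import INPUTS, TASK_SOURCE
from prefect.task_worker import serve


@task(cache_policy=INPUTS + TASK_SOURCE)
def create_structured_output(data: str) -> Any:
    # Do the heavy lifting here, e.g. call an LLM to coerce `data` into a
    # structured shape, then return the result so Prefect can persist it.
    ...


if __name__ == "__main__":
    # Start a task worker that executes runs submitted with .delay()
    serve(create_structured_output)
```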
Key details:
- `@task`: decorator to define the task we want to run in the background.
- `cache_policy`: caching based on `INPUTS` and `TASK_SOURCE`.
- `serve(create_structured_output)`: this function starts a task worker subscribed to newly `.delay()`ed task runs.
Building the FastAPI application
The FastAPI application provides API endpoints to trigger the background task and check its status.
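A condensed sketch of the application is below. The endpoint paths and request model are illustrative (see the examples repository for the real implementation); the imports assume the task module and the `get_task_result` helper described in this guide:

```python
# Condensed sketch of the FastAPI app (endpoint paths and request model are
# illustrative; see the examples repo for the real implementation).
from uuid import UUID

from fastapi import FastAPI
from pydantic import BaseModel

from foo._internal._prefect import get_task_result  # helper shown below
from foo.task import create_structured_output

app = FastAPI()


class TaskRequest(BaseModel):
    data: str


@app.post("/tasks")
async def submit_task(request: TaskRequest) -> dict[str, str]:
    # .delay() submits the run to Prefect and returns immediately with a
    # future identifying the task run; a task worker picks it up asynchronously.
    future = create_structured_output.delay(request.data)
    return {"task_run_id": str(future.task_run_id)}


@app.get("/tasks/{task_run_id}/status")
async def task_status(task_run_id: UUID) -> dict:
    # Ask the Prefect API for the run's state (and result, if finished).
    return await get_task_result(task_run_id)
```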
Checking Task Status with the Prefect Client
The `get_task_result` helper function (in `src/foo/_internal/_prefect.py`) uses the Prefect Python client to interact with the Prefect API.
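A sketch of what this helper can look like with Prefect 3's async client (the repository's version may structure the response and error handling differently):

```python
# Sketch of src/foo/_internal/_prefect.py using the Prefect async client.
# The response shape is illustrative.
import inspect
from typing import Any
from uuid import UUID

from prefect.client.orchestration import get_client


async def get_task_result(task_run_id: UUID) -> dict[str, Any]:
    """Read a task run from the Prefect API and report its state and result."""
    async with get_client() as client:
        task_run = await client.read_task_run(task_run_id)

    state = task_run.state
    response: dict[str, Any] = {"state": state.type.value if state else "UNKNOWN"}

    if state and state.is_completed():
        # Fetch the persisted result; depending on Prefect version and calling
        # context, state.result() may or may not need to be awaited.
        value = state.result(raise_on_failure=False)
        response["result"] = await value if inspect.isawaitable(value) else value
    elif state and state.is_failed():
        response["message"] = state.message

    return response
```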
This function fetches the `TaskRun` object from the API and checks its `state` to determine whether it's `Completed`, `Failed`, or still `Pending`/`Running`. If completed, it attempts to retrieve the result using `task_run.state.result()`. If failed, it tries to get the error message.
Building the Docker Image
A multi-stage `Dockerfile` is used to create optimized images for each service (Prefect server, task worker, and web API). This approach helps keep image sizes small and separates build dependencies from runtime dependencies.
Dockerfile Key Details
- Base Stage (`base`): Sets up Python and `uv`, installs all dependencies from `pyproject.toml` into a base layer to make use of Docker caching, and copies in the source code.
- Server Stage (`server`): Builds upon the `base` stage and sets the default command (`CMD`) to start the Prefect server.
- Task Worker Stage (`task`): Builds upon the `base` stage and sets the `CMD` to run the `src/foo/task.py` script, which is expected to contain the `serve()` call for the task(s).
- API Stage (`api`): Builds upon the `base` stage and sets the `CMD` to start the FastAPI application using `uvicorn`.
The `compose.yaml` file then uses the `target` build argument to specify which of these final stages (`server`, `task`, `api`) to use for each service container.
Declaring the application services
We use `compose.yaml` to define and run the multi-container application, managing the lifecycles of the FastAPI web server, the Prefect API server and its database, and the task worker(s).
In a production use case, you'd likely want to:
- write a `Dockerfile` for each service
- add a `postgres` service and configure it as the Prefect database
- remove the hot-reloading configuration in the `develop` section
Key Service Configurations
- `prefect-server`: Runs the Prefect API server and UI.
  - `build`: Uses a multi-stage `Dockerfile` (not shown here, but present in the example repo) targeting the `server` stage.
  - `ports`: Exposes the Prefect API/UI on port `4200`.
  - `volumes`: Uses a named volume `prefect-data` to persist the Prefect SQLite database (`/root/.prefect/prefect.db`) across container restarts.
  - `PREFECT_SERVER_API_HOST=0.0.0.0`: Makes the API server listen on all interfaces within the Docker network, allowing the `task` and `api` services to connect.
- `task`: Runs the Prefect task worker process (executing `python src/foo/task.py`, which calls `serve`).
  - `build`: Uses the `task` stage from the `Dockerfile`.
  - `depends_on`: Ensures the `prefect-server` service is started before this service attempts to connect.
  - `PREFECT_API_URL`: Crucial setting that tells the worker where to find the Prefect API to poll for submitted task runs.
  - `PREFECT_LOCAL_STORAGE_PATH=/task-storage`: Configures the worker to store task run results in the `/task-storage` directory inside the container. This path is mounted to the host using the `task-storage` named volume via `volumes: - ./task-storage:/task-storage` (or just `task-storage:` if using a named volume without a host path binding).
  - `PREFECT_RESULTS_PERSIST_BY_DEFAULT=true`: Tells Prefect tasks to automatically save their results using the configured storage (defined by `PREFECT_LOCAL_STORAGE_PATH` in this case).
  - `PREFECT_LOGGING_LOG_PRINTS=true`: Configures the Prefect logger to capture output from `print()` statements within tasks.
  - `OPENAI_API_KEY=${OPENAI_API_KEY}`: Passes secrets needed by the task code from the host environment (via a `.env` file loaded by Docker Compose) into the container's environment.
- `api`: Runs the FastAPI web application.
  - `build`: Uses the `api` stage from the `Dockerfile`.
  - `depends_on`: Waits for the `prefect-server` (required for submitting tasks and checking status) and optionally the `task` worker.
  - `PREFECT_API_URL`: Tells the FastAPI application where to send `.delay()` calls and status check requests.
  - `PREFECT_LOCAL_STORAGE_PATH`: May be needed if the API itself needs to read result files directly (though fetching results via `task_run.state.result()` is typically preferred).
- `volumes`: Defines named volumes (`prefect-data`, `task-storage`) to persist data generated by the containers.
Running this example
Assuming you have obtained the code (either by cloning the repository or using `uv init` as described previously) and are in the project directory:
- Prerequisites: Ensure Docker Desktop (or equivalent) with `docker compose` support is running.
- Build and Run Services: This example's task uses marvin, which (by default) requires an OpenAI API key. Provide it as an environment variable when starting the services, for example `OPENAI_API_KEY=<your-key> docker compose up --build --watch`. This command will:
  - `--build`: Build the container images if they don't exist or if the Dockerfile/context has changed.
  - `--watch`: Watch for changes in the project source code and automatically sync/rebuild services (useful for development).
  - Add `--detach` or `-d` to run the containers in the background.
- Access Services:
  - If you cloned the existing example, check out the basic htmx UI at http://localhost:8000
  - FastAPI docs: http://localhost:8000/docs
  - Prefect UI (for observing task runs): http://localhost:4200
  - You can also exercise the API from Python, as sketched below.
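If you prefer to drive the API from Python instead of the browser, a throwaway script like this works against a locally running stack. It assumes the endpoint paths and state strings from the sketches above, so adjust them to match your application:

```python
# Throwaway client script; assumes the endpoint paths from the FastAPI sketch
# above and that the compose stack is running locally.
import json
import time
import urllib.request

BASE = "http://localhost:8000"

# Submit a task run
request = urllib.request.Request(
    f"{BASE}/tasks",
    data=json.dumps({"data": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
task_run_id = json.load(urllib.request.urlopen(request))["task_run_id"]

# Poll the status endpoint until the task worker finishes the run
while True:
    status = json.load(urllib.request.urlopen(f"{BASE}/tasks/{task_run_id}/status"))
    if status["state"] in ("COMPLETED", "FAILED"):
        print(status)
        break
    time.sleep(1)
```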
Cleaning up
When you're finished, stop and remove the containers with `docker compose down` (add `--volumes` to also remove the named volumes).
Next Steps
This example provides a repeatable pattern for integrating Prefect-managed background tasks with any Python web application. You can:
- Explore the background tasks examples repository for more examples.
- Adapt `src/**/*.py` to define and submit your specific web app and background tasks.
- Configure Prefect settings (environment variables in `compose.yaml`) further, for example using different result storage or logging levels.
- Deploy these services to cloud infrastructure using managed container services.