Workers are lightweight polling services that retrieve scheduled runs from a work pool and execute them.
Each worker has a type corresponding to the execution environment it submits flow runs to.
Workers can only poll work pools that match their type.
As a result, when deployments are assigned to a work pool, you know in which execution environment scheduled flow runs for that deployment will run.
The following diagram summarizes the architecture of a worker-based work pool deployment:
The worker is in charge of provisioning the flow run infrastructure.
Worker types
Below is a list of available worker types. Most worker types require installation of an additional package.
| Worker Type | Description | Required Package | 
|---|---|---|
| process | Executes flow runs in subprocesses |  | 
| kubernetes | Executes flow runs as Kubernetes jobs | prefect-kubernetes | 
| docker | Executes flow runs within Docker containers | prefect-docker | 
| ecs | Executes flow runs as ECS tasks | prefect-aws | 
| cloud-run-v2 | Executes flow runs as Google Cloud Run jobs | prefect-gcp | 
| vertex-ai | Executes flow runs as Google Cloud Vertex AI jobs | prefect-gcp | 
| azure-container-instance | Executes flow runs in ACI containers | prefect-azure | 
| coiled | Executes flow runs in your cloud with Coiled | prefect-coiled | 
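As a minimal sketch (the work pool name "my-k8s-pool" is a placeholder), starting a Kubernetes worker means installing its integration package alongside Prefect and then starting the worker against a matching work pool:

```bash
# Install the integration package required by the kubernetes worker type
pip install prefect-kubernetes

# Start a worker that submits flow runs to Kubernetes as jobs
prefect worker start --pool "my-k8s-pool" --type kubernetes
```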
Worker options
Workers poll for work from one or more queues within a work pool. If the worker references a work queue that doesn’t exist, it is created automatically.
The worker CLI infers the worker type from the work pool.
Alternatively, you can specify the worker type explicitly.
If you supply the worker type to the worker CLI, a work pool is created automatically if it doesn’t exist (using default job settings).
Configuration parameters you can specify when starting a worker include:
| Option | Description | 
|---|---|
| --name, -n | The name to give the started worker. If not provided, a unique name will be generated. | 
| --pool, -p | The work pool the started worker should poll. | 
| --work-queue, -q | One or more work queue names for the worker to pull from. If not provided, the worker pulls from all work queues in the work pool. | 
| --type, -t | The type of worker to start. If not provided, the worker type is inferred from the work pool. | 
| --prefetch-seconds | The amount of time before a flow run’s scheduled start time to begin submission. Defaults to the value of PREFECT_WORKER_PREFETCH_SECONDS. | 
| --run-once | Only run worker polling once. By default, the worker runs forever. | 
| --limit, -l | The maximum number of flow runs to execute concurrently. | 
| --with-healthcheck | Start a healthcheck server for the worker. | 
| --install-policy | Install policy to use for workers from Prefect integration packages. | 
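As an illustrative sketch (the pool, queue, and worker names are placeholders), several of these options can be combined in a single command:

```bash
# Start a named worker that polls two queues in "my-pool", runs at most
# ten flow runs concurrently, and exposes a healthcheck server
prefect worker start \
  --pool "my-pool" \
  --work-queue "high-priority" \
  --work-queue "default" \
  --name "worker-1" \
  --limit 10 \
  --with-healthcheck
```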
The worker type determines where flow runs execute: if you start a worker with type kubernetes, for instance, the worker deploys flow runs to a Kubernetes cluster.
Prefect must be installed in any environment (for example, virtual environment, Docker container) where you intend to run the worker or
execute a flow run.
PREFECT_API_URL and PREFECT_API_KEY settings for workers
PREFECT_API_URL must be set for the environment where your worker is running. When using Prefect Cloud, you must also have a user or service account
with the Worker role, which you can configure by setting the PREFECT_API_KEY.
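For example, assuming placeholder values for the account ID, workspace ID, and API key, you can configure both settings in the worker’s environment with prefect config set:

```bash
# Point this environment at the Prefect Cloud API (placeholder values)
prefect config set PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<ACCOUNT-ID>/workspaces/<WORKSPACE-ID>"

# Authenticate with a key for a user or service account that has the Worker role
prefect config set PREFECT_API_KEY="<API-KEY>"
```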
Worker status
Workers have two statuses: ONLINE and OFFLINE. A worker is online if it sends regular heartbeat messages to the Prefect API.
If a worker misses three heartbeats, it is considered offline. By default, a worker is marked offline at most 90 seconds
after it stops sending heartbeats, but you can configure the threshold with the PREFECT_WORKER_HEARTBEAT_SECONDS setting.
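For example, since the 90-second default corresponds to three missed heartbeats, sending heartbeats more often lets the API mark an unreachable worker offline sooner (the value below is only illustrative):

```bash
# Send a heartbeat every 15 seconds instead of the default interval
prefect config set PREFECT_WORKER_HEARTBEAT_SECONDS=15
```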
Worker logs
Workers send logs to the Prefect Cloud API if you’re connected to Prefect Cloud:
- All worker logs are automatically sent to the Prefect Cloud API
- Logs are accessible through both the Prefect Cloud UI and API
- Each flow run will include a link to its associated worker’s logs
Worker details
The Worker Details page shows you the following key information:
- Worker status
- Installed Prefect version
- Installed Prefect integrations (e.g., prefect-aws, prefect-gcp)
- Live worker logs (if worker logging is enabled)
Access a worker’s details by clicking on the worker’s name in the Work Pool list.
Start a worker
Use the prefect worker start CLI command to start a worker. You must pass at least the work pool name.
If the work pool does not exist, it will be created if the --type flag is used.
prefect worker start -p [work pool name]
For example:
prefect worker start -p "my-pool"
Discovered worker type 'process' for work pool 'my-pool'.
Worker 'ProcessWorker 65716280-96f8-420b-9300-7e94417f2673' started!
In this case, the worker type was discovered from the work pool. You can also specify the worker type explicitly with the --type flag:
prefect worker start -p "my-pool" --type "process"
Worker 'ProcessWorker d24f3768-62a9-4141-9480-a056b9539a25' started!
06:57:53.289 | INFO    | prefect.worker.process.processworker d24f3768-62a9-4141-9480-a056b9539a25 - Worker pool 'my-pool' created.
To limit the number of flow runs a worker executes concurrently, use the --limit flag.
For example, to limit a worker to five concurrent flow runs:
prefect worker start --pool "my-pool" --limit 5
Workers begin submitting a flow run a configurable amount of time before its scheduled start, controlled by the --prefetch-seconds option or the PREFECT_WORKER_PREFETCH_SECONDS setting.
If this value is more than the amount of time it takes for the infrastructure to start, the flow run will wait until its
scheduled start time.
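For example (the pool name and lead time are placeholders), you can give slow-starting infrastructure extra lead time:

```bash
# Begin submitting runs 60 seconds before their scheduled start time
prefect worker start --pool "my-pool" --prefetch-seconds 60
```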
Polling for work
Workers poll for work every 15 seconds by default. You can configure this interval in your profile settings with the PREFECT_WORKER_QUERY_SECONDS setting.
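For example (the value is only illustrative), you can reduce API traffic by polling less frequently:

```bash
# Poll the work pool every 30 seconds instead of the default 15
prefect config set PREFECT_WORKER_QUERY_SECONDS=30
```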
Install policy
The Prefect CLI can install the required package for Prefect-maintained worker types automatically. Configure this behavior
with the --install-policy option. The following are valid install policies:
| Install Policy | Description | 
|---|---|
| always | Always install the required package. Updates the required package to the most recent version if already installed. | 
| if-not-present | Install the required package if it is not already installed. | 
| never | Never install the required package. | 
| prompt | Prompt the user to choose whether to install the required package. This is the default install policy. | 
If prefect worker start is run non-interactively, the prompt install policy behaves the same as never.
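For example (the pool name is a placeholder), the if-not-present policy installs a missing integration package such as prefect-kubernetes without prompting, which is useful in scripts and containers:

```bash
# Install the required integration package only if it is not already installed
prefect worker start --pool "my-k8s-pool" --type kubernetes --install-policy if-not-present
```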
Further reading