Dask¶
Dask performs two different tasks:
it optimizes dynamic task scheduling, similar to Airflow, Luigi or Celery;
it provides parallel data collections such as arrays, dataframes and lists on top of dynamic task scheduling.
Scales from laptops to clusters¶
Dask can be installed on a laptop with uv and extends the size of datasets it can conveniently handle from fits-in-memory to fits-on-disk. Dask can also scale out to clusters of hundreds of machines. It is resilient, elastic, data-local and has low latency. For more information, see the distributed scheduler documentation. This simple transition between a single machine and a cluster allows users to both start easily and grow as needed.
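The chunked execution model behind this scaling can be sketched in a few lines, assuming `dask[array]` is installed: a large array is described lazily, and each chunk is reduced independently, so peak memory stays near one chunk rather than the whole array.

```python
import dask.array as da

# A 100-million-element array split into 1,000 x 1,000 chunks;
# no memory is allocated until .compute() is called.
x = da.ones((10_000, 10_000), chunks=(1_000, 1_000))

# The mean is computed chunk by chunk and then combined.
result = x.mean().compute()
print(result)  # 1.0
```

The same code runs unchanged on a laptop or, with a distributed client, on a cluster.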
Install Dask¶
You can install everything that is required for most common applications of Dask (arrays, dataframes, …). This installs both Dask and dependencies such as NumPy, Pandas, etc. that are required for various workloads:
$ uv add "dask[complete]"
Alternatively, individual subsets can be installed with:
$ uv add "dask[array]"
$ uv add "dask[dataframe]"
$ uv add "dask[diagnostics]"
$ uv add "dask[distributed]"
Testing the installation¶
[1]:
import dask

dask.__version__
Familiar operation¶
Dask DataFrame¶
… imitates pandas.
[2]:
import pandas as pd
df = pd.read_csv("tutorials.csv")
grouped = df.groupby("Title")
grouped.agg("mean")
[2]:
| Title | Unnamed: 0 | 2021-12 | 2022-01 | 2022-02 |
|---|---|---|---|---|
| Jupyter Tutorial | 0.5 | 18103.5 | 20505.5 | 13099.0 |
| PyViz Tutorial | 2.0 | 4873.0 | 3930.0 | 2573.0 |
| Python Basics | 4.5 | 261.0 | 251.0 | 341.0 |
[3]:
import dask.dataframe as dd
ddf = dd.read_csv("tutorials.csv")
ddf.groupby("Title").agg("mean").compute()
[3]:
| Title | Unnamed: 0 | 2021-12 | 2022-01 | 2022-02 |
|---|---|---|---|---|
| Jupyter Tutorial | 0.5 | 18103.5 | 20505.5 | 13099.0 |
| PyViz Tutorial | 2.0 | 4873.0 | 3930.0 | 2573.0 |
| Python Basics | 4.5 | 261.0 | 251.0 | 341.0 |
Dask Array¶
… imitates NumPy.
[4]:
import h5py
import numpy as np
f = h5py.File("mydata.h5")
x = np.array(f["."])
[5]:
import dask.array as da
f = h5py.File("mydata.h5")
x = da.array(f["."])
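Since the example above depends on a local `mydata.h5` file, here is a self-contained sketch of the same idea: any array-like object (an HDF5 dataset, or here a plain NumPy array standing in for one) can be wrapped in chunks with `da.from_array`, after which reductions mirror the NumPy API.

```python
import numpy as np
import dask.array as da

# Stand-in for an HDF5 dataset: anything with shape and dtype works
data = np.arange(1_000_000, dtype="float64")

# Wrap the array in chunks of 100,000 elements each
x = da.from_array(data, chunks=100_000)

# Reductions look exactly like NumPy, plus a final .compute()
total = x.sum().compute()
print(total)
```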
Dask Bag¶
… imitates iterators, Toolz and PySpark.
[6]:
import dask.bag as db
b = db.from_sequence([10, 3, 5, 7, 11, 4])
list(b.topk(2))
[6]:
[11, 10]
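Beyond `topk`, bags support the familiar functional operations, all evaluated lazily until `.compute()`. A small sketch on the same sequence:

```python
import dask.bag as db

b = db.from_sequence([10, 3, 5, 7, 11, 4])

# Chain filter and map lazily, then trigger execution with .compute()
evens_squared = b.filter(lambda n: n % 2 == 0).map(lambda n: n * n)
print(evens_squared.compute())  # [100, 16]
```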
Dask Delayed¶
… imitates loops and wraps custom code, see Creating a delayed pipeline.
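As a minimal sketch of the idea (the functions here are illustrative, not from the linked pipeline): `dask.delayed` wraps ordinary Python functions so that calling them records a task instead of running it, and the resulting graph executes on `.compute()`.

```python
from dask import delayed

@delayed
def inc(x):
    return x + 1

@delayed
def double(x):
    return 2 * x

@delayed
def add(a, b):
    return a + b

# These calls only build a task graph; inc and double can run in parallel
total = add(inc(1), double(10))

print(total.compute())  # 22
```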
concurrent.futures¶
… extends Python’s concurrent.futures interface and enables the submission of user-defined tasks.
Note:
For the following example, Dask must be installed with the distributed option, for example:
$ uv add "dask[distributed]"
[7]:
from dask.distributed import Client
[8]:
client = Client()
This starts local workers as processes. To run the local workers as threads instead, pass processes=False as a parameter:
client = Client(processes=False)
Now you can execute your own tasks and chain dependencies using the submit method:
[9]:
from math import pi


def inc(x):
    return x + 1


def circumference(x):
    return 2 * pi * x


increments = client.submit(inc, 10)
circumferences = client.submit(circumference, increments)
[10]:
circumferences.result()
[10]:
69.11503837897544