Classical Core Components:
All embedded in a high-level host language (in practice, Python)
You've already seen and used this sort of thing: NumPy.
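As a quick illustration of that analogy (a sketch assuming PyTorch as the framework, since its tensor API deliberately mirrors NumPy's; the array names are just for the example):

    import numpy as np
    import torch

    # The framework tensor mirrors the NumPy ndarray: same shapes,
    # dtypes, broadcasting rules, and (mostly) the same method names.
    a_np = np.arange(6, dtype=np.float32).reshape(2, 3)
    a_pt = torch.arange(6, dtype=torch.float32).reshape(2, 3)

    print((a_np * 2.0).sum())   # NumPy: 30.0
    print((a_pt * 2.0).sum())   # PyTorch: tensor(30.)

    # Conversion between the two is cheap; on CPU they can even share memory.
    b_pt = torch.from_numpy(a_np)
    b_np = a_pt.numpy()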
x.to("device_name")
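A minimal sketch of what that call does in PyTorch ("device_name" above is a placeholder for a real device string such as "cpu" or "cuda:0"; x here is just an example tensor):

    import torch

    x = torch.randn(3, 3)                 # tensors start on the CPU by default

    # Pick an accelerator if one is present; the string names the device.
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    x = x.to(device)                      # .to() returns a copy living on that device

    y = x @ x                             # subsequent ops run where the data lives
    print(y.device)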
When we manifest a node of the compute graph, we can either execute it immediately (eager execution), or merely record it and defer execution until the whole graph has been built (graph, or lazy, execution).
This was the classic distinction between TensorFlow (which, in its 1.x form, built a static graph up front and ran it later) and PyTorch (which executed each operation eagerly).
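An illustrative sketch of the two options, using PyTorch for both sides (eager execution vs. recording a graph with torch.jit.trace; the traced function stands in for the historical TensorFlow 1.x define-then-run style, and f, a, b are hypothetical names for the example):

    import torch

    def f(a, b):
        # An ordinary Python function over tensors.
        return (a * b).sum()

    a = torch.randn(4)
    b = torch.randn(4)

    # Eager: each op executes as the line runs, so intermediate values
    # can be inspected and debugged like any other Python code.
    print(f(a, b))

    # Graph: first record the ops into a compute graph (here by tracing),
    # then execute the captured graph on later calls.
    graph_f = torch.jit.trace(f, (a, b))
    print(graph_f.graph)                  # the recorded compute graph
    print(graph_f(a, b))                  # running the captured graph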