Here is why PyTorch is more Pythonic than TensorFlow.
Youdiowei Eteimorde
Posted on July 27, 2021
One of the best ways to understand deep learning is to learn concepts rather than frameworks, so occasionally I take a piece of code written in TensorFlow and convert it to PyTorch, or vice versa. At a high level, there isn't much difference between the two libraries.
# Sequential model in both TensorFlow (Keras) and PyTorch
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        layers.Conv2D(20, 5, activation="relu"),
        layers.Conv2D(64, 5, activation="relu"),
    ]
)
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
Both frameworks implement artificial neural networks in much the same way. Thanks to Keras, TensorFlow is quite beginner-friendly. PyTorch can be intimidating at first, partly because you have to define your own training loop. That is great if you are a researcher, because you can easily take control of the entire process. TensorFlow also lets you define your own training loop, and it looks pretty much the same as PyTorch's (a sketch follows below). As you go deeper into both frameworks, you start seeing the differences between them. If you read articles or listen to deep learning experts, they usually say something like "PyTorch is more Pythonic". But how?
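To make the training-loop point concrete, here is a minimal sketch of a hand-written loop in both frameworks. The toy model, fake data, and hyperparameters are placeholders I chose for illustration, not code from either library's documentation.

# PyTorch: a minimal hand-written training loop (illustrative toy example)
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                                   # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(32, 10)                               # fake data standing in for a DataLoader
targets = torch.randn(32, 1)

for epoch in range(5):
    optimizer.zero_grad()                                  # reset gradients from the previous step
    predictions = model(inputs)                            # forward pass
    loss = loss_fn(predictions, targets)
    loss.backward()                                        # backward pass: compute gradients
    optimizer.step()                                       # update the weights

The TensorFlow version uses tf.GradientTape and reads almost the same:

# TensorFlow: the equivalent custom training loop (illustrative toy example)
import tensorflow as tf

model = tf.keras.layers.Dense(1)                           # toy model
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

inputs = tf.random.normal((32, 10))                        # fake data
targets = tf.random.normal((32, 1))

for epoch in range(5):
    with tf.GradientTape() as tape:                        # record the forward pass
        predictions = model(inputs)
        loss = loss_fn(targets, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))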
Historical difference
Originally there was a huge difference between TensorFlow and PyTorch, even at a high level. In TensorFlow 1.x, in order to train a neural network, you had to build a computational graph and then run it within a session.
import tensorflow as tf

# TensorFlow 1.x
x = tf.constant([1., 2., 3.])    # define a tensor x
y = tf.constant([4., 5., 6.])    # define a tensor y
add_op = tf.add(x, y)            # add operation

with tf.Session() as sess:       # create a session
    result = sess.run(add_op)    # run the graph inside the session
    print(result)                # [5. 7. 9.]
I still remember my first introduction to TensorFlow 1.x. It was so terrifying that I abandoned it and used Keras instead. This wasn't Pythonic at all; it felt like you were coding in Java rather than Python, and you couldn't test your code without building and running the whole graph. Luckily, TensorFlow 2.x fixed this issue and introduced eager execution, which lets programmers run ML code in a more Pythonic way.
# TensorFlow 2.x
x = tf.constant([1., 2., 3.])    # define a tensor x
y = tf.constant([4., 5., 6.])    # define a tensor y
result = tf.add(x, y)            # add operation, runs eagerly
print(result)                    # tf.Tensor([5. 7. 9.], shape=(3,), dtype=float32)
There is no need to run your code within a session; everything executes immediately. This is what made PyTorch and TensorFlow similar in many ways.
import torch

x = torch.tensor([1, 2, 3])    # define a tensor x
y = torch.tensor([4, 5, 6])    # define a tensor y
result = torch.add(x, y)       # add operation
print(result)                  # tensor([5, 7, 9])
PyTorch never made programmers build a computational graph and run it within a session; you could say PyTorch has always run eagerly.
PyTorch is still more Pythonic. Why?
Apart from its name, PyTorch simply feels more Pythonic, and here's why. PyTorch is basically NumPy with two extra capabilities: it can run on the GPU, and it can perform automatic differentiation (there is a short sketch of both after the comparison below). If you're already familiar with NumPy, using PyTorch will feel very intuitive.
import torch
import numpy as np
import tensorflow as tf

torch.tensor([1, 2, 3])
np.array([1, 2, 3])
tf.constant([1, 2, 3])
# OUTPUT: [1, 2, 3]

torch.ones(5)
np.ones(5)
tf.ones(5)
# OUTPUT: [1., 1., 1., 1., 1.]

torch.zeros(5)
np.zeros(5)
# OUTPUT: [0., 0., 0., 0., 0.]

t = torch.tensor([[1, 2, 3], [4, 5, 6]])
n = np.array([[1, 2, 3], [4, 5, 6]])
t.flatten()
n.flatten()
# OUTPUT: [1, 2, 3, 4, 5, 6]
# TF tensors have no .flatten() method (tf.reshape(t, [-1]) does the same job)

np.arange(4)
torch.arange(4)
tf.range(4)    # slightly different from NumPy's arange
# OUTPUT: [0, 1, 2, 3]
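To illustrate the two differences mentioned above, here is a minimal sketch of automatic differentiation and GPU placement in PyTorch. The tensor values are placeholders of my own choosing, and the GPU lines only run if CUDA is available.

import torch

# Automatic differentiation: ask PyTorch to track gradients for x
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()    # y = x1^2 + x2^2 + x3^2
y.backward()          # compute dy/dx
print(x.grad)         # tensor([2., 4., 6.])

# GPU support: move the tensor to the GPU if one is available
if torch.cuda.is_available():
    x_gpu = x.detach().to("cuda")    # same data, now living on the GPU
    print(x_gpu.device)              # cuda:0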
In general, all three libraries offer the same functionality, but PyTorch and NumPy are practically interchangeable. One way TensorFlow differs from NumPy and PyTorch is that TensorFlow tensors are immutable; they are like strings in Python.
n = np.array([1, 2, 3, 4])
p = torch.tensor([1, 2, 3, 4])
t = tf.constant([1, 2, 3, 4])

print(n[0])    # output: 1
print(p[0])    # output: 1
print(t[0])    # output: 1

n[0] = 2       # works: n is now [2, 2, 3, 4]
p[0] = 2       # works: p is now [2, 2, 3, 4]
t[0] = 2       # TypeError: tensors are immutable
There are ways around this in TensorFlow: you can define a tf.Variable instead of a tf.constant, but it is still not as straightforward as PyTorch, NumPy, or a regular Python list. This is just one of the things that sets TensorFlow apart from the others.
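As a minimal sketch of that workaround, a tf.Variable can be updated in place, but only through its assign methods rather than plain indexing:

import tensorflow as tf

v = tf.Variable([1, 2, 3, 4])    # mutable, unlike tf.constant
v[0].assign(2)                   # in-place update via assign, not v[0] = 2
print(v.numpy())                 # [2 2 3 4]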
Conclusion
PyTorch is a deep learning library built from the ground up to be very Python-centric. If you are already familiar with NumPy, PyTorch will feel natural. TensorFlow, on the other hand, is a bit eccentric in the way it performs some operations, which some might say is not Pythonic.