
A Step by Step Guide to Solve 1D Burgers’ Equation with Physics-Informed Neural Networks (PINNs): A PyTorch Approach Using Automatic Differentiation and Collocation Methods

In this tutorial, we explore an approach that blends deep learning with physical laws by leveraging Physics-Informed Neural Networks (PINNs) to solve the one-dimensional Burgers’ equation. Using PyTorch on Google Colab, we demonstrate how to encode the governing differential equation directly into the neural network’s loss function, allowing the model to learn a solution 𝑢(𝑥,𝑡) that inherently respects the underlying physics. This technique reduces the reliance on large labeled datasets and offers a fresh perspective on solving complex, non-linear partial differential equations with modern computational tools.
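Concretely, the problem we encode (matching the setup in the code below) is the viscous Burgers’ equation

$$u_t + u\,u_x = \nu\,u_{xx}, \qquad x \in [-1, 1],\quad t \in [0, 1],\quad \nu = \frac{0.01}{\pi},$$

with initial condition $u(x, 0) = -\sin(\pi x)$ and homogeneous Dirichlet boundary conditions $u(-1, t) = u(1, t) = 0$.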


!pip install torch matplotlib

First, we install the PyTorch and matplotlib libraries using pip, ensuring you have the necessary tools for building neural networks and visualizing the results in your Google Colab environment.


import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

torch.set_default_dtype(torch.float32)

We import the essential libraries: PyTorch for deep learning, NumPy for numerical operations, and matplotlib for plotting. We also set the default tensor data type to float32 for consistent numerical precision throughout the computations.
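If you want to confirm the precision setting took effect, a quick optional check (not part of the original tutorial) is:

# Optional sanity check: freshly created float tensors should now be float32
print(torch.get_default_dtype())   # torch.float32
print(torch.empty(1).dtype)        # torch.float32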


x_min, x_max = -1.0, 1.0
t_min, t_max = 0.0, 1.0
nu = 0.01 / np.pi  # viscosity

N_f = 10000  # collocation points
N_0 = 200    # initial condition points
N_b = 200    # boundary condition points

# Collocation points sampled uniformly over the (x, t) domain
X_f = np.random.rand(N_f, 2)
X_f[:, 0] = X_f[:, 0] * (x_max - x_min) + x_min  # x in [-1, 1]
X_f[:, 1] = X_f[:, 1] * (t_max - t_min) + t_min  # t in [0, 1]

# Initial condition: u(x, 0) = -sin(pi * x)
x0 = np.linspace(x_min, x_max, N_0)[:, None]
t0 = np.zeros_like(x0)
u0 = -np.sin(np.pi * x0)

# Boundary conditions: u(-1, t) = u(1, t) = 0
tb = np.linspace(t_min, t_max, N_b)[:, None]
xb_left = np.ones_like(tb) * x_min
xb_right = np.ones_like(tb) * x_max
ub_left = np.zeros_like(tb)
ub_right = np.zeros_like(tb)

# Convert to PyTorch tensors; collocation points need gradients for the PDE residual
X_f = torch.tensor(X_f, dtype=torch.float32, requires_grad=True)
x0 = torch.tensor(x0, dtype=torch.float32)
t0 = torch.tensor(t0, dtype=torch.float32)
u0 = torch.tensor(u0, dtype=torch.float32)
tb = torch.tensor(tb, dtype=torch.float32)
xb_left = torch.tensor(xb_left, dtype=torch.float32)
xb_right = torch.tensor(xb_right, dtype=torch.float32)
ub_left = torch.tensor(ub_left, dtype=torch.float32)
ub_right = torch.tensor(ub_right, dtype=torch.float32)

We establish the simulation domain for the Burgers’ equation by defining the spatial and temporal boundaries, the viscosity, and the number of collocation, initial, and boundary points. We then generate random collocation points and evenly spaced initial and boundary points, and convert them all into PyTorch tensors, enabling gradient computation where needed.
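As an optional sanity check (an addition beyond the original code), printing the tensor shapes confirms the collocation, initial, and boundary sets are laid out as expected:

# Optional: verify shapes before training
print(X_f.shape)                 # torch.Size([10000, 2]) -- (x, t) collocation pairs
print(x0.shape, u0.shape)        # torch.Size([200, 1]) each -- initial condition at t = 0
print(tb.shape, xb_left.shape)   # torch.Size([200, 1]) each -- boundary times and the x = -1 wall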


class PINN(nn.Module):
    def __init__(self, layers):
        super(PINN, self).__init__()
        self.activation = nn.Tanh()

        # Build the network dynamically from the list of layer sizes
        layer_list = []
        for i in range(len(layers) - 1):
            layer_list.append(nn.Linear(layers[i], layers[i+1]))
        self.layers = nn.ModuleList(layer_list)

    def forward(self, x):
        # Tanh activation on every layer except the final (linear) output layer
        for i, layer in enumerate(self.layers[:-1]):
            x = self.activation(layer(x))
        return self.layers[-1](x)

layers = [2, 50, 50, 50, 50, 1]
model = PINN(layers)
print(model)

Here, we define a custom Physics-Informed Neural Network (PINN) by extending PyTorch’s nn.Module. The network architecture is built dynamically using a list of layer sizes, where each linear layer is followed by a Tanh activation (except for the final output layer). In this example, the network takes a 2-dimensional input, passes it through four hidden layers (each with 50 neurons), and outputs a single value. Finally, the model is instantiated with the specified architecture, and its structure is printed.
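A quick smoke test (optional, not in the original post) is to push a dummy batch of (x, t) pairs through the untrained network and confirm the output shape:

# Optional: 5 random (x, t) inputs should yield 5 scalar predictions
dummy = torch.rand(5, 2)
print(model(dummy).shape)   # torch.Size([5, 1])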


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

Here, we check if a CUDA-enabled GPU is available, set the device accordingly, and move the model to that device for accelerated computation during training and inference.


def pde_residual(model, X):
    # Split collocation points into x and t so we can differentiate w.r.t. each
    x = X[:, 0:1]
    t = X[:, 1:2]
    u = model(torch.cat([x, t], dim=1))

    # First and second derivatives via automatic differentiation
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True, retain_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True, retain_graph=True)[0]

    # Burgers' equation residual: u_t + u*u_x - nu*u_xx should vanish
    f = u_t + u * u_x - nu * u_xx
    return f

def loss_func(model):
    # PDE residual loss at the collocation points
    f_pred = pde_residual(model, X_f.to(device))
    loss_f = torch.mean(f_pred**2)

    # Initial condition loss: u(x, 0) should match -sin(pi * x)
    u0_pred = model(torch.cat([x0.to(device), t0.to(device)], dim=1))
    loss_0 = torch.mean((u0_pred - u0.to(device))**2)

    # Boundary condition loss: u(-1, t) = u(1, t) = 0
    u_left_pred = model(torch.cat([xb_left.to(device), tb.to(device)], dim=1))
    u_right_pred = model(torch.cat([xb_right.to(device), tb.to(device)], dim=1))
    loss_b = torch.mean(u_left_pred**2) + torch.mean(u_right_pred**2)

    loss = loss_f + loss_0 + loss_b
    return loss

Now, we compute the residual of Burgers’ equation at the collocation points by calculating the required derivatives via automatic differentiation. Then, we define a loss function that aggregates the PDE residual loss, the error from the initial condition, and the errors from the boundary conditions. This combined loss guides the network to learn a solution that satisfies both the physical law and the imposed conditions.
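Written out, the objective that loss_func minimizes is

$$\mathcal{L} = \frac{1}{N_f}\sum_{i=1}^{N_f} f\big(x_f^i, t_f^i\big)^2 + \frac{1}{N_0}\sum_{i=1}^{N_0} \big(u_\theta(x_0^i, 0) - u_0(x_0^i)\big)^2 + \frac{1}{N_b}\sum_{i=1}^{N_b} \Big(u_\theta(-1, t_b^i)^2 + u_\theta(1, t_b^i)^2\Big),$$

where $u_\theta$ is the network’s prediction and $f = u_t + u\,u_x - \nu\,u_{xx}$ is the residual returned by pde_residual; the three terms correspond to loss_f, loss_0, and loss_b in the code.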


optimizer = optim.Adam(model.parameters(), lr=1e-3)
num_epochs = 5000

for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_func(model)
    loss.backward()
    optimizer.step()

    if (epoch + 1) % 500 == 0:
        print(f'Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.5e}')

print("Training complete!")

Here, we set up the PINN’s training loop using the Adam optimizer with a learning rate of 1×10⁻³. Over 5000 epochs, the loop repeatedly computes the loss (which includes the PDE residual, initial condition, and boundary condition errors), backpropagates the gradients, and updates the model parameters. Every 500 epochs, it prints the current epoch and loss to monitor progress, and finally announces when training is complete.
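A common refinement in the PINN literature, not used in this tutorial, is to follow Adam with a second-order L-BFGS polish, which often tightens the residual further. A minimal sketch, assuming the model and loss_func defined above:

# Optional: refine the Adam solution with L-BFGS (PyTorch's LBFGS requires a closure)
lbfgs = optim.LBFGS(model.parameters(), lr=1.0, max_iter=500)

def closure():
    lbfgs.zero_grad()
    l = loss_func(model)
    l.backward()
    return l

lbfgs.step(closure)
print(f'Loss after L-BFGS: {loss_func(model).item():.5e}')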


# Dense evaluation grid over the (x, t) domain
N_x, N_t = 256, 100
x = np.linspace(x_min, x_max, N_x)
t = np.linspace(t_min, t_max, N_t)
X, T = np.meshgrid(x, t)
XT = np.hstack((X.flatten()[:, None], T.flatten()[:, None]))
XT_tensor = torch.tensor(XT, dtype=torch.float32).to(device)

model.eval()
with torch.no_grad():
    u_pred = model(XT_tensor).cpu().numpy().reshape(N_t, N_x)

plt.figure(figsize=(8, 5))
plt.contourf(X, T, u_pred, levels=100, cmap='viridis')
plt.colorbar(label='u(x,t)')
plt.xlabel('x')
plt.ylabel('t')
plt.title("Predicted solution u(x,t) via PINN")
plt.show()

Finally, we create a grid of points over the defined spatial (𝑥) and temporal (𝑡) domain, feed these points to the trained model to predict the solution 𝑢(𝑥, 𝑡), and reshape the output into a 2D array. We then visualize the predicted solution as a contour plot using matplotlib, complete with a colorbar, axis labels, and a title, allowing you to observe how the PINN has approximated the dynamics of the Burgers’ equation.
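To see the characteristic front steepening more directly, an optional extra (beyond the original contour plot) is to slice the predicted field at a few fixed times:

# Optional: line plots of u(x, t) at selected time slices
plt.figure(figsize=(8, 5))
for ti in [0, N_t // 2, N_t - 1]:
    plt.plot(x, u_pred[ti, :], label=f't = {t[ti]:.2f}')
plt.xlabel('x')
plt.ylabel('u(x, t)')
plt.legend()
plt.title('PINN solution at selected times')
plt.show()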

In conclusion, this tutorial has showcased how PINNs can be effectively implemented to solve the 1D Burgers’ equation by incorporating the physics of the problem into the training process. Through careful construction of the neural network, generation of collocation and boundary data, and automatic differentiation, we obtained a model that learns a solution consistent with both the PDE and the prescribed conditions. This fusion of deep learning and traditional physics paves the way for tackling more challenging problems in computational science and engineering, inviting further exploration into higher-dimensional systems and more sophisticated neural architectures.
