
Step-by-Step Guide to Creating Synthetic Data Using the Synthetic Data Vault (SDV)

Real-world data is often costly, messy, and limited by privacy rules. Synthetic data offers a way around these constraints and is already widely used, for example to train large language models (LLMs) on AI-generated text, simulate edge cases for fraud detection systems, and pretrain vision models on artificial images.

The Synthetic Data Vault (SDV) is an open-source Python library designed to generate realistic tabular data using machine learning. It learns patterns from real data and creates high-quality synthetic data for safe sharing, testing, and model training.

Installation of the SDV Library

To begin, we need to install the SDV library:

pip install sdv

Reading the Dataset

Next, we import the necessary module and point it at the local folder containing the dataset files. This reads every CSV file in that folder and stores each one as a pandas DataFrame, keyed by filename. We access the main table with data['data'], which corresponds to a file named data.csv.

from sdv.io.local import CSVHandler

connector = CSVHandler()
FOLDER_NAME = '.'  # If the data is in the same directory

data = connector.read(folder_name=FOLDER_NAME)
salesDf = data['data']
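
Before defining metadata, it helps to confirm that the table loaded as expected, for example:

# Quick sanity check on the loaded table
print(salesDf.shape)
print(salesDf.dtypes)
print(salesDf.head())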

Importing Metadata

Now, we import the metadata for our dataset. This metadata is stored in a JSON file and tells SDV how to interpret your data. It includes:

  • The table name
  • The primary key
  • The data type of each column (e.g., categorical, numerical, datetime)
  • Optional column formats like datetime patterns or ID patterns
  • Table relationships (for multi-table setups)

Here is a sample metadata.json format:

{
  "METADATA_SPEC_VERSION": "V1",
  "tables": {
    "your_table_name": {
      "primary_key": "your_primary_key_column",
      "columns": {
        "your_primary_key_column": { "sdtype": "id", "regex_format": "T[0-9]{6}" },
        "date_column": { "sdtype": "datetime", "datetime_format": "%d-%m-%Y" },
        "category_column": { "sdtype": "categorical" },
        "numeric_column": { "sdtype": "numerical" }
      },
      "column_relationships": []
    }
  }
}
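
Once this file is saved, the metadata can be loaded into SDV. A minimal sketch, assuming the file is named metadata.json and sits in the working directory:

from sdv.metadata import Metadata

# Load the metadata definition from the JSON file (the filename is an assumption)
metadata = Metadata.load_from_json(filepath='metadata.json')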

Detecting Metadata Automatically

Alternatively, we can let SDV infer the metadata automatically. The detected metadata may not always be accurate or complete, so review it and correct any discrepancies before training:

from sdv.metadata import Metadata

metadata = Metadata.detect_from_dataframes(data)
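
If the detection gets a column wrong, the metadata object can be corrected and saved before training. A minimal sketch, where the table name 'data' and the column 'Date' are assumptions used for illustration:

# Inspect what SDV detected
print(metadata.to_dict())

# Example correction: declare the date column as a datetime with an explicit format
# (table and column names here are assumptions)
metadata.update_column(
    column_name='Date',
    table_name='data',
    sdtype='datetime',
    datetime_format='%d-%m-%Y'
)

# Validate the result and save it for reuse
metadata.validate()
metadata.save_to_json(filepath='metadata_reviewed.json')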

Generating Synthetic Data

With the metadata and original dataset ready, we can now use SDV to train a model and generate synthetic data. The model learns the structure and patterns in your real dataset and uses that knowledge to create synthetic records. You can control how many rows to generate using the num_rows argument:

from sdv.single_table import GaussianCopulaSynthesizer

synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(data=salesDf)
synthetic_data = synthesizer.sample(num_rows=10000)
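
Fitting can take a while on larger tables, so it is worth persisting the trained model. SDV synthesizers can be saved to disk and reloaded later; a minimal sketch, with an arbitrary filename:

# Save the fitted synthesizer so it can be reused without retraining
synthesizer.save(filepath='sales_synthesizer.pkl')

# Later: reload it and sample additional rows
synthesizer = GaussianCopulaSynthesizer.load(filepath='sales_synthesizer.pkl')
more_synthetic_data = synthesizer.sample(num_rows=500)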

Evaluating Synthetic Data Quality

The SDV library provides tools to evaluate the quality of your synthetic data by comparing it to the original dataset. A good starting point is to generate a quality report:

from sdv.evaluation.single_table import evaluate_quality

quality_report = evaluate_quality(
    salesDf,
    synthetic_data,
    metadata)
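
evaluate_quality prints a summary while it runs, and the returned report object can be queried for more detail, for example:

# Overall quality score between 0 and 1 (higher is better)
print(quality_report.get_score())

# Per-property scores, e.g. Column Shapes and Column Pair Trends
print(quality_report.get_properties())

# Column-level breakdown for a single property
print(quality_report.get_details(property_name='Column Shapes'))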

You can also visualize how the synthetic data compares to the real data using SDV’s built-in plotting tools. For example, you can create comparison plots for specific columns:

from sdv.evaluation.single_table import get_column_plot

fig = get_column_plot(
    real_data=salesDf,
    synthetic_data=synthetic_data,
    column_name='Sales',
    metadata=metadata
)

fig.show()

Visualizing Average Monthly Sales Trends

We can further analyze the data by visualizing the average monthly sales trends across both datasets:

import pandas as pd
import matplotlib.pyplot as plt

# Ensure 'Date' columns are datetime
salesDf['Date'] = pd.to_datetime(salesDf['Date'], format='%d-%m-%Y')
synthetic_data['Date'] = pd.to_datetime(synthetic_data['Date'], format='%d-%m-%Y')

# Extract 'Month' as year-month string
salesDf['Month'] = salesDf['Date'].dt.to_period('M').astype(str)
synthetic_data['Month'] = synthetic_data['Date'].dt.to_period('M').astype(str)

# Group by 'Month' and calculate average sales
actual_avg_monthly = salesDf.groupby('Month')['Sales'].mean().rename('Actual Average Sales')
synthetic_avg_monthly = synthetic_data.groupby('Month')['Sales'].mean().rename('Synthetic Average Sales')

# Merge the two series into a DataFrame
avg_monthly_comparison = pd.concat([actual_avg_monthly, synthetic_avg_monthly], axis=1).fillna(0)

# Plot
plt.figure(figsize=(10, 6))
plt.plot(avg_monthly_comparison.index, avg_monthly_comparison['Actual Average Sales'], label='Actual Average Sales', marker='o')
plt.plot(avg_monthly_comparison.index, avg_monthly_comparison['Synthetic Average Sales'], label='Synthetic Average Sales', marker='o')

plt.title('Average Monthly Sales Comparison: Actual vs Synthetic')
plt.xlabel('Month')
plt.ylabel('Average Sales')
plt.xticks(rotation=45)
plt.grid(True)
plt.legend()
plt.ylim(bottom=0)  # y-axis starts at 0
plt.tight_layout()
plt.show()

The chart shows that average monthly sales in the two datasets track each other closely, with only minor differences.

Conclusion

In this tutorial, we demonstrated how to prepare your data and metadata for synthetic data generation using the SDV library. By training a model on your original dataset, SDV can create high-quality synthetic data that closely mirrors the real data’s patterns and distributions. We also explored how to evaluate and visualize the synthetic data, confirming that key metrics like sales distributions and monthly trends remain consistent. Synthetic data offers a powerful way to overcome privacy and availability challenges while enabling robust data analysis and machine learning workflows.

Check out the Notebook on GitHub.
