Model definition

This model is not intended to demonstrate state-of-the-art performance of the Transolver architecture. It is simply a proof-of-concept showing how to load a custom PhysicsNeMo model through transformers.AutoModel using Hugging Face's auto_map mechanism. We use the Transolver implementation provided in nvidia-physicsnemo.
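
Under the hood, auto_map works through the repository's config.json, which points each transformers Auto class at a Python module shipped alongside the weights. A minimal sketch is given below; the file and class names are illustrative, not the actual contents of this repository:

{
  "auto_map": {
    "AutoConfig": "configuration_transolver.TransolverConfig",
    "AutoModel": "modeling_transolver.TransolverModel"
  }
}

With such an entry, transformers downloads the referenced module from the repository and instantiates the custom class, which is why from_pretrained must be called with trust_remote_code=True (as in the example below).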

Data

The model was trained for 100 epochs on the fabiencasenave/Rotor37 dataset using the following split:

ds_trainval = ds["train"].train_test_split(test_size=0.2, seed=42, shuffle=True)

where ds is the DatasetDict loaded from fabiencasenave/Rotor37. The validation split was used only for monitoring loss values; no hyperparameter tuning was performed.

The test split used in the example below does not contain the target values. If you want to compare predictions with target values, use the train split instead and add the column Base_2_3/Zone/PointData/Pressure.
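
As a hedged sketch, assuming the inference loop below has been run on the train split with that column added to select_columns (so pred, batch, and device are in scope, and the pressure column has shape [B, N]), a comparison could look like:

target = batch["Base_2_3/Zone/PointData/Pressure"].to(device)  # [B, N]
mse = torch.mean((pred.squeeze(-1) - target) ** 2)  # compare un-normalized pressures
print(f"Batch MSE: {mse.item():.4e}")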

Usage example

A full inference example is given below.

import json

import torch
from huggingface_hub import hf_hub_download
from plaid.bridges import huggingface_bridge as hfb
from torch.utils.data import DataLoader
from transformers import AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# ---- Load model ----
# trust_remote_code is required because the model class is resolved via auto_map
model = AutoModel.from_pretrained(
    "Nionio/transolver-rotor37-small-train", trust_remote_code=True
).to(device)
model.eval()

# ---- Load normalization stats ----
norm_path = hf_hub_download("Nionio/transolver-rotor37-small-train", "norm_stats.json")
with open(norm_path) as f:
    norms = json.load(f)

# ---- Load a test set ----
ds = (
    hfb.load_dataset_from_hub("fabiencasenave/Rotor37")
    .select_columns(
        [
            "Base_2_3/Zone/GridCoordinates/CoordinateX",
            "Base_2_3/Zone/GridCoordinates/CoordinateY",
            "Base_2_3/Zone/GridCoordinates/CoordinateZ",
            "Base_2_3/Zone/PointData/NormalsX",
            "Base_2_3/Zone/PointData/NormalsY",
            "Base_2_3/Zone/PointData/NormalsZ",
            "Global/Omega",
            "Global/P",
        ]
    )
    .with_format("torch")
)["test"]

p_mean = torch.tensor(norms["p_mean"], device=device)
p_std = torch.tensor(norms["p_std"], device=device)
scalar_mean = torch.tensor(norms["scalar_mean"], device=device)
scalar_std = torch.tensor(norms["scalar_std"], device=device)

# ---- Example batch ----
dataloader = DataLoader(ds, batch_size=4, shuffle=False)
for batch in dataloader:
    coords = torch.stack(
        [
            batch["Base_2_3/Zone/GridCoordinates/CoordinateX"],
            batch["Base_2_3/Zone/GridCoordinates/CoordinateY"],
            batch["Base_2_3/Zone/GridCoordinates/CoordinateZ"],
        ],
        dim=-1,
    ).to(device)  # [B, N, 3]

    normals = torch.stack(
        [
            batch["Base_2_3/Zone/PointData/NormalsX"],
            batch["Base_2_3/Zone/PointData/NormalsY"],
            batch["Base_2_3/Zone/PointData/NormalsZ"],
        ],
        dim=-1,
    ).to(device)  # [B, N, 3]

    # ---- Normalize global input scalars ----
    input_scalars = torch.cat(
        [batch["Global/Omega"], batch["Global/P"]],
        dim=-1,
    ).to(device)  # [B, 2]
    input_scalars = (input_scalars - scalar_mean) / scalar_std  # normalize
    # Broadcast scalars to nodes
    inputs = input_scalars.unsqueeze(1).expand(-1, normals.shape[1], -1)  # [B, N, 2]

    fx = torch.cat([normals, inputs], dim=-1)  # [B, N, 5]
    pred = model(fx, coords)  # [B, N, 1]

    # Un-normalize the prediction back to physical pressure units
    pred = pred * p_std + p_mean  # [B, N, 1]

    break  # process only one batch for this example