
not_implemented error trying to add validation step to lightning module #9315

Open

EliHei2 opened this issue May 13, 2024 · 3 comments

@EliHei2

EliHei2 commented May 13, 2024

🐛 Describe the bug

Hello esteemed PyG developers,

I'm trying to train the following simple model:

import lightning as L
import torch
import torchmetrics
from torch_geometric.loader import DataLoader  # collates (Hetero)Data objects


class LitSegger(L.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.validation_step_outputs = []

    def training_step(self, batch, batch_idx):
        ...  # forward pass producing out_values and edge_label (omitted)
        loss = criterion(out_values, edge_label)
        self.log("train_loss", loss, prog_bar=True)
        return loss

    def validation_step(self, batch, batch_idx):
        ...  # forward pass producing out_values and edge_label (omitted)
        loss = criterion(out_values, edge_label)
        auroc = torchmetrics.AUROC(task="binary")
        auroc_res = auroc(out_values, edge_label)  # (preds, target)
        self.log("validation_loss", loss, on_step=False, on_epoch=True)
        self.log("validation_score", auroc_res, on_step=False, prog_bar=True, on_epoch=True)
        self.validation_step_outputs.append(auroc_res)
        return loss

    def on_validation_epoch_end(self):
        all_outs = torch.stack(self.validation_step_outputs)
        print(all_outs.sum())
        self.validation_step_outputs.clear()

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer
        
trainer = L.Trainer(
    accelerator="cuda",
    strategy="auto",
    precision="16-mixed",
    devices=1,
    max_epochs=100,  # default_root_dir="./log",
)
    
trainer.logger._default_hp_metric = None

xe_train_ds     = XeniumDataset(root='data_tidy/pyg_datasets/XeniumDataset_v4_debug/train_tiles')
xe_train_loader = DataLoader(xe_train_ds, batch_size=32, num_workers=0, pin_memory=True)

xe_val_ds     = XeniumDataset(root='data_tidy/pyg_datasets/XeniumDataset_v4_debug/val_tiles')
xe_val_loader = DataLoader(xe_val_ds, batch_size=32, num_workers=0, pin_memory=True)

litsegger = LitSegger(model)  # `model` is the underlying GNN, defined elsewhere

# trainer.fit_loop.max_epochs += 100
trainer.fit(litsegger, xe_train_loader, xe_val_loader)

Training without the validation dataloader works fine, but as soon as I add the validation dataloader I get the following error:

  File "/home/.conda/envs/py39/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 406, in log
    batch_size = self._extract_batch_size(self[key], batch_size, meta)
  File "/home/.conda/envs/py39/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 340, in _extract_batch_size
    batch_size = extract_batch_size(self.batch)
  File "/home/.conda/envs/py39/lib/python3.9/site-packages/lightning/pytorch/utilities/data.py", line 72, in extract_batch_size
    for bs in _extract_batch_size(batch):
  File "/home/.conda/envs/py39/lib/python3.9/site-packages/lightning/pytorch/utilities/data.py", line 51, in _extract_batch_size
    for sample in batch:
  File "/home/.conda/envs/py39/lib/python3.9/site-packages/torch_geometric/data/feature_store.py", line 527, in __iter__
    raise NotImplementedError
NotImplementedError

To my understanding, Lightning treats the batches from the DataLoader as a FeatureStore, but I couldn't dig deeper into what exactly is happening. Worth mentioning that the graph is a HeteroData object. I would very much appreciate it if you could give me an idea of what is going on there.

Versions

PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 11.1.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.17

Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.114.2.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM3-32GB
GPU 1: Tesla V100-SXM3-32GB
GPU 2: Tesla V100-SXM3-32GB
GPU 3: Tesla V100-SXM3-32GB
GPU 4: Tesla V100-SXM3-32GB
GPU 5: Tesla V100-SXM3-32GB
GPU 6: Tesla V100-SXM3-32GB
GPU 7: Tesla V100-SXM3-32GB
GPU 8: Tesla V100-SXM3-32GB
GPU 9: Tesla V100-SXM3-32GB
GPU 10: Tesla V100-SXM3-32GB
GPU 11: Tesla V100-SXM3-32GB
GPU 12: Tesla V100-SXM3-32GB
GPU 13: Tesla V100-SXM3-32GB
GPU 14: Tesla V100-SXM3-32GB
GPU 15: Tesla V100-SXM3-32GB

Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz
Stepping: 4
CPU MHz: 3083.807
CPU max MHz: 3700.0000
CPU min MHz: 1200.0000
BogoMIPS: 5400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.4
[pip3] numpydoc==1.5.0
[pip3] numpyro==0.12.1
[pip3] pytorch-lightning==2.0.2
[pip3] torch==2.0.1
[pip3] torch_cluster==1.6.3+pt23cu121
[pip3] torch_geometric==2.5.2
[pip3] torch_scatter==2.1.2+pt23cu121
[pip3] torch_sparse==0.6.18+pt23cu121
[pip3] torch_spline_conv==1.2.2+pt23cu121
[pip3] torchaudio==0.13.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.18.0
[pip3] triton==2.3.0
[conda] numpy 1.23.4 pypi_0 pypi
[conda] numpy-base 1.26.4 py39h8a23956_0
[conda] numpydoc 1.5.0 py39h06a4308_0
[conda] numpyro 0.12.1 pypi_0 pypi
[conda] pytorch-lightning 2.0.2 pypi_0 pypi
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.5.2 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torchaudio 0.13.1 py39_cpu pytorch
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi

@EliHei2 EliHei2 added the bug label May 13, 2024
@rusty1s
Member

rusty1s commented May 13, 2024

Thanks for reporting. Do you have a minimal example that operates on a torch_geometric dataset provided by PyG? Does this only occur on heterogeneous graphs? You can probably get around this by passing the batch_size to the logger calls.
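
For reference, a sketch of what this could look like without hard-coding the value (an illustration, not code from the thread; it assumes the collated batches are PyG Batch objects, which expose num_graphs):

    def validation_step(self, batch, batch_idx):
        ...
        # Pass the batch size explicitly instead of letting Lightning try to
        # infer it by iterating over the (Hetero)Data batch.
        self.log("validation_loss", loss, on_step=False, on_epoch=True,
                 batch_size=batch.num_graphs)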

@EliHei2
Author

EliHei2 commented May 13, 2024

Hey @rusty1s, thanks for your quick response. Adding batch_size indeed solves the problem (see the code below). I overlooked that part of the traceback; it was kind of obvious 😅.

... 
        self.log("validation_loss", loss, on_step=False, on_epoch=True, batch_size=32)
...

But I'm still wondering where the cast to FeatureStore happens (I never used it explicitly). I don't know how to share the dataset, but this is one example HeteroData object from it:

HeteroData(
  tx={
    pos=[443, 3],
    x=[443, 280],
  },
  nc={ x=[2, 4] },
  (tx, belongs, nc)={
    edge_index=[2, 128],
    edge_label=[256],
    edge_label_index=[2, 256],
  },
  (tx, neighbors, tx)={ edge_index=[2, 6475] }
)
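
For anyone who wants to poke at this without the dataset, an object with the same structure can be mocked from random tensors (a rough sketch derived from the shapes printed above, not the real data):

import torch
from torch_geometric.data import HeteroData

data = HeteroData()
# 'tx' nodes: 443 nodes with 3-d positions and 280-d features
data['tx'].pos = torch.rand(443, 3)
data['tx'].x = torch.rand(443, 280)
# 'nc' nodes: 2 nodes with 4-d features
data['nc'].x = torch.rand(2, 4)
# ('tx', 'belongs', 'nc') edges, plus link-prediction labels
data['tx', 'belongs', 'nc'].edge_index = torch.stack([
    torch.randint(0, 443, (128,)),  # source 'tx' indices
    torch.randint(0, 2, (128,)),    # target 'nc' indices
])
data['tx', 'belongs', 'nc'].edge_label = torch.randint(0, 2, (256,)).float()
data['tx', 'belongs', 'nc'].edge_label_index = torch.stack([
    torch.randint(0, 443, (256,)),
    torch.randint(0, 2, (256,)),
])
# ('tx', 'neighbors', 'tx') edges
data['tx', 'neighbors', 'tx'].edge_index = torch.randint(0, 443, (2, 6475))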

I also never tried this with homogeneous Data objects, so I have no idea whether the error appears there as well.

@rusty1s
Member

rusty1s commented May 22, 2024

But I'm still wondering when the casting to FeatureStore happens (as I never used it explicitly)

PL uses some internal logic to infer the batch size by iterating over the attributes of the batch. Since Data inherits from FeatureStore, there is no cast; Lightning simply ends up calling a FeatureStore method (__iter__) that is not implemented at the Data level.
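
A stripped-down illustration of that chain (a sketch mirroring the traceback above, not the actual Lightning/PyG source; versions as in the report):

import torch
from torch_geometric.data import HeteroData

# Stand-in for a collated validation batch (shapes are irrelevant here).
batch = HeteroData()
batch['tx'].x = torch.rand(4, 16)

# When self.log(...) is called without batch_size, Lightning tries to infer it
# roughly like this (lightning/pytorch/utilities/data.py):
try:
    for sample in batch:  # HeteroData falls back to FeatureStore.__iter__,
        pass              # which raises NotImplementedError in PyG 2.5.2
except NotImplementedError:
    print("cannot iterate a HeteroData batch -> batch size inference fails")

Passing batch_size=... to self.log() therefore sidesteps the inference entirely.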
