High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
Transformers 3rd Edition
Implementations of various ML tasks on the Kaggle platform with GPUs.
Repo for training and experiments with the skin-cancer dataset from Kaggle
Train and fine-tune diffusion models. Perform image-to-image class transfer experiments.
A convenient way to trigger synchronizations to wandb / Weights & Biases if your compute nodes don't have internet!
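For offline compute nodes, wandb's built-in offline mode covers the common case: a minimal sketch (the training script name and run directory glob are illustrative) of logging locally and syncing later:

```python
import os

# Assumption: this runs on an air-gapped compute node. With WANDB_MODE=offline,
# wandb.init()/wandb.log() write run data to the local ./wandb/ directory
# instead of trying to reach the network.
os.environ["WANDB_MODE"] = "offline"

# ... training code calls wandb.init() and wandb.log() as usual ...

# Later, from a machine with internet access, upload the buffered runs:
#   wandb sync wandb/offline-run-*
print(os.environ["WANDB_MODE"])
```

The repo above automates the second step (triggering `wandb sync` from the login node), which is useful when runs should appear in the dashboard while training is still in progress.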
Testing Ray Tune with Slurm batch submission, Optuna, and wandb
In this project we developed a deep autoencoder built from dense neural networks to perform dimensionality reduction on the MNIST and FMNIST datasets. The project includes training, saving, and evaluating models with PyTorch, and uses the Weights & Biases library to monitor and compare model performance.
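A minimal sketch of such a dense autoencoder in PyTorch (the layer sizes and latent dimension here are illustrative assumptions, not the repo's exact architecture):

```python
import torch
import torch.nn as nn

class DenseAutoencoder(nn.Module):
    """Compress 28x28 images to a low-dimensional code and reconstruct them."""

    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),        # bottleneck: the reduced representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Sigmoid(),  # pixels back in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z).view(-1, 1, 28, 28), z

model = DenseAutoencoder()
x = torch.rand(4, 1, 28, 28)          # a dummy batch of MNIST-shaped images
recon, code = model(x)
print(recon.shape, code.shape)
```

Training would minimize a reconstruction loss such as `nn.MSELoss()(recon, x)`, with metrics logged to Weights & Biases via `wandb.log`.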
All Assignments of the course, Statistical Methods in AI at IIITH, Monsoon 2024
Interactively inspect module inputs, outputs, parameters, and gradients.
PyTorch implementation of the U-Net for image semantic segmentation with high quality images
DL model deployment using Docker, API deployment with FastAPI, and MLOps using WandB for the overhead-mnist dataset
PyTorch-Lightning Library for Neural News Recommendation
Hackable boilerplate for PyTorch Lightning-driven deep learning research in Lightning AI Studios
Files needed to run flow-forecast as a container.