DeepOps

A mini deep learning library, accelerated on GPUs with PyCUDA.

No no no.. I haven't written this library on top of TensorFlow or PyTorch, it is completely standalone. :P

Backpropagation is implemented via reverse traversal of the computation graph, with support for gradients and GPU operations. You can build a mini neural network (FNN) and use the in-house optimizer to train it on a dataset (e.g. MNIST).

Tip: always give your tensor a funny name! :)

Note: for educational use only.
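
To illustrate the reverse-traversal idea, here is a minimal, generic sketch of reverse-mode backprop. This is illustrative only, not the actual deepops internals; the `parents` and `backward_fn` attributes are hypothetical names:

def backward(out):
    # Topologically sort the graph rooted at `out`.
    topo, visited = [], set()

    def build(node):
        if node not in visited:
            visited.add(node)
            for parent in node.parents:   # hypothetical attribute
                build(parent)
            topo.append(node)

    build(out)
    # Seed d(out)/d(out) = 1, then visit nodes in reverse order,
    # letting each node push its gradient to its parents.
    out.grad = 1.0
    for node in reversed(topo):
        node.backward_fn()                # hypothetical attribute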


Installation.

pip install deepop
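
The GPU-accelerated operations require a CUDA-capable GPU with a working PyCUDA installation.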

Neural Network

import deepops as dp
from deepops.model import Model


class SeqFC(Model):
    def __init__(self):
        super().__init__()
        self.dense1 = dp.layers.Dense(2, 2, activation="relu", name="dense1")  # 2 -> 2, ReLU
        self.dense2 = dp.layers.Dense(2, 1, name="dense2")  # 2 -> 1, linear output

    def forward(self, x):
        x = self.dense1(x)
        x = self.dense2(x)
        return x

Backward Pass.

sequential = SeqFC()
x = dp.Tensor([1.0, 2.0])  # a sample input with two features
sequential.forward(x)
sequential.init_backward()
sequential.backward()

print([p.grad for p in sequential.parameters()])
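
With the gradients in hand, here is a minimal sketch of a manual gradient-descent step (the in-house optimizer is still on the TODO list; the `.data` attribute used here is an assumption, not a confirmed deepops API):

lr = 0.01  # learning rate
for p in sequential.parameters():
    # assumed attributes: .data holds the values, .grad the gradient
    p.data = p.data - lr * p.grad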

Tensor.

a = dp.Tensor([1, 2, 3, 4, 5])
# a deepops tensor

Attach.

a = dp.Tensor([1, 2, 3, 4, 5])
a.device("gpu:0")  # attach the tensor to a gpu device

Check the Device.

a.where
# 'cpu' (the default device, before attaching)

Addition.

a = dp.Tensor([1.0, 2.0])
print(a + a)
# runs as a GPU operation when `a` is attached to a gpu device
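
Attaching both operands first, a small sketch, assuming operations dispatch to whichever device the tensors are attached to:

a = dp.Tensor([1.0, 2.0])
b = dp.Tensor([3.0, 4.0])
a.device("gpu:0")  # attach both operands to the gpu
b.device("gpu:0")
print(a + b)       # should now run on the gpu device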

Multiplication.

a = dp.Tensor([1.0, 2.0])
print(a.mul(a))  # method form
print(a * a)     # operator form, equivalent

Calculate Gradients.

Tensor = dp.Tensor
a1 = Tensor([1.0, 3.0, 1.0])
b1 = Tensor([7.0, 3.0, 5.0])

a2 = Tensor([4.0, 3.0, 1.0])
a3 = Tensor([3.0, 3.0, 1.0])
a4 = Tensor([7.0, 1.0, 6.0])
b2 = Tensor([1.0, 21.0, 12.0])

c = a1 * b1 + a3
d = a2 * b2 + a4
out = c * d

# backward: seed out.grad and propagate gradients through the graph
out.backward()

print(out.grad)
print(a1.grad)
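
As a sanity check, assuming elementwise semantics, the chain rule gives ∂out/∂a1 = d * ∂c/∂a1 = d * b1. With the values above, d = a2 * b2 + a4 = [11.0, 64.0, 18.0], so a1.grad should come out to [77.0, 192.0, 90.0].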

Run Tests.

python -m pytest -s

Contributions are highly appreciated.

Please feel free to contribute.

TODOs

  • write more tests...
  • need an optimizer.
  • support more operations.

License

MIT