
Status: Archive (code is provided as-is, no updates expected)

Optimized-MC-Dropout-Using-DRL

MC-dropout estimates predictive uncertainty at test time from the variance of several dropout-enabled forward passes. Unfortunately, an effective MC-dropout estimate can require hundreds of feed-forward passes per prediction. In this repository, I model MC-dropout in a Deep Reinforcement Learning (DRL) framework to find the optimal number of passes needed to reach a predefined confidence level.
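
As a concrete illustration, here is a minimal MC-dropout sketch in PyTorch: several stochastic forward passes with dropout kept active, reduced to a predictive mean and variance. The framework choice, the function name `mc_dropout_predict`, and the softmax-classifier assumption are illustrative and not taken from this repository's code.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_passes: int = 50):
    """Run `n_passes` stochastic forward passes with dropout enabled
    and return the predictive mean and variance per class."""
    model.eval()
    # Re-enable only the dropout layers, leaving e.g. batch norm in eval mode.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    return preds.mean(dim=0), preds.var(dim=0)
```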

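The DRL framing can be pictured as a sequential stopping problem: after each dropout pass, an agent chooses to stop or to pay for one more pass, and is rewarded for reaching the target confidence cheaply. The environment class `MCDropoutEnv`, the two-feature state, and the reward shaping below are a hypothetical sketch of that idea, not the repository's actual formulation.

```python
import torch
import torch.nn as nn

STOP, CONTINUE = 0, 1

def one_dropout_pass(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """A single stochastic forward pass with dropout kept active."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        return torch.softmax(model(x), dim=-1)

class MCDropoutEnv:
    """One episode per input: each CONTINUE costs `pass_cost`; stopping is
    rewarded only if the running confidence has reached `target_conf`."""

    def __init__(self, model, x, target_conf=0.95, max_passes=100, pass_cost=0.01):
        self.model, self.x = model, x
        self.target_conf = target_conf
        self.max_passes = max_passes
        self.pass_cost = pass_cost

    def reset(self):
        self.preds = []
        return self._state()

    def _state(self):
        # State: fraction of the pass budget used and the current
        # mean top-class probability across passes taken so far.
        if not self.preds:
            return torch.tensor([0.0, 0.0])
        conf = torch.stack(self.preds).mean(dim=0).max().item()
        return torch.tensor([len(self.preds) / self.max_passes, conf])

    def step(self, action):
        # Returns (state, reward, done).
        if action == STOP or len(self.preds) >= self.max_passes:
            conf = self._state()[1].item()
            reward = 1.0 if conf >= self.target_conf else -1.0
            return self._state(), reward, True
        self.preds.append(one_dropout_pass(self.model, self.x))
        return self._state(), -self.pass_cost, len(self.preds) >= self.max_passes
```
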
Acknowledgements
