
Integration and light repository #113

Open
russelljjarvis opened this issue Feb 15, 2019 · 7 comments

@russelljjarvis
Collaborator

Hey @JustasB, I need to integrate more closely with your work, as it puts the optimizer results in perspective.

I wonder if I can use the source code in cellmodel.py, from the neuroml-db repository, to regenerate your plot?
https://github.com/scrook/neuroml-db/blob/master/Import%20Scripts/model-importer/cellmodel.py

Also, I am starting to write optimizer outputs to *.csv files. What should be included in them?
https://github.com/russelljjarvis/neuronunit/blob/opt_cluster/neuronunit/optimization/optimization_management.py#L244-L247
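
For concreteness, here is a minimal sketch of one possible CSV layout: one row per optimized model, holding its parameter values and its score on each test. The field and column names below are placeholders, not what optimization_management.py currently writes.

```python
import pandas as pd

# Hypothetical results: one dict per optimized model. The parameter and test
# names are placeholders for illustration only.
results = [
    {"model_id": 0, "a": 0.02, "b": 0.20, "RheobaseTest": 0.91, "InputResistanceTest": 0.75},
    {"model_id": 1, "a": 0.03, "b": 0.25, "RheobaseTest": 0.88, "InputResistanceTest": 0.81},
]

# One row per model; columns are model id, parameters, and per-test scores.
df = pd.DataFrame(results)
df.to_csv("optimizer_outputs.csv", index=False)
```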

@russelljjarvis
Collaborator Author

russelljjarvis commented Mar 1, 2019

Hey @JustasB,

I have since done more work on this integration issue.

Here I evaluate the Druckmann tests on an optimized model. In the code, the model is downloaded from the Open Science Framework if it doesn't exist locally, so that part should be machine independent.

https://github.com/russelljjarvis/neuronunit/blob/opt_cluster/neuronunit/examples/PCA_JB.ipynb
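
The download-if-missing step is roughly the pattern sketched below; the OSF URL and local filename here are placeholders rather than the ones actually used in the notebook.

```python
import os
import urllib.request

# Placeholder values; the notebook uses its own OSF file id and local path.
MODEL_URL = "https://osf.io/download/<file_id>/"
LOCAL_PATH = "optimized_model.p"

# Only fetch the optimized model from the Open Science Framework when no
# local copy exists, so the notebook runs the same way on any machine.
if not os.path.exists(LOCAL_PATH):
    urllib.request.urlretrieve(MODEL_URL, LOCAL_PATH)
```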

I am having trouble getting NeuroML DB to work due to package management conflicts.

I wonder if there is a way to structure the data frame appropriately without building NeuroMLDB?
https://github.com/russelljjarvis/neuronunit/blob/opt_cluster/neuronunit/examples/Clustering_GetEphyzFromDBRJ.ipynb

@JustasB
Collaborator

JustasB commented Mar 1, 2019

@russelljjarvis That looks great.

To skip the need to connect to the NMLDB, read this CSV file into a dataframe.

The columns are just the Druckmann test classes' names without "Test" (you have them listed in your "tests" variable).

You can remove all the existing rows, or keep them if you want to compare against the other NML-DB models.

The values in the dataframe are the raw values from the Druckmann tests, NA-filled and log-transformed. You can NA-fill and transform the values the Druckmann tests produce for your model by performing the steps described in three notebook cells of this file. The code of the first cell starts with: "df['AP1Amplitude'].fillna(0, inplace=True)"
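
A minimal sketch of those steps, assuming the CSV has been saved locally; the filename and the exact transform are placeholders, so follow the referenced notebook cells for the real details.

```python
import numpy as np
import pandas as pd

# Assumed local copy of the NML-DB measurement CSV; the filename is a placeholder.
df = pd.read_csv("druckmann_measurements.csv")

# Optionally drop the existing NML-DB rows before appending your own model's values:
# df = df.iloc[0:0]

# NA-fill and transform (AP1Amplitude shown; the same pattern applies to the
# other columns). The log1p transform here is an assumption; use whatever
# transform the referenced notebook cells actually apply.
df["AP1Amplitude"].fillna(0, inplace=True)
df["AP1Amplitude"] = np.log1p(df["AP1Amplitude"])
```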

Let me know if this helps.

russelljjarvis changed the title from "Integration" to "Integration and light repository" on Apr 11, 2019
@russelljjarvis
Collaborator Author

I wonder if it's possible to have a branch where the data and the source code are separated?

I think a lot of your source code would be very useful, and I should integrate it, but the 13 GB download is a barrier. The alternative of forking the repository on GitHub and manually deleting the data files cannot really work, as there are too many data files to delete by hand.

@JustasB
Collaborator

JustasB commented Apr 11, 2019

I can see the reasons why you would want to split the repo. I can also see reasons for keeping it as one. Splitting it would definitely be a non-trivial change.

I think you could avoid having to download the whole repo by using git sparse checkout. It lets you check out just the sub-folders you want.
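
For reference, a rough sketch of the pre-git-2.25 sparse-checkout recipe (core.sparseCheckout plus .git/info/sparse-checkout), wrapped in Python to match the other snippets here; the sub-folder and branch name are just examples.

```python
import subprocess

REPO = "https://github.com/scrook/neuroml-db.git"
# Example sub-folder; list whichever paths you actually need.
WANTED = "Import Scripts/model-importer/"

# Clone without populating the working tree, enable sparse checkout, and
# restrict the working tree to the wanted folder.
subprocess.run(["git", "clone", "--no-checkout", REPO, "neuroml-db"], check=True)
subprocess.run(["git", "config", "core.sparseCheckout", "true"], cwd="neuroml-db", check=True)
with open("neuroml-db/.git/info/sparse-checkout", "w") as f:
    f.write(WANTED + "\n")
subprocess.run(["git", "checkout", "master"], cwd="neuroml-db", check=True)
```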

Let me know if you think this will work.

@russelljjarvis
Collaborator Author

Thanks for the link to git sparse checkout. As a step toward integrating our pipelines more closely, I created this repository: https://github.com/russelljjarvis/NeuroML-DB-thinclient

I added files piecemeal to satisfy dependencies, instead of using git sparse checkout, as I had already committed a lot of time to the piecemeal approach.

I might need to do some manipulation to register the repository as a fork of neuroml-db, so that I can easily integrate any code updates you make there.

@JustasB
Collaborator

JustasB commented Apr 12, 2019

OK, sounds good.
