Frank/ollama module #1099

Open

frankhaugen wants to merge 11 commits into develop.

Conversation

frankhaugen

What does this PR do?

Added a new module project that uses the Ollama Docker image to provide local LLM capabilities. Out of the box it supports only CPU workloads, but CUDA-based GPU processing is not precluded: the environment variables that activate GPU support can be added when the host prerequisites are in place.
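
A minimal sketch, not part of this PR, of how GPU access could be granted later. WithCreateParameterModifier is an existing Testcontainers for .NET hook, and DeviceRequest is the Docker.DotNet model behind "docker run --gpus=all"; the image name, port, and the assumption that the NVIDIA container toolkit is installed on the host are all illustrative.

using System.Collections.Generic;
using Docker.DotNet.Models;
using DotNet.Testcontainers.Builders;

// Request all available NVIDIA GPUs for the container (assumes the NVIDIA
// container toolkit is installed on the Docker host).
var container = new ContainerBuilder()
    .WithImage("ollama/ollama")
    .WithPortBinding(11434, true)
    .WithCreateParameterModifier(parameters =>
    {
        parameters.HostConfig ??= new HostConfig();
        parameters.HostConfig.DeviceRequests = new List<DeviceRequest>
        {
            new DeviceRequest
            {
                Driver = "nvidia",
                Count = -1, // -1 means "all available GPUs"
                Capabilities = new List<IList<string>> { new List<string> { "gpu" } },
            },
        };
    })
    .Build();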

Why is it important?

This module can provide a lot of value to anyone incorporating Ollama models into their projects.

Related issues

Link related issues below. Insert the issue link or reference after the word "Closes" if merging this should automatically close it.

How to test this PR

Run the test added
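
For orientation, a hypothetical xUnit test shape, assuming the OllamaBuilder and OllamaContainer surface this PR adds; the StartOllamaAsync call reflects the diff under review below, everything else is illustrative.

using System.Threading.Tasks;
using Testcontainers.Ollama;
using Xunit;

public sealed class OllamaContainerTests : IAsyncLifetime
{
    // Builds the container with the module's defaults (image, model, ports).
    private readonly OllamaContainer _ollama = new OllamaBuilder().Build();

    public Task InitializeAsync() => _ollama.StartAsync();

    public Task DisposeAsync() => _ollama.DisposeAsync().AsTask();

    [Fact]
    public async Task StartsAndRunsTheConfiguredModel()
    {
        // Pulls and runs the configured model inside the container; this is
        // the member discussed in the review thread below.
        await _ollama.StartOllamaAsync();
    }
}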

Follow-ups

  • Writing more docs to guide users toward GPU support may be worthwhile in the future, but there is no need right now.


netlify bot commented Jan 25, 2024

Deploy Preview for testcontainers-dotnet ready!

🔨 Latest commit: 6a0f096
🔍 Latest deploy log: https://app.netlify.com/sites/testcontainers-dotnet/deploys/65d9b5a671986e0008a6dd03
😎 Deploy Preview: https://deploy-preview-1099--testcontainers-dotnet.netlify.app

@HofmeisterAn (Collaborator) left a comment


Thanks for the PR. I had a quick look and added some minor suggestions and improvements.

Comment on lines 22 to 33
public async Task StartOllamaAsync()
{
    if (State != TestcontainersStates.Created && State != TestcontainersStates.Running)
    {
        throw new InvalidOperationException("Cannot start a container that has not been created.");
    }

    Task.WaitAll(ExecAsync(new List<string>()
    {
        "ollama", "run", ModelName,
    }));

    await Task.CompletedTask;
}
@HofmeisterAn (Collaborator) commented:

Does it make sense to pass the model name as an argument to the member? Run(string modelName, CancellationToken ct = default)? In addition, please do not block the thread; simply return ExecAsync(["ollama", "run", modelName], ct);.
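
Spelled out, the suggestion amounts to something like the following minimal sketch (the Run shape is taken from the review comment itself, not from the merged code):

// Non-blocking variant: return the exec task instead of waiting on it.
public Task<ExecResult> Run(string modelName, CancellationToken ct = default)
{
    return ExecAsync(new List<string> { "ollama", "run", modelName }, ct);
}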

@frankhaugen (Author) replied:

Agreed, and resolved regarding the return. I added two methods: one taking only the CancellationToken and one taking the model name and the CancellationToken. Have a look and see if you agree.
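
The parameterless overload the author describes would presumably just delegate to the named one from the sketch above, along these lines (again illustrative, not the merged code):

// Uses the model name the container was configured with.
public Task<ExecResult> Run(CancellationToken ct = default)
{
    return Run(ModelName, ct);
}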

Resolved review threads (outdated):
  • src/Testcontainers.Ollama/OllamaConfiguration.cs
  • src/Testcontainers.Ollama/OllamaContainer.cs
  • src/Testcontainers.Ollama/OllamaConfiguration.cs
  • tests/Testcontainers.Ollama.Tests/OllamaContainerTests.cs
New commit: This commit updates the Ollama configuration to allow more customization options and removes unnecessary test helpers. The OllamaConfiguration class was refactored to provide more configurable parameters such as the VolumePath and VolumeName. Additionally, the TestOutputHelperExtensions and TestOutputLogger classes were deleted as they were not providing any significant value.
@frankhaugen (Author) commented:

@HofmeisterAn I'm updating the branch with my current state, as the tests were green locally, but I think it needs another pass. No rush; I'm busy with my day job, so no need for you to hurry 😄

@eddumelendez (Member) left a comment

Thanks for your contribution. Testcontainers for Java and Go have recently released a new Ollama module, which can detect and enable the GPU and can commit the container's state to create a new image. It would be nice to align the .NET implementation with the others. See the Java implementation.
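
For reference, the Java module exposes commitToImage(String imageName). A hypothetical .NET counterpart, sketched directly against Docker.DotNet; CommitToImageAsync is an illustrative name, not an existing Testcontainers API.

using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

// Illustrative only: commits the container's current state (e.g. with models
// already pulled) to a new local image that later tests can reuse.
public static async Task CommitToImageAsync(IDockerClient client, string containerId, string imageName)
{
    await client.Images.CommitContainerChangesAsync(new CommitContainerChangesParameters
    {
        ContainerID = containerId,
        RepositoryName = imageName,
        Tag = "latest",
    });
}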

/// <summary>
/// The default model name for the OllamaBuilder.
/// </summary>
public const string DefaultModelName = OllamaModels.Llama2;
@eddumelendez (Member) commented:

Models are just like images and should be provided as a string rather than as a specific type. That will also reduce the number of PRs needed just to keep the model list up to date.
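
A sketch of the string-based configuration being suggested; WithModel is an illustrative builder-method name, and the OllamaConfiguration constructor parameter is assumed from this PR's shape, while Merge is the standard Testcontainers module-builder pattern.

// Any tag the Ollama registry knows works here, e.g. "llama2" or "mistral:7b",
// so the library does not have to maintain a model list.
public OllamaBuilder WithModel(string modelName)
{
    return Merge(DockerResourceConfiguration, new OllamaConfiguration(modelName: modelName));
}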

/// <summary>
/// Default volume path.
/// </summary>
public const string DefaultVolumePath = "/root/.ollama";
@eddumelendez (Member) commented:

Volume mounting should be optional rather than the default, at least for now. We can align with the Java and Go implementations.
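
Making the mount opt-in could look like the following sketch; WithModelCacheVolume is an illustrative name, while WithVolumeMount is the existing Testcontainers for .NET builder method.

// Opt-in only: no volume by default; callers who want to cache pulled models
// across runs ask for it explicitly. "/root/.ollama" is where the ollama
// image stores pulled models.
public OllamaBuilder WithModelCacheVolume(string volumeName)
{
    return WithVolumeMount(volumeName, "/root/.ollama");
}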

On Mar 11, 2024, @HofmeisterAn added the labels: enhancement (New feature or request), module (An official Testcontainers module).
Development

Successfully merging this pull request may close: [Enhancement]: Ollama module