
feat(ai-server): switch to custom runtime #15184

Merged 2 commits on May 16, 2024
Conversation

@y3rsh (Collaborator) commented May 14, 2024

Overview

  • Switch to a custom runtime. Because of the size of the packages we need and of our own data, a custom runtime is required; it also lets us stream over HTTP.
  • Using llama-index inside the Lambda will likely not be performant, but it gets us started until we can extract it into its own service.

Test: deployed, and `make live-test` passing in

  • crt
  • sandbox
  • dev
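For context on what "custom runtime" means here: instead of a managed language runtime, the function ships its own bootstrap that polls the documented AWS Lambda Runtime API for events and posts results back. The sketch below is a minimal illustration of that loop; the endpoint paths come from the public Runtime API, but the `handle` body and helper names are hypothetical and not taken from this PR (the real handler would call into llama-index).

```python
import json
import os
import urllib.request

RUNTIME_API_VERSION = "2018-06-01"  # version segment from the AWS Lambda Runtime API


def invocation_next_url(api_base: str) -> str:
    """URL the bootstrap long-polls for the next invocation event."""
    return f"http://{api_base}/{RUNTIME_API_VERSION}/runtime/invocation/next"


def invocation_response_url(api_base: str, request_id: str) -> str:
    """URL the bootstrap POSTs the handler result to."""
    return f"http://{api_base}/{RUNTIME_API_VERSION}/runtime/invocation/{request_id}/response"


def handle(event: dict) -> dict:
    """Placeholder handler; the real one would run the llama-index query."""
    return {"statusCode": 200, "body": json.dumps({"echo": event})}


def run_loop() -> None:
    """Core event loop of a custom-runtime bootstrap (sketch, not production code)."""
    api_base = os.environ["AWS_LAMBDA_RUNTIME_API"]
    while True:
        # 1. Block until Lambda hands us the next event.
        with urllib.request.urlopen(invocation_next_url(api_base)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())
        # 2. Run the handler and report the result for this request id.
        body = json.dumps(handle(event)).encode()
        req = urllib.request.Request(
            invocation_response_url(api_base, request_id), data=body, method="POST"
        )
        urllib.request.urlopen(req).close()
```

Owning the bootstrap like this is what makes room for oversized dependency bundles and for response handling (such as streaming) that the stock runtimes do not expose.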

@y3rsh y3rsh self-assigned this May 14, 2024
@y3rsh y3rsh requested review from a team as code owners May 14, 2024 20:58
@Elyorcv (Contributor) left a comment

Looking good

Review thread on opentrons-ai-server/README.md (resolved)
@y3rsh y3rsh requested a review from Elyorcv May 16, 2024 18:20
@y3rsh y3rsh merged commit b1eac14 into edge May 16, 2024
6 checks passed
Carlos-fernandez pushed a commit that referenced this pull request May 20, 2024
Carlos-fernandez pushed a commit that referenced this pull request Jun 3, 2024
2 participants