(langchain): Instrument Bedrock (non-streamed) #1966

Open
jsumners-nr opened this issue Jan 24, 2024 · 3 comments

@jsumners-nr
Contributor

Ideally, any LLM the LangChain library is capable of interacting with should be covered by our existing instrumentation for that LLM. This is the case with our OpenAI instrumentation. However, our Bedrock instrumentation does not apply because LangChain uses a bespoke AWS client. As a result, we need to instrument LangChain's Bedrock implementation directly.

There are two ways to use Bedrock LLMs directly with LangChain, described at https://js.langchain.com/docs/integrations/llms/bedrock and https://js.langchain.com/docs/integrations/chat/bedrock. A resulting instrumented app could look like:

'use strict'

// https://js.langchain.com/docs/integrations/chat/bedrock

require('dotenv').config()
const newrelic = require('newrelic')
// const { BedrockChat: Bedrock } = require('@langchain/community/chat_models/bedrock')
const { Bedrock } = require('@langchain/community/llms/bedrock')

main().then(() => {}).catch(error => {
  console.error(error)
})

async function main () {
  // Run the LLM call inside a background transaction so the agent records it.
  // startBackgroundTransaction returns the handler's return value, so awaiting
  // it keeps main() alive until the transaction's work finishes.
  await newrelic.startBackgroundTransaction('bedrock-tx', async () => {
    try {
      const chatModel = new Bedrock({
        region: 'us-east-1', // required
        model: 'anthropic.claude-v2' // required
      })
      const res = await chatModel.invoke('what is the answer to life, the universe, and everything?')
      console.log(res)
    } catch (error) {
      console.error(error)
    }

    // Flush any pending data before the process exits.
    newrelic.shutdown({ collectPendingData: true }, () => {})
  })
}

Note that the Bedrock import can be interchanged with the commented-out BedrockChat import, and the two behave slightly differently. For example, the above script will fail unless the llms/bedrock import is swapped for the chat_models/bedrock import: the llms/bedrock invoke does not coerce the given prompt into the \n\nHuman: <prompt> format required by the Claude model, while chat_models/bedrock does. It is unclear whether this detail has any bearing on our instrumentation, but it is noted here for clarity; a rough sketch of the difference follows.
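
The sketch below is illustration only: the \n\nHuman:/\n\nAssistant: framing is an assumption based on the Claude text-completion prompt format, which the chat_models/bedrock import would apply on the caller's behalf.

'use strict'

// Sketch only: with the llms/bedrock import the caller supplies the
// Human/Assistant framing itself; the chat_models/bedrock import adds it.
const { Bedrock } = require('@langchain/community/llms/bedrock')

async function askClaude (prompt) {
  const model = new Bedrock({
    region: 'us-east-1',
    model: 'anthropic.claude-v2'
  })
  // Assumed Claude text-completion format; adjust per the model's docs.
  return model.invoke(`\n\nHuman: ${prompt}\n\nAssistant:`)
}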

Both imports utilize BaseChatModel.prototype.invoke as their means of interacting with the LLM. We should be able to instrument this method to cover our needs; otherwise, we may need to instrument both imports separately.

Spans should be named Llm/agent/Langchain/invoke.
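
A possible shape for that wrapper, sketched with the agent's public startSegment API rather than the internal shim machinery the real instrumentation would use; the @langchain/core import path and the manual monkey patch are assumptions for illustration:

'use strict'

// Illustration only: the real instrumentation would use the agent's internal
// shim API rather than a manual monkey patch like this.
const newrelic = require('newrelic')
const { BaseChatModel } = require('@langchain/core/language_models/chat_models')

const originalInvoke = BaseChatModel.prototype.invoke
BaseChatModel.prototype.invoke = function wrappedInvoke (...args) {
  // Records the call as a segment named Llm/agent/Langchain/invoke on the
  // active transaction; the segment ends when the returned promise settles.
  return newrelic.startSegment('Llm/agent/Langchain/invoke', true, () => {
    return originalInvoke.apply(this, args)
  })
}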

@newrelic-node-agent-team added this to Triage Needed: Unprioritized Features in Node.js Engineering Board, Jan 24, 2024

@bizob2828 moved this from Triage Needed: Unprioritized Features to To do: Features here are prioritized in Node.js Engineering Board, Jan 29, 2024
@kmudduluru moved this from To do: Features here are prioritized to Triage Needed: Unprioritized Features in Node.js Engineering Board, Feb 26, 2024
@bizob2828
Member

We're going to track usage first before replicating our instrumentation in LangChain for Bedrock.

@bizob2828
Member

I have posed a question to see if we can contribute to LangChain.js directly instead of adding similar instrumentation for Bedrock in LangChain.
