Ideally, any LLMs the LangChain library is capable of interacting with will make use of our existing instrumentation work for those LLMs. This is the case with our OpenAI instrumentation. However, our Bedrock instrumentation is not applicable because LangChain uses a bespoke AWS client. As a result, we need to instrument LangChain's Bedrock implementation.
```js
'use strict'
// https://js.langchain.com/docs/integrations/chat/bedrock

require('dotenv').config()
const newrelic = require('newrelic')

// const { BedrockChat: Bedrock } = require('@langchain/community/chat_models/bedrock')
const { Bedrock } = require('@langchain/community/llms/bedrock')

main().then(() => {}).catch(error => {
  console.error(error)
})

async function main() {
  newrelic.startBackgroundTransaction('bedrock-tx', async () => {
    try {
      const chatModel = new Bedrock({
        region: 'us-east-1', // required
        model: 'anthropic.claude-v2' // required
      })
      const res = await chatModel.invoke(
        'what is the answer to life, the universe, and everything?'
      )
      console.log(res)
    } catch (error) {
      console.error(error)
    }
    newrelic.shutdown({ collectPendingData: true }, () => {})
  })
}
```
Note that the two Bedrock imports are interchangeable, but each behaves slightly differently. For example, the above script will fail unless the `llms/bedrock` import is swapped for the `chat_models/bedrock` import. This is because the `llms/bedrock` `invoke` will not coerce the given prompt into the `\n\nHuman: <prompt>` format required by the Claude model, but the `chat_models/bedrock` one will. It is unclear whether this detail has any bearing on our instrumentation, but it is noted here for clarity.
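For illustration, the Claude v2 text-completion API frames prompts as `\n\nHuman: ...\n\nAssistant:` turns. A hypothetical helper sketching the kind of coercion the `chat_models/bedrock` import performs (the helper name is assumed; this is not LangChain's actual code):

```javascript
// Hypothetical sketch of coercing a bare prompt into the Claude v2
// "\n\nHuman: ... \n\nAssistant:" frame. Not LangChain's implementation.
function toClaudePrompt(prompt) {
  return `\n\nHuman: ${prompt}\n\nAssistant:`
}

console.log(JSON.stringify(toClaudePrompt('what is the answer?')))
// "\n\nHuman: what is the answer?\n\nAssistant:"
```

A bare prompt sent without this framing is what causes the `llms/bedrock` script above to fail against the Claude model.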
Both imports utilize `BaseChatModel.prototype.invoke` as their means of interacting with the LLM. We should be able to instrument this method to cover our needs. Otherwise, we may need to instrument both of:

- `@langchain/community/llms/bedrock`
- `@langchain/community/chat_models/bedrock`
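If we patch the shared method, a minimal sketch of wrapping `BaseChatModel.prototype.invoke` could look like the following. Here `recordSegment` stands in for the agent's real recorder (e.g. `newrelic.startSegment`), and `FakeChatModel` is a stand-in for LangChain's class; only the method name and the proposed span name come from this issue, everything else is assumed:

```javascript
// Sketch: wrap a model prototype's invoke so every call is recorded as a
// named segment. `recordSegment(name, run)` is a stand-in for the agent's
// real recording API.
function instrumentInvoke(proto, recordSegment) {
  const original = proto.invoke
  proto.invoke = async function wrappedInvoke(...args) {
    // Name follows the span-naming convention proposed in this issue.
    return recordSegment('Llm/agent/Langchain/invoke', () =>
      original.apply(this, args)
    )
  }
}

// Stand-in for LangChain's BaseChatModel, to show the wrapper in action.
class FakeChatModel {
  async invoke(prompt) {
    return `echo: ${prompt}`
  }
}

const seen = []
instrumentInvoke(FakeChatModel.prototype, (name, run) => {
  seen.push(name) // a real agent would start/end a timed segment here
  return run()
})

new FakeChatModel().invoke('hello').then((res) => {
  console.log(res) // echo: hello
  console.log(seen[0]) // Llm/agent/Langchain/invoke
})
```

Because both imports funnel through the same prototype method, one wrapper of this shape would cover both; otherwise a wrapper per import would be needed.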
There are two methods for directly using LLMs with LangChain. LangChain's direct LLM usage for Bedrock is described at https://js.langchain.com/docs/integrations/llms/bedrock and https://js.langchain.com/docs/integrations/chat/bedrock; the instrumented app above follows those docs.
Spans should be named `Llm/agent/Langchain/invoke`.