
Achieve seamless connection between program code and LLM prompt #653

Closed
klb3713 opened this issue May 8, 2024 · 3 comments


klb3713 commented May 8, 2024

Is your feature request related to a problem? Please describe.
My feature request is not related to an existing problem.

Describe the solution you'd like
The current capability is one-way: it formats LLM output into program objects. The upgrade I would like to suggest is two-way. For example, you could execute an LLM call as a function, pass a program object in as a parameter, have it automatically converted into a prompt internally, execute the LLM call, and then format the returned result back into a program object.
This feature would enable seamless interleaving of program code and LLM prompts. That would be great!
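A minimal sketch of the requested two-way flow. The `fake_llm` stub, the `llm_call` wrapper, and the prompt format are illustrative assumptions standing in for a real LLM client, not an existing API:

```python
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

def fake_llm(prompt: str) -> str:
    # Stub standing in for a real LLM call; returns valid JSON for the demo.
    return '{"name": "Sam", "age": 34}'

def llm_call(obj: BaseModel, instruction: str) -> BaseModel:
    # Program object -> prompt: serialize the object into the prompt text.
    prompt = f"{instruction}\nInput object: {obj.model_dump_json()}"
    # LLM reply -> program object: validate the response back into the same type.
    return type(obj).model_validate_json(fake_llm(prompt))

person = Person(name="Sam", age=33)
updated = llm_call(person, "Increment the age by one.")
print(repr(updated))  # Person(name='Sam', age=34)
```

Both directions happen inside `llm_call`, so the caller only ever sees typed objects.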

Describe alternatives you've considered

Additional context


AriMKatz commented May 14, 2024

Can't you do manual unpacking pretty easily?
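For reference, manual unpacking along those lines might look like the following (pydantic v2 method names; the JSON reply is a hand-written placeholder for an LLM response):

```python
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

person = Person(name="Sam", age=33)

# Object -> prompt: serialize the fields by hand into the prompt string.
prompt = f"Here is a person as JSON: {person.model_dump_json()}"

# Reply -> object: parse a (hypothetical) JSON reply back by hand.
reply = '{"name": "Sam", "age": 33}'
round_tripped = Person.model_validate_json(reply)
print(repr(round_tripped))  # Person(name='Sam', age=33)
```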


Mr-Ruben commented May 16, 2024

If this is not what you are after, could you show us some code?

from pydantic import BaseModel

class Output(BaseModel):
    name: str
    age: int

Let's imagine we just have the object.

# the object
person = Output(age=33, name="Sam")

# call only with the object
r = call_llm_with_class(
    prompt=str(person),
    response_model=type(person),
)

print(r.output)
# {'name': 'Sam', 'age': 33}
# But that is similar to
type(person)(**dict(r.output))

# which internally expands step by step:
# type(person)(**dict(r.output))
# Output(**dict(r.output))
# Output(**dict({'name': 'Sam', 'age': 33}))
# Output(**{'name': 'Sam', 'age': 33})
# Output(name='Sam', age=33)

# Which produces
Output(name='Sam', age=33)

This is actual output, not pseudo-code.

call_llm_with_class is just a wrapper
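Since `call_llm_with_class` is hypothetical, here is one way such a wrapper could be sketched, with a stubbed completion function (`stub_llm`) in place of a real LLM client:

```python
import json
from typing import Type, TypeVar
from pydantic import BaseModel

T = TypeVar("T", bound=BaseModel)

class Output(BaseModel):
    name: str
    age: int

def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns JSON matching the schema.
    return json.dumps({"name": "Sam", "age": 33})

def call_llm_with_class(prompt: str, response_model: Type[T]) -> T:
    # Send the prompt, then validate the raw reply into the requested model.
    raw = stub_llm(prompt)
    return response_model.model_validate_json(raw)

r = call_llm_with_class(prompt=str(Output(name="Sam", age=33)),
                        response_model=Output)
print(repr(r))  # Output(name='Sam', age=33)
```

Here the wrapper returns the validated model directly rather than an `.output` attribute; either shape works once the reply is parsed into the response model.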

ivanleomk (Collaborator) commented

Closing this issue for now due to a lack of activity.
