I have tried asking for help in the community on discord or discussions and have not received a response.
I have tried searching the documentation and have not found an answer.
What Model are you using?
gpt-3.5-turbo
gpt-4-turbo
gpt-4
Other (please specify)
Describe the bug
I have a relatively complex BaseModel, with nested BaseModels. When asking for a generation with that model, it works just fine, but as soon as I want to retrieve multiple within a single call it fails to validate.
I can't share the exact model I used, so I created a working example using the example from https://github.com/wandb/edu/blob/main/llm-structured-extraction/2.tips.ipynb.
This only seems to become an issue once we ask for an Iterable of BaseModels nested three levels deep.
In the example I've created, none of the models can generate age_range at all. If we remove the top level (i.e. use Character directly instead of CharacterAndNickname), they add the age_range just fine.
In my specific case, the model doesn't appear to be followed at all once I ask for an Iterable.
I'm not entirely sure whether this is an actual bug or just a capability limitation, but the fact that GPT-4 does no better here than GPT-3.5 makes me think it might be a bug.
To Reproduce
```python
# Assumes an instructor-patched OpenAI client, e.g.:
#   import instructor
#   from openai import OpenAI
#   client = instructor.from_openai(OpenAI())
from typing import Iterable, List, Optional

from pydantic import BaseModel, Field


class Range(BaseModel):
    minimum: Optional[int] = None
    maximum: Optional[int] = None


class Character(BaseModel):
    id: int
    name: str
    friends_array: List[int] = Field(
        description="Relationships to their friends using the id"
    )
    age_range: Range = Field(
        description=(
            "The range of ages the character has over the course of the"
            " series."
        )
    )


class CharacterAndNickname(BaseModel):
    nickname: str
    character: Character


resp = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "5 kids from Harry Potter. Make sure to get all numbers"
                " right, especially their age ranges!"
            ),
        }
    ],
    response_model=Iterable[CharacterAndNickname],
)

for character in resp:
    print(character)
```
In this case we get the following errors:
```
ValidationError: 5 validation errors for IterableCharacterAndNickname
tasks.0.character.age_range
  Input should be an object [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type
tasks.1.character.age_range
  Input should be an object [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type
tasks.2.character.age_range
  Input should be an object [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type
tasks.3.character.age_range
  Input should be an object [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type
tasks.4.character.age_range
  Input should be an object [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type
```
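The validation failure itself can be reproduced locally without any API call. Here is a minimal sketch (assuming Pydantic v2) that validates the payload shape the LLM appears to be returning, with age_range set to null:

```python
from typing import List, Optional

from pydantic import BaseModel, Field, ValidationError


class Range(BaseModel):
    minimum: Optional[int] = None
    maximum: Optional[int] = None


class Character(BaseModel):
    id: int
    name: str
    friends_array: List[int] = Field(
        description="Relationships to their friends using the id"
    )
    age_range: Range = Field(
        description="The range of ages over the course of the series."
    )


class CharacterAndNickname(BaseModel):
    nickname: str
    character: Character


# The LLM returns age_range as null; the required Range field rejects it
# with the same model_type error shown in the traceback above.
try:
    CharacterAndNickname.model_validate(
        {
            "nickname": "Harry",
            "character": {
                "id": 1,
                "name": "Harry Potter",
                "friends_array": [2, 3],
                "age_range": None,
            },
        }
    )
except ValidationError as exc:
    err_type = exc.errors()[0]["type"]
    print(err_type)  # model_type
```

This suggests the generation side (the LLM simply not filling in the nested field once the extra level is added) rather than the validation side is where things go wrong.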
I can add more detail if needed, but it's interesting that all models (GPT-3.5, GPT-4, and the turbo variants) failed to generate multiple characters once I added the third nesting level.
Expected behavior
Multiple results generated regardless of complexity.
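Until the root cause is found, one possible workaround (my assumption, not a confirmed fix) is to declare age_range as Optional with a default, so a null from the model degrades gracefully instead of failing validation:

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class Range(BaseModel):
    minimum: Optional[int] = None
    maximum: Optional[int] = None


class Character(BaseModel):
    id: int
    name: str
    friends_array: List[int] = Field(
        description="Relationships to their friends using the id"
    )
    # Tolerate the model omitting age_range instead of raising ValidationError.
    age_range: Optional[Range] = None


class CharacterAndNickname(BaseModel):
    nickname: str
    character: Character


# The same payload that previously failed now validates, with age_range=None.
c = CharacterAndNickname.model_validate(
    {
        "nickname": "Harry",
        "character": {
            "id": 1,
            "name": "Harry Potter",
            "friends_array": [2, 3],
            "age_range": None,
        },
    }
)
print(c.character.age_range)  # None
```

The obvious downside is that downstream code then has to handle a missing age_range, so this only papers over the problem rather than fixing it.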