external help file: PSOpenAI-help.xml
Module Name: PSOpenAI
online version:
schema: 2.0.0

Request-TextCompletion

SYNOPSIS

Creates a completion for the provided prompt and parameters.

SYNTAX

Request-TextCompletion
    [[-Prompt] <String[]>]
    [-Suffix <String>]
    [-Model <String>]
    [-Temperature <Double>]
    [-TopP <Double>]
    [-NumberOfAnswers <UInt16>]
    [-Stream]
    [-StopSequence <String[]>]
    [-MaxTokens <Int32>]
    [-PresencePenalty <Double>]
    [-FrequencyPenalty <Double>]
    [-LogitBias <IDictionary>]
    [-User <String>]
    [-Echo <Boolean>]
    [-BestOf <UInt16>]
    [-AsBatch]
    [-CustomBatchId <String>]
    [-TimeoutSec <Int32>]
    [-MaxRetryCount <Int32>]
    [-ApiBase <Uri>]
    [-ApiKey <Object>]
    [-Organization <String>]
    [<CommonParameters>]

DESCRIPTION

Given a prompt, the AI model will return one or more predicted completions.
https://platform.openai.com/docs/guides/completion/text-completion

EXAMPLES

Example 1: Generate sentences that continue the prompt.

Request-TextCompletion -Prompt 'This is a hamburger store.' | select Answer
We serves
-classic hamburgers
-tofu burgers

PARAMETERS

-Prompt

(Required) The prompt(s) to generate completions for.

Type: String[]
Aliases: Message
Required: False
Position: 1
Accept pipeline input: True (ByValue)
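
Because -Prompt accepts pipeline input by value, one or more prompt strings can be piped into the cmdlet. A minimal sketch (the prompts are illustrative, and each piped string is expected to yield its own completion):

'The capital of France is', 'The capital of Japan is' |
    Request-TextCompletion -MaxTokens 16 |
    select Answer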

-Suffix

The suffix that comes after a completion of inserted text.

Type: String
Required: False
Position: Named

-Model

The name of the model to use. The default value is gpt-3.5-turbo-instruct.

Type: String
Required: False
Position: Named
Default value: gpt-3.5-turbo-instruct

-Temperature

What sampling temperature to use, between 0 and 2.
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Type: Double
Required: False
Position: Named
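
As a rough illustration, a low temperature keeps the continuation focused while a high temperature makes it more varied. The prompt and values below are arbitrary:

# More focused, nearly deterministic continuation
Request-TextCompletion -Prompt 'Once upon a time' -Temperature 0.2

# More random, creative continuation
Request-TextCompletion -Prompt 'Once upon a time' -Temperature 1.5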

-TopP

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Type: Double
Aliases: top_p
Required: False
Position: Named

-NumberOfAnswers

How many texts to generate for each prompt. The default value is 1.

Type: UInt16
Aliases: n
Required: False
Position: Named
Default value: 1
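
For example, to request three alternative completions for a single prompt (how the answers are grouped in the output may vary, so this sketch simply expands the Answer property):

Request-TextCompletion -Prompt 'A good name for a coffee shop is' -NumberOfAnswers 3 |
    select -ExpandProperty Answer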

-Stream

Whether to stream back partial progress.

Type: System.Management.Automation.SwitchParameter
Required: False
Position: Named
Default value: False
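
A minimal streaming sketch, assuming the streamed text fragments are emitted to the pipeline as they arrive:

Request-TextCompletion -Prompt 'Write a short poem about autumn.' -Stream |
    ForEach-Object { Write-Host $_ -NoNewline }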

-StopSequence

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Type: String[]
Aliases: stop
Required: False
Position: Named
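
For example, to stop generation at the first line break or at the word 'END' (both sequences are illustrative):

Request-TextCompletion -Prompt 'List one hamburger topping:' -StopSequence "`n", 'END'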

-MaxTokens

The maximum number of tokens allowed for the generated answer.
The maximum value depends on the model.

Type: Int32
Aliases: max_tokens
Required: False
Position: Named
Default value: 2048
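
For instance, to cap the length of the completion (the value is arbitrary):

Request-TextCompletion -Prompt 'Explain what a token is.' -MaxTokens 64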

-PresencePenalty

Number between -2.0 and 2.0.
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

Type: Double
Aliases: presence_penalty
Required: False
Position: Named

-FrequencyPenalty

Number between -2.0 and 2.0.
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

Type: Double
Aliases: frequency_penalty
Required: False
Position: Named
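
A hedged sketch that combines both penalties to discourage repetition and encourage new topics (the values are arbitrary):

Request-TextCompletion -Prompt 'Brainstorm slogans for a hamburger store.' `
    -PresencePenalty 0.6 -FrequencyPenalty 0.5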

-LogitBias

Modify the likelihood of specified tokens appearing in the completion.
Accepts a map of tokens to an associated bias value from -100 to 100. You can use ConvertTo-Token to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
For example, you can pass a hashtable like this: @{23182 = 20; 88847 = -100}
ID 23182 maps to "apple" and ID 88847 maps to "banana". Thus, this example increases the likelihood of the word "apple" being included in the response from the AI and greatly reduces the likelihood of the word "banana" being included.

Type: IDictionary
Aliases: logit_bias
Required: False
Position: Named
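
Using the token IDs from the example above (ConvertTo-Token can be used to look up the IDs for your own text; its exact usage is not shown here):

# Favor token 23182 ("apple") and effectively ban token 88847 ("banana")
Request-TextCompletion -Prompt 'My favorite fruit is' -LogitBias @{23182 = 20; 88847 = -100}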

-User

A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

Type: String
Required: False
Position: Named

-Echo

Echo back the prompt in addition to the completion. The default value is $false.

Type: Boolean
Required: False
Position: Named
Default value: $false

-BestOf

Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token).

Type: UInt16
Aliases: best_of
Required: False
Position: Named

-AsBatch

If this is specified, the cmdlet returns an object for batch input instead of performing an API request to OpenAI.
It is useful with the Start-Batch cmdlet.

Type: SwitchParameter
Required: False
Position: Named
Default value: False

-CustomBatchId

A unique ID that will be used to match batch outputs to their inputs. It must be unique for each request in a batch.
This parameter is valid only when the -AsBatch switch is used. Otherwise, it is simply ignored.

Type: String
Required: False
Position: Named
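
A hedged sketch of preparing batch input objects and submitting them with Start-Batch (whether Start-Batch accepts these objects directly from the pipeline is an assumption here):

# Build batch input objects locally; no API request is made yet
$batchInputs = 1..2 | ForEach-Object {
    Request-TextCompletion -Prompt "Prompt $_" -AsBatch -CustomBatchId "request-$_"
}

# Submit the collected inputs as one batch job
$batchInputs | Start-Batch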

-TimeoutSec

Specifies how long the request can be pending before it times out. The default value is 0 (infinite).

Type: Int32
Required: False
Position: Named
Default value: 0

-MaxRetryCount

Number between 0 and 100.
Specifies the maximum number of retries if the request fails.
The default value is 0 (No retry).
Note: Retries will only be performed if the request fails with a 429 (Rate limit reached) or 5xx (Server side errors) error. Other errors (e.g., authentication failure) will not be retried.

Type: Int32
Required: False
Position: Named
Default value: 0
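
For example, to allow up to 30 seconds per attempt and retry up to 3 times on 429 or 5xx errors (the values are arbitrary):

Request-TextCompletion -Prompt 'Hello.' -TimeoutSec 30 -MaxRetryCount 3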

-ApiBase

Specifies an API endpoint URL such as https://your-api-endpoint.test/v1
If not specified, https://api.openai.com/v1 is used.

Type: System.Uri
Required: False
Position: Named
Default value: https://api.openai.com/v1
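
For example, to direct requests to a compatible self-hosted or proxy endpoint (the URL is the placeholder from the description above):

Request-TextCompletion -Prompt 'Hello.' -ApiBase 'https://your-api-endpoint.test/v1'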

-ApiKey

Specifies API key for authentication.
The value should be a [string] or a [securestring].
If not specified, it will try to use $global:OPENAI_API_KEY or $env:OPENAI_API_KEY.

Type: Object
Required: False
Position: Named
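
A brief sketch of two common ways to supply the key, either through the environment variable or as an explicit [securestring] (the key value is a placeholder):

# Option 1: rely on the environment variable
$env:OPENAI_API_KEY = '<your API key>'
Request-TextCompletion -Prompt 'Hello.'

# Option 2: pass a SecureString explicitly
$secureKey = Read-Host -Prompt 'Enter your OpenAI API key' -AsSecureString
Request-TextCompletion -Prompt 'Hello.' -ApiKey $secureKey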

-Organization

Specifies the Organization ID used for API requests.
If not specified, it will try to use $global:OPENAI_ORGANIZATION or $env:OPENAI_ORGANIZATION.

Type: String
Aliases: OrgId
Required: False
Position: Named

INPUTS

OUTPUTS

[pscustomobject]

NOTES

RELATED LINKS

https://platform.openai.com/docs/guides/completion/text-completion

https://platform.openai.com/docs/api-reference/completions/create