Releases: sigoden/aichat

v0.17.0

13 May 22:43
154c1e0

Breaking Changes

  • streaming is always used unless --no-stream is set explicitly (#415)
  • vertexai config changed: api_base is replaced by project_id/location
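
A hedged sketch of the migrated vertexai client config; the project and location values are placeholders, not defaults:

```yaml
clients:
  - type: vertexai
    # api_base: ...               # removed in v0.17.0
    project_id: my-gcp-project    # placeholder
    location: us-central1         # placeholder
```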

Self-Hosted Server

AIChat comes with a built-in lightweight web server:

  • Provides access to all LLMs through an OpenAI-format API
  • Hosts the LLM playground/arena web applications
$ aichat --serve
Chat Completions API: http://127.0.0.1:8000/v1/chat/completions
LLM Playground:       http://127.0.0.1:8000/playground
LLM ARENA:            http://127.0.0.1:8000/arena
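
Any OpenAI-compatible client should be able to talk to the chat completions endpoint. A minimal sketch using only the Python standard library; the model name is a placeholder, and the actual request is commented out because it assumes a locally running `aichat --serve`:

```python
import json
import urllib.request

# Placeholder model name; use any model your aichat config provides.
payload = {
    "model": "openai:gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}

# Build an OpenAI-format request against the local aichat server.
req = urllib.request.Request(
    "http://127.0.0.1:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment with a running server:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```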

New Clients

bedrock, vertex-claude, cloudflare, groq, perplexity, replicate, deepseek, zhipuai, anyscale, deepinfra, fireworks, openrouter, octoai, together

New REPL Commands

.prompt                  Create a temporary role using a prompt
.set max_output_tokens
> .prompt you are a js console

%%> Date.now()
1658333431437

.set max_output_tokens 4096

New CLI Options

--serve [<ADDRESS>]    Serve the LLM API and WebAPP
--prompt <PROMPT>      Use the system prompt

New Configuration Fields

# Set default top-p parameter
top_p: null
# Command that will be used to edit the current line buffer with ctrl+o
# if unset, fall back to $EDITOR and $VISUAL
buffer_editor: null

New Features

  • add completion scripts (#411)
  • shell commands support revision (#417)
  • add .prompt repl command (#420)
  • customize a model's max_output_tokens (#428)
  • builtin models can be overridden by the models config (#429)
  • serve all LLMs as an OpenAI-compatible API (#431)
  • support customizing the top_p parameter (#434)
  • run without a config file by setting AICHAT_CLIENT (#452)
  • add --prompt option (#454)
  • non-streaming responses return token usage (#458)
  • .model repl completions show max tokens and price (#462)
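
Two of these (#428, #429) are configuration-driven. A hedged sketch of overriding a builtin model, modeled on the models config shown in the v0.14.0 notes; field placement and values are illustrative only:

```yaml
models:
  - name: gpt-4               # builtin model to override
    max_input_tokens: 8192    # placeholder
    max_output_tokens: 4096   # placeholder
```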

v0.16.0

11 Apr 00:41
a3f63a5

New Models

  • openai:gpt-4-turbo
  • gemini:gemini-1.0-pro-latest (replaces gemini:gemini-pro)
  • gemini:gemini-1.0-pro-vision-latest (replaces gemini:gemini-pro-vision)
  • gemini:gemini-1.5-pro-latest
  • vertexai:gemini-1.5-pro-preview-0409
  • cohere:command-r
  • cohere:command-r-plus

New Config

ctrlc_exit: false                # Whether to exit REPL when Ctrl+C is pressed

New Features

  • use ctrl+enter to newline in REPL (#394)
  • support cohere (#397)
  • -f/--file takes one value and does not enter the REPL (#399)

Full Changelog: v0.15.0...v0.16.0

v0.15.0

07 Apr 14:12
78d6e1b

Breaking Changes

Rename client localai to openai-compatible (#373)

clients:
--  type: localai
++  type: openai-compatible
++  name: localai

Gemini/VertexAI clients add block_threshold configuration (#375)

block_threshold: BLOCK_ONLY_HIGH # Optional field

New Models

  • claude:claude-3-haiku-20240307
  • ernie:ernie-4.0-8k
  • ernie:ernie-3.5-8k
  • ernie:ernie-3.5-4k
  • ernie:ernie-speed-8k
  • ernie:ernie-speed-128k
  • ernie:ernie-lite-8k
  • ernie:ernie-tiny-8k
  • moonshot:moonshot-v1-8k
  • moonshot:moonshot-v1-32k
  • moonshot:moonshot-v1-128k

New Config

save_session: null              # Whether to save the session; if null, you are asked

CLI Changes

New REPL Commands

.save session [name]                  
.set save_session <null|true|false>   
.role <name> <text...>          # Works in session

New CLI Options

--save-session                  Whether to save the session

Fix Bugs

  • erratic behavior when using a temporary role in a session (#347)
  • colors on non-truecolor terminals (#363)
  • session not marked dirty when updating properties (#379)
  • incorrect rendering of text containing tabs (#384)

Full Changelog: v0.14.0...v0.15.0

v0.14.0

07 Mar 15:34
c3677e3

Breaking Changes

Compress sessions automatically (#333)

When the total number of tokens in the session messages exceeds compress_threshold, aichat will automatically compress the session.

This means you can chat forever in the session.

The default compress_threshold is 2000; set it to zero to disable automatic compression.
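
In config terms (the value shown is the default described here):

```yaml
compress_threshold: 2000   # compress once session tokens exceed this
# compress_threshold: 0    # disables automatic compression
```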

Rename max_tokens to max_input_tokens (#339)

To avoid misunderstanding: max_input_tokens is also referred to as the context_window.

    models:
      - name: mistral
--      max_tokens: 8192
++      max_input_tokens: 8192

New Models

  • claude

    • claude:claude-3-opus-20240229
    • claude:claude-3-sonnet-20240229
    • claude:claude-2.1
    • claude:claude-2.0
    • claude:claude-instant-1.2
  • mistral

    • mistral:mistral-small-latest
    • mistral:mistral-medium-latest
    • mistral:mistral-large-latest
    • mistral:open-mistral-7b
    • mistral:open-mixtral-8x7b
  • ernie

    • ernie:ernie-3.5-4k-0205
    • ernie:ernie-3.5-8k-0205
    • ernie:ernie-speed

Command Changes

  • -c/--code generate code only (#327)

Chat-REPL Changes

  • .clear messages to clear session messages (#332)

Miscellaneous

  • shell integrations (#323)
  • allow overriding execute/code role (#331)

Full Changelog: v0.13.0...v0.14.0

v0.13.0

25 Feb 12:34

What's Changed

  • fix: copy on linux wayland by @sigoden in #288
  • fix: deprecation warning of .read command by @Nicoretti in #296
  • feat: supports model capabilities by @sigoden in #297
  • feat: add openai.api_base config by @sigoden in #302
  • feat: add extra_fields to models of localai/ollama clients by @kelvie in #298
  • fix: do not attempt to deserialize zero byte chunks in ollama stream by @JosephGoulden in #303
  • feat: update openai/qianwen/gemini models by @sigoden in #306
  • feat: support vertexai by @sigoden in #308
  • refactor: update vertexai/gemini/ernie clients by @sigoden in #309
  • feat: edit current prompt on $VISUAL/$EDITOR by @sigoden in #314
  • refactor: change header of messages saved to markdown by @sigoden in #317
  • feat: support -e/--execute to execute shell command by @sigoden in #318
  • refactor: improve prompt error handling by @sigoden in #319
  • refactor: improve saving messages by @sigoden in #322


Full Changelog: v0.12.0...v0.13.0

v0.12.0

26 Dec 00:39

What's Changed

  • feat: change REPL indicators #263
  • fix: pipe failed on macos #264
  • fix: cannot read image with uppercase ext #270
  • feat: support gemini #273
  • feat: abandon PaLM2 #274
  • feat: support qianwen:qwen-vl-plus #275
  • feat: support ollama #276
  • feat: qianwen vision models support embedded images #277
  • refactor: remove path existence indicator from info #282
  • feat: custom REPL prompt #283

Full Changelog: v0.11.0...v0.12.0

v0.11.0

29 Nov 03:05

What's Changed

  • refactor: improve render #235
  • feat: add a spinner to indicate waiting for response #236
  • refactor: qianwen client use incremental_output #240
  • fix: the last reply tokens was not highlighted #243
  • refactor: ernie client system message #244
  • refactor: palm client system message #245
  • refactor: trim trailing spaces from the role prompt #246
  • feat: support vision #249
  • feat: state-aware completer #251
  • feat: add ernie:ernie-bot-8k qianwen:qwen-max #252
  • refactor: sort of some complete type #253
  • feat: allow shift-tab to select prev in completion menu #254

Full Changelog: v0.10.0...v0.11.0

v0.10.0

08 Nov 03:54

New features

Use ::: for multi-line editing, deprecate .edit

〉::: This
is
a
multi-line
message
:::

Temporarily use a role to send a message.

coder〉.role shell how to unzip a file
unzip file.zip

coder〉

As shown above, while in the coder role you temporarily switched to the shell role to send a message; afterward, the current role is still coder.

Set default role/session with config.prelude

For those who want aichat to enter a session after startup, you can set it as follows:

prelude: session:mysession

For those who want aichat to use a role after startup, you can set it as follows:

prelude: role:myrole

Use a model that is not in the --list-models

If OpenAI releases a new model in the future, it can be used without upgrading Aichat.

$ aichat --model openai:gpt-4-vision-preview
〉.model openai:gpt-4-vision-preview

Changelog

  • refactor: improve error message for PaLM client by @sigoden in #213
  • refactor: rename Model.llm_name to name by @sigoden in #216
  • refactor: use &GlobalConfig to avoid clone by @sigoden in #217
  • refactor: remove Model.client_index, match client by name by @sigoden in #218
  • feat: allow the use of an unlisted model by @sigoden in #219
  • fix: unable to build on android using termux by @sigoden in #222
  • feat: add config.prelude to allow setting default role/session by @sigoden in #224
  • feat: deprecate .edit, use """ instead by @sigoden in #225
  • refactor: improve repl completer by @sigoden in #226
  • feat: temporarily use a role to send a message by @sigoden in #227
  • refactor: output info contains auto_copy and light_theme by @sigoden in #230
  • fix: unexpected additional newline in REPL by @sigoden in #231
  • refactor: use ::: as multiline input indicator, deprecate """ by @sigoden in #232
  • feat: add openai:gpt-4-1106-preview by @sigoden in #233

Full Changelog: v0.9.0...v0.10.0

v0.9.0

06 Nov 07:48

Support multiple LLMs/Platforms

  • OpenAI: gpt-3.5/gpt-4
  • LocalAI: opensource models
  • Azure-OpenAI: user-deployed gpt-3.5/gpt-4
  • PaLM: chat-bison-001
  • Ernie: eb-instant/ernie-bot/ernie-bot-4
  • Qianwen: qwen-turbo/qwen-plus

Enhance session/conversation

New in command mode

      --list-sessions        List all available sessions
  -s, --session [<SESSION>]  Create or reuse a session

New in chat mode

.session                 Start a context-aware chat session
.info session            Show session info
.exit session            End the current session

Other features:

  • Able to start a conversation that incorporates the last question and answer.
  • Deprecate config.conversation_first, use aichat -s instead.
  • Ask whether to save the session on exit.

Show information

In command mode

aichat --info                     # Show system info
aichat --role shell --info        # Show role info
aichat --session temp  --info     # Show session info

In chat mode

.info                    Print system info
.info role               Show role info
.info session            Show session info

Support textwrap

Configuration:

wrap: no                         # Specify the text-wrapping mode (no*, auto, <max-width>)
wrap_code: false                 # Whether wrap code block

Command:

aichat -w 120          # set max width
aichat -w auto         # use term width
aichat -w no           # no wrap

New Configuration

light_theme: false               # If set true, use light theme
wrap: no                         # Specify the text-wrapping mode (no*, auto, <max-width>)
wrap_code: false                 # Whether wrap code block
auto_copy: false                 # Automatically copy the last output to the clipboard
keybindings: emacs               # REPL keybindings, possible values: emacs (default), vi

Chat REPL changelog

  • Add .copy to copy the last output to the clipboard
  • Add .read to read the contents of a file and submit it
  • Add .edit for multi-line editing (CTRL+S to finish)
  • Add .info session to show session info
  • Add .info role to show role info
  • Rename .conversation to .session
  • Rename .clear conversation to .exit session
  • Rename .clear role to .exit role
  • Deprecate .clear
  • Deprecate .prompt
  • Deprecate .history and .clear history

Other changes

  • Support bracketed paste; you can paste multiple lines of text directly
  • Support customizing the theme
  • Replace AICHAT_API_KEY with OPENAI_API_KEY; also support OPENAI_API_BASE
  • Fix duplicate lines in kitty terminal
  • Deprecate prompt; both --prompt and .prompt are removed

v0.8.0

21 Mar 02:00

What's Changed

Full Changelog: v0.7.0...v0.8.0