# LLM Chat

This project is currently a CLI interface for OpenAI's [GPT model API](https://platform.openai.com/docs/guides/gpt). The long-term goal is to be much more general and to interface with any LLM, whether that is a cloud service or a model running locally.

## Usage

### Installation

The package is not currently published, so you will need to build it yourself. To do so you will need [Poetry](https://python-poetry.org/) v1.5 or later. Once you have that installed, run the following commands:

```bash
git clone https://code.harrison.sh/paul/llm-chat.git
cd llm-chat
poetry build
```

This will create a wheel file in the `dist` directory. You can then install it with `pip`:

```bash
pip install dist/llm_chat-0.6.1-py3-none-any.whl
```

Note: a better way to install this package system-wide is with [pipx](https://pypa.github.io/pipx/), which installs the package in an isolated environment and makes the `llm` command available on your path.

### Configuration

Your OpenAI API key must be set in the `OPENAI_API_KEY` environment variable. The following additional environment variables are supported:

| Variable             | Description                                                        |
| -------------------- | ------------------------------------------------------------------ |
| `OPENAI_MODEL`       | The model to use. Defaults to `gpt-3.5-turbo`.                     |
| `OPENAI_TEMPERATURE` | The temperature to use when generating text. Defaults to `0.7`.    |
| `OPENAI_HISTORY_DIR` | The directory to store chat history in. Defaults to `~/.llm_chat`. |
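The fallback behaviour documented above can be sketched with ordinary shell parameter expansion. This is just an illustration of the defaults, not the project's actual code:

```bash
# Clear any existing values so the documented defaults apply for this demo.
unset OPENAI_MODEL OPENAI_TEMPERATURE OPENAI_HISTORY_DIR

# Each setting falls back to its documented default when the variable is unset.
MODEL="${OPENAI_MODEL:-gpt-3.5-turbo}"
TEMPERATURE="${OPENAI_TEMPERATURE:-0.7}"
HISTORY_DIR="${OPENAI_HISTORY_DIR:-$HOME/.llm_chat}"

echo "model=$MODEL temperature=$TEMPERATURE history=$HISTORY_DIR"
```

Setting any of the variables before launching the CLI overrides the corresponding default.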

The settings can also be configured via command line arguments. Run `llm chat --help` for more information.

### Starting a chat


To start a chat session, run the following command:

```bash
llm chat
```

This will start a chat session with the default settings, which will look something like the image below:

![Chat session](./llm-chat.png)

The CLI accepts multi-line input; press `Esc + Enter` to send your input to the model. After each message a running total of the session's cost is displayed. To exit the chat session, send the message `/q`.
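The running total is simply the accumulated per-token price of each exchange. A hypothetical illustration of the arithmetic, using made-up token counts and placeholder per-1K-token rates (the real token counts come from the API response and the rates depend on the model's pricing):

```bash
# Made-up token counts for one exchange; the API reports the real values.
prompt_tokens=120
completion_tokens=80

# awk handles the floating-point arithmetic; both rates are per 1K tokens
# and are placeholders, not the model's real pricing.
awk -v p="$prompt_tokens" -v c="$completion_tokens" \
    'BEGIN { printf "Running total: $%.4f\n", (p * 0.0015 + c * 0.002) / 1000 }'
```

Each new message adds its own prompt and completion cost to the total, which is why the displayed figure only ever grows over the course of a session.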