Build Your Own AI Assistant in the CLI – Kel

An open-source AI assistant for developers. Integrate GPT and other models into your CLI workflows.

Kel is an open-source, customizable AI assistant that runs directly in your command-line interface (CLI).

It lets developers build their own AI-powered assistant on top of leading generative models such as GPT, Claude, and Llama 2. After configuring API keys, you can converse with Kel by typing messages in your terminal. Depending on the capabilities of the selected model, it can provide helpful information, summaries, translations, and more.

How to use it:

1. Clone the Kel repo, then install it and its dependencies with pip:

cd kel
pip install .
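If you would rather not install into your system Python, a virtual environment works as well (the .venv directory name below is just a convention, not something Kel requires):

```shell
# Create and activate an isolated environment for Kel
python3 -m venv .venv
. .venv/bin/activate

# Then, from the repo root: pip install .
echo "Using virtual environment at: $VIRTUAL_ENV"
```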

2. Edit the config.toml file to configure Kel according to your preferences.

[companies]
supported_companies = ["OpenAI", "Anthropic", "Ollama", "Google"]
[general]
protocol = "https"
default_language = "english"
default_company_name = "OpenAI"
copy_to_clipboard = true
display_llm_company_model_name = true
[style]
# Supported colors: https://rich.readthedocs.io/en/stable/appendix/colors.html#appendix-colors
response_color = "green"
warning_color = "yellow"
error_color = "red"
info_color = "cyan"
[stats]
display_cost = true
display_tokens = true
display_response_time = true
[openai]
default_openai_model_name = "gpt-3.5-turbo-1106"
default_openai_org_name = ""
default_openai_endpoint = "api.openai.com"
default_openai_uri = "/v1/chat/completions"
default_openai_max_tokens = 100
default_openai_temperature = 0.9
default_openai_prompt = """
You are an expert in the field of software engineering and command line tools.
If I ask for the commands, please give me only the commands which is enclosed in
quotes.
Do not return anything other than the command. Do not wrap responses in quotes.
"""
[openai_assistant]
enable_openai_assistant = false
openai_assistant_model_name = "gpt-4-1106-preview"
openai_assistant_instructions = """
Analyse the file and answer questions about performance stats. Give me the stats in a table format unless I ask it in a different way.
If you have trouble in understanding the file format, consider it as a CSV file unless I specify the file format.
Keep the answers short and simple unless I ask for a detailed explanation.
"""
openai_assistant_prompt = """
Do not give detailed explanation. If you find any bottleneck, please share that as well.
"""
openai_assistant_choice_1 = "Give me min, max, 95 percentile, 99 percentile elapsed or response time in a table format."
openai_assistant_choice_2 = "Analyze the data and give me the bottleneck in a simple sentence."
openai_assistant_choice_3 = "Give me the passed and failed transactions in a table format."
openai_assistant_choice_4 = "Give me the HTTP response code split by transaction in a table format."
openai_delete_assistant_at_exit = true

[anthropic]
# chat mode is not available for Anthropic yet
anthropic_enable_chat = false
default_anthropic_model_name = "claude-2.1"
default_anthropic_max_tokens = 100
default_anthropic_streaming_response = true
default_anthropic_prompt = """
You are an expert in the field of software engineering and command line tools.
If I ask for the commands, please give me only the commands which is enclosed in
quotes.
Give me the command in one sentence, unless I ask you to give it in a different way.
Do not return anything other than the command. Do not wrap responses in quotes.
"""
[ollama]
ollama_endpoint = "localhost:11434"
# chat mode is not available for Ollama yet
ollama_enable_chat = false
default_ollama_model_name = "llama2"
default_ollama_max_tokens = 100
default_ollama_streaming_response = true
default_ollama_prompt = """
You are an expert in the field of software engineering and command line tools.
If I ask for the commands, please give me only the commands which is enclosed in
quotes.
Give me the command in one sentence, unless I ask you to give it in a different way.
Do not return anything other than the command. Do not wrap responses in quotes.
"""
[google]
default_google_model_name = "models/gemini-pro"
default_google_streaming_response = true
default_google_prompt = """
You are an expert in the field of software engineering and command line tools.
If I ask for the commands, please give me only the commands which is enclosed in
quotes.
Politely decline to answer for other questions.
Give me the command in one sentence, unless I ask you to give it in a different way.
Do not return anything other than the command. Do not wrap responses in quotes.
"""
enable_prompt_feedback = false
view_all_response_candidates = true
[google.safety_settings]
# enter the setting in the format of `category=threshold` e.g. hate_speech=3
# acceptable threshold values are [1, 2, 3, 4]
hate_speech=1
harassment=1
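For reference, a pared-down config for a local-only Ollama setup might look like the following. This sketch assumes Kel falls back to its built-in defaults for any omitted sections, which I have not verified; the key names are taken from the full file above:

```toml
[companies]
supported_companies = ["Ollama"]

[general]
protocol = "http"                 # local Ollama server, no TLS
default_language = "english"
default_company_name = "Ollama"
copy_to_clipboard = true
display_llm_company_model_name = true

[ollama]
ollama_endpoint = "localhost:11434"
ollama_enable_chat = false
default_ollama_model_name = "llama2"
default_ollama_max_tokens = 100
default_ollama_streaming_response = true
```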

3. Set up your LLM’s API keys in your operating system’s environment variables:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
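To make the keys survive new terminal sessions, append the exports to your shell profile and confirm they are visible (bash shown; zsh users would use ~/.zshrc instead):

```shell
# Persist the keys in the bash profile (replace the placeholders with real keys)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.bashrc

# Reload the profile in the current shell and sanity-check the variables
. ~/.bashrc
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
```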

4. Invoke Kel from your CLI using the following syntax:

kel [-h] [-s SHOW] [-p PROMPT] [-m MODEL] [-t TEMPERATURE] [-mt MAX_TOKENS] [-c COMPANY] [-v] question
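The flags map onto the config keys above. A few illustrative invocations (the questions and flag values here are examples, not taken from the Kel docs):

```shell
# One-off question using the defaults from config.toml
kel "How do I list files modified in the last 24 hours?"

# Pick the provider and model explicitly, with a lower temperature
kel -c OpenAI -m gpt-3.5-turbo-1106 -t 0.2 "Command to show listening TCP ports"

# Raise the token cap for a longer answer
kel -mt 300 "Explain what git rebase does"
```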
