Open Interpreter is a minimal, free, open-source project that brings the power of ChatGPT’s Code Interpreter to your local machine. It lets you chat with GPT-4 in your terminal to run Python code, edit files, control browsers, and more.
How it works:
Open Interpreter enables a conversational interface to Python by streaming responses from a function-calling GPT-4 model to your terminal. It equips the AI with Python’s exec() to run code, combining an interactive chatbot with your local environment’s capabilities. This lifts the restrictions on runtime, file sizes, and packages that OpenAI’s hosted Code Interpreter imposes.
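The core loop described above can be sketched in a few lines. This is a minimal illustration, not Open Interpreter's actual implementation: `fake_model` is a hypothetical stand-in for the function-calling GPT-4 endpoint, which in reality returns code for the host to execute.

```python
# Sketch of the pattern: the model emits code, the host runs it with
# exec(), and the captured output is appended to the conversation so
# the model can see the result of its own code.
import io
from contextlib import redirect_stdout

def fake_model(prompt):
    # Hypothetical stand-in for a function-calling LLM response.
    return "print(sum(range(10)))"

def run_code(code):
    # Execute model-written code and capture anything it prints.
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

history = [{"role": "user", "content": "What is the sum of 0..9?"}]
code = fake_model(history[-1]["content"])
output = run_code(code)
history.append({"role": "computer", "content": output})
print(output.strip())  # → 45
```

The real project adds streaming, a confirmation prompt before each execution, and a persistent session, but the exec-and-feed-back loop is the heart of it.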
Open Interpreter vs. Code Interpreter
OpenAI’s GPT-4 with Code Interpreter was a game-changer. But it came with limitations: no internet access, a limited set of packages, and restrictions on runtime and file size. Enter Open Interpreter. Running on a stateful Jupyter notebook in your local environment, it offers unrestricted internet access, no time or file-size constraints, and the ability to use any package. In essence, it marries the prowess of GPT-4’s Code Interpreter with the adaptability of your local setup.
Getting Started with Open Interpreter
1. Install Open Interpreter via pip:
pip install open-interpreter
2. In your terminal, set the OPENAI_API_KEY environment variable and start an interactive session:
interpreter
3. For Python users:
import interpreter

interpreter.api_key = "your_openai_api_key"
interpreter.chat()
4. To start a new chat, simply reset:
interpreter.reset()
5. Save and restore chats with ease:
messages = interpreter.chat("Hello, I'm Alex.", return_messages=True)
interpreter.reset()
interpreter.load(messages)
6. Extend Open Interpreter’s functionality by customizing its system message:
interpreter.system_message += """
Run shell commands with -y so they don't require user confirmation.
"""
print(interpreter.system_message)
7. Switch OpenAI models as per your needs. From the command line, use interpreter -f for gpt-3.5-turbo, or set it in Python:
interpreter.model = "gpt-3.5-turbo"
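The messages list returned in step 5 is plain Python data, so it can also be persisted to disk and reloaded in a later session. This is a sketch under the assumption that the messages are JSON-serializable dicts; the sample conversation and the chat.json filename are made up for illustration.

```python
# Persist a chat to disk and restore it later, assuming the messages
# returned by interpreter.chat(..., return_messages=True) are plain
# JSON-serializable dicts.
import json
from pathlib import Path

def save_chat(messages, path):
    # Write the conversation out as pretty-printed JSON.
    Path(path).write_text(json.dumps(messages, indent=2))

def load_chat(path):
    # Read the conversation back; the result can be handed to
    # interpreter.load(...).
    return json.loads(Path(path).read_text())

messages = [
    {"role": "user", "content": "Hello, I'm Alex."},
    {"role": "assistant", "content": "Hi Alex! How can I help?"},
]
save_chat(messages, "chat.json")
restored = load_chat("chat.json")
print(restored[0]["content"])  # → Hello, I'm Alex.
```

In a real session you would pass restored to interpreter.load(restored) to continue where you left off.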