JavaScript quickstart

Capture AI API calls from any Node.js, Next.js, or JavaScript application using the built-in MITM proxy—no code changes required.

1. Download and launch the desktop app

Download the LupisLabs desktop app for macOS or Windows.

The MITM proxy starts automatically on port 9010 when you launch the app.

2. Open a proxy-configured terminal

In the LupisLabs desktop app, click "Open Terminal" (or use the keyboard shortcut). This opens a new terminal window with:

  • ✅ Proxy environment variables automatically set (HTTP_PROXY, HTTPS_PROXY)
  • ✅ SSL certificates automatically trusted (no manual installation needed)
  • ✅ Works with Node.js, Python, curl, and other HTTP clients

You'll see a confirmation message like this:

================================================
Lupis Labs - Proxy Environment Configured
================================================

HTTP_PROXY: http://127.0.0.1:9010
HTTPS_PROXY: http://127.0.0.1:9010
CA Certificate: /path/to/ca.pem
NODE_EXTRA_CA_CERTS: Set (Node.js)
REQUESTS_CA_BUNDLE: Set (Python)
SSL_CERT_FILE: Set (Python/curl)

All requests (Node.js, Python) will use the proxy.

================================================
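If you need the same configuration in a shell the app didn't open (for example, a remote session or a CI job), you can export the variables from the message above yourself. A minimal sketch, assuming the default port 9010; the actual ca.pem path is the one shown by the desktop app:

```shell
# Mirror the LupisLabs proxy environment manually (port 9010 is the default)
export HTTP_PROXY=http://127.0.0.1:9010
export HTTPS_PROXY=http://127.0.0.1:9010
# Point Node.js at the proxy's CA certificate (use the path the app reports)
export NODE_EXTRA_CA_CERTS=/path/to/ca.pem
```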

3. Run your JavaScript/Node.js application

In the proxy-configured terminal, run your application normally—no code changes needed:

# Run your Node.js script
node app.js

# Or start your Next.js dev server
npm run dev

# Or start any other server
npm start
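Before launching, you can double-check that Node.js actually sees the proxy settings. A small sanity-check script (the filename is illustrative):

```javascript
// check-proxy.mjs — print the proxy variables as Node.js sees them.
// In the proxy-configured terminal, each should show a value, not "(not set)".
for (const name of ['HTTP_PROXY', 'HTTPS_PROXY', 'NODE_EXTRA_CA_CERTS']) {
  console.log(`${name}: ${process.env[name] ?? '(not set)'}`);
}
```

Run it with `node check-proxy.mjs` inside the proxy-configured terminal.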

Any AI SDK calls (OpenAI, Anthropic, etc.) will be automatically captured:

// Your existing code works as-is
import OpenAI from 'openai';

const client = new OpenAI();

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});

4. View captured requests in the app

Switch to the Proxy Monitor tab in the LupisLabs desktop app. You'll see:

  • 📋 Complete request and response payloads
  • 💰 Token usage and cost analytics
  • ⏱️ Response times and status codes
  • 💾 Options to save requests for later replay