SuperDuck.AI CLI Documentation

A cross-platform command-line interface for interacting with AI models from SuperDuck.AI.

Overview

The SuperDuck.AI CLI provides a seamless way to interact with codycode.ai models from the command line. It allows you to pipe content between different AI models, making it easy to create complex workflows and process data.

Key features include:

- Query named AI personas (e.g. cody, emmy) directly from the shell
- Pipe files and command output into a persona, and chain personas together with Unix pipes
- Create new personas with nothing more than a symbolic link and two environment variables
- Use SuperDuck inside vim by filtering a buffer through the CLI (e.g. :%!cody 'explain this code')

Installation

Requirements

The only external dependency is libcurl, which ships with most systems; to compile you will also need its development headers.
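Before building, you can check for the development files. The sketch below assumes curl-config, which most libcurl development packages install; the package names in the message are common examples, not part of this project:

```shell
# Check for libcurl development files before running make.
if command -v curl-config >/dev/null 2>&1; then
  status="libcurl dev files found (version $(curl-config --version))"
else
  status="libcurl dev files not found: try libcurl4-openssl-dev (Debian/Ubuntu) or 'brew install curl' (macOS)"
fi
echo "$status"
```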

Installation Steps

1. Clone the repository


curl https://superduck.ai/superduck-cli.git --output sdcli.bundle
git clone sdcli.bundle codycli
cd codycli

2. Compile the program

make

3. Set up environment variables

chmod +x setup-env.sh
./setup-env.sh

4. Try it out

source ./codycodes_env.sh  # If you didn't add to .bashrc/.zshrc
./cody 'Write a hello world program in Python'

5. Optional: System-wide installation

sudo make install

Note: If you're having issues with the curl library on macOS, you might need to specify the path:

export CFLAGS="-I/opt/homebrew/include"
export LDFLAGS="-L/opt/homebrew/lib"
make

Configuration

The CLI uses environment variables for configuration. There are three types of environment variables you need to set:

Variable Type     Example                Purpose
System Prompts    cody, emmy             Defines the system instruction for each AI persona
Model Names       codybrain, emmybrain   Specifies which AI model to use for each persona
API Key           CODYCODES_API_KEY      Your API key for authentication (REQUIRED)

Example Configuration

# System prompts - lowercase names to match command names
export cody="You are Cody, a helpful coding assistant."
export emmy="You are Emmy, specialized in creating clear explanations and presentations."

# Models
export codybrain="llama3.1-8b"
export emmybrain="gpt4o"

# API key (REQUIRED)
export CODYCODES_API_KEY="your-api-key-here"
Important: Variable names must be lowercase and match the command names exactly.

Basic Usage

The basic syntax for using the SuperDuck.AI CLI is:

[command] 'your prompt'

Where [command] is the name of an AI persona (e.g., cody or emmy).

Example: Basic Query

cody 'Write a hello world program in Python'

This will ask Cody to generate a Python hello world program.

Advanced Usage

Piping Content

One of the most powerful features of the SuperDuck.AI CLI is the ability to pipe content between different commands.

Example: Piping a File to Cody

cat complex_code.py | cody 'explain this code'

This will send the contents of complex_code.py to Cody and ask for an explanation.

Example: Chaining AI Personas

cat data.csv | cody 'analyze this data' | emmy 'create a latex presentation summary'

This sends a CSV file to Cody for analysis, then pipes Cody's output to Emmy to create a LaTeX presentation.
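The one-liner above can also be run stage by stage, keeping the intermediate analysis on disk. The sketch below substitutes stub shell functions for the real cody and emmy commands so the pipeline shape can be tried without an API key; it assumes the interface shown throughout this document (prompt as the first argument, content on stdin, result on stdout):

```shell
# Stub personas standing in for the real binaries; the real CLI would
# call the configured models instead.
cody() { printf 'ANALYSIS[%s]: %s\n' "$1" "$(cat)"; }
emmy() { printf 'SLIDES[%s]: %s\n' "$1" "$(cat)"; }

# Sample input
printf 'product,sales\nwidgets,100\n' > data.csv

# Stage 1: analysis, saved so it can be inspected or reused later
cody 'analyze this data' < data.csv > analysis.txt

# Stage 2: presentation built from the saved analysis
emmy 'create a latex presentation summary' < analysis.txt > slides.txt

cat slides.txt
```

Running the stages separately trades the brevity of a single pipe for inspectable intermediate files.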

Creating New AI Personas

You can easily create new AI personas by creating symbolic links to the main executable:

Example: Creating a New Persona

# Create symbolic link
ln -sf codycli newpersona

# Set environment variables
export newpersona="You are NewPersona, specialized in technical documentation."
export newpersonabrain="gpt4o"

Now you can use newpersona just like cody or emmy.
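Since a persona is just a symlink plus two environment variables, several can be created in one pass. The persona names below are hypothetical examples:

```shell
# Create a batch of personas pointing at the same executable.
# (ln -sf succeeds even before codycli is built; the link resolves later.)
for p in reviewer translator docwriter; do
  ln -sf codycli "$p"
  # Each persona still needs its own prompt and model variables, e.g.:
  # export reviewer="You are Reviewer, focused on code review."
  # export reviewerbrain="llama3.1-8b"
done
ls -l reviewer translator docwriter
```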

Common Use Cases

Code Tasks

Generate new code, explain unfamiliar code, and refactor legacy code by piping files into cody (see the Examples Gallery below).

Content Creation

Draft presentations, follow-up emails, and translations with emmy or cody from piped-in source material.

Multi-step Workflows

Chain personas with Unix pipes so one model's output becomes the next model's input, optionally saving intermediate results with tee.

Troubleshooting

Common Issues

Issue: Environment variable not set
Solution: Make sure all required environment variables are set, then check their values:

source ./codycodes_env.sh
echo $cody
echo $codybrain
echo $CODYCODES_API_KEY

Issue: Compilation errors
Solution: Ensure you have libcurl installed:

# Ubuntu/Debian
sudo apt-get install libcurl4-openssl-dev

# macOS
brew install curl

Issue: API errors
Solution: Check that your API key is valid and properly set:

export CODYCODES_API_KEY="your-api-key-here"
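The variable checks above can be bundled into a small pre-flight script. This is a hypothetical helper, not part of the CLI; the exported values are placeholders so the sketch is self-contained:

```shell
# Pre-flight check: fail fast if any required variable is missing.
# Placeholder configuration (replace with your real values):
export cody="You are Cody, a helpful coding assistant."
export codybrain="llama3.1-8b"
export CODYCODES_API_KEY="your-api-key-here"

missing=0
for var in cody codybrain CODYCODES_API_KEY; do
  # Indirect expansion: look up the variable whose name is in $var
  eval "val=\${$var:-}"
  if [ -z "$val" ]; then
    echo "Missing required variable: $var" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "All required variables are set."
```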

Debugging

To see more information about your configuration, run:

make debug

Examples Gallery

Generate and Explain Code

cody 'Write a Python script that downloads files from multiple URLs in parallel' > downloader.py
cat downloader.py | cody 'Explain how this code works, step by step'

Data Analysis Workflow

cat sales_data.csv | cody 'Analyze this sales data and identify the top 5 performing products' | emmy 'Create a presentation for the marketing team based on this analysis'

Email Assistant

cat meeting_notes.txt | emmy 'Create a follow-up email summarizing the meeting and action items' > email.txt

Language Translation

cat document.txt | cody 'Translate this text to Spanish' > document_es.txt

Code Refactoring

cat legacy_code.js | cody 'Refactor this code to use modern ES6+ syntax and improve performance' > modernized_code.js

Using tee for Branching Workflows

cat data.json | cody 'Parse this JSON and explain the key data points' | tee analysis.txt | emmy 'Create a non-technical summary of this analysis'

This uses tee to both save Cody's analysis to a file and pipe it to Emmy for further processing. This is useful when you want to preserve intermediate results in a pipeline.

AI Collaboration with While Loop

#!/bin/bash
# iterative_refinement.sh

# Initial prompt
echo "Create a basic algorithm for sorting a list of numbers" > current.txt

# Iterative refinement loop with 3 AI personas
for i in {1..5}; do
  echo "Round $i of refinement:"
  
  # Cody improves the algorithm
  cat current.txt | cody "Improve this algorithm focusing on efficiency" > improved.txt
  
  # Emmy explains the improvements
  cat improved.txt | emmy "Explain what improvements were made and why" > explanation.txt
  
  # Third AI persona critiques and suggests next steps
  cat improved.txt explanation.txt | newpersona "Critique this algorithm and suggest one specific improvement for the next iteration" > feedback.txt
  
  # Output results of this round
  echo "----- IMPROVED ALGORITHM -----"
  cat improved.txt
  echo "----- EXPLANATION -----"
  cat explanation.txt
  echo "----- FEEDBACK FOR NEXT ROUND -----"
  cat feedback.txt
  echo "------------------------------"
  
  # Prepare for next iteration: carry the algorithm forward along with the critique,
  # so Cody improves the algorithm rather than the feedback text
  cat improved.txt feedback.txt > current.txt
  
  # Optional: Add a delay to avoid API rate limits
  sleep 2
done

This bash script demonstrates an iterative collaboration between three AI personas, working together to refine an algorithm over multiple rounds. Each AI plays a different role in the process.

Web Browsing with curl and AI Interpretation

#!/bin/bash
# web_research.sh

# Define the search topic
TOPIC="quantum computing recent developments"

# Use curl to get search results (using DuckDuckGo for example)
echo "Searching for information on: $TOPIC"
curl -s -A "Mozilla/5.0" "https://html.duckduckgo.com/html/?q=${TOPIC// /+}" | \
  cody "Extract the top 5 relevant article URLs from these search results, one URL per line" > search_results.txt

# Process each result
cat search_results.txt | while read -r url; do
  # Skip empty lines and anything that isn't a URL
  [ -z "$url" ] && continue
  case $url in http*) ;; *) continue ;; esac
  
  echo "Analyzing: $url"
  
  # Get the content of the webpage
  curl -s -A "Mozilla/5.0" "$url" | \
    cody "Extract the main content from this HTML, ignoring navigation, ads, etc." | \
    emmy "Summarize this article about quantum computing in 3-4 paragraphs, highlighting key innovations" >> research_summary.txt
  
  echo "---" >> research_summary.txt
  
  # Avoid hitting rate limits
  sleep 3
done

# Create a final synthesized report
cat research_summary.txt | newpersona "Create a comprehensive research report synthesizing these article summaries about $TOPIC. Include an introduction, key findings, trends, and conclusion." > final_report.md

echo "Web research complete! Report saved to final_report.md"

This script demonstrates how to use curl to fetch web content and use AI personas to process and interpret the information. It creates a research workflow that searches for a topic, extracts and summarizes relevant articles, and synthesizes a final report.