The Silver Bullet Data API for AI-Native Applications

Quipubase

A powerful, flexible, real-time data API designed from the ground up for AI-native applications. Built on the high-performance RocksDB storage engine, it unifies dynamic data modeling, real-time collaboration, vector search, and generative AI in a single backend.

Real-time: Pub/Sub architecture
Vector search: semantic similarity
Schema-driven: JSON validation

Everything You Need for AI Applications

Quipubase provides a comprehensive suite of features designed specifically for modern AI-native applications, from real-time data processing to advanced AI capabilities.

Core Engine
Schema-Driven Collections
Model your data dynamically using JSON Schema with collections and objects. Supports flexible document structures while enforcing strong typing — perfect for AI pipelines.
Dynamic schemas
Document-level validation
Granular querying
Live Updates
Real-Time Document Subscriptions
Integrated Pub/Sub engine supports real-time subscriptions to individual documents or collection-level events — ideal for live dashboards, chat apps, and reactive AI systems.
Auto-sync across clients
Reactive AI systems
Built-in WebSocket support
AI-Powered
Native Vector Store
Store and query vector embeddings with ease. Quipubase turns text into embeddings and indexes them for semantic search through the built-in vector resource.
Approximate nearest neighbor search
Common embedding formats
Seamless data pipeline integration
Analytics
Live Relational Engine
Query your data using SQL. Dynamically generates relational schemas from Excel files, NoSQL databases, and nested JSON documents — no ETL required.
Excel/CSV support
NoSQL integration
Real-time analytics
AI Integration
Generative AI Built-In
Extends the OpenAI API surface with chat completions, audio transcription, and image generation. Use it as your AI-native backend with integrated inference.
Chat completions
Audio transcription
Image generation
File Processing
Intelligent Blob Storage
A storage engine that ingests, transforms, and searches blobs, with automatic chunking for AI applications.
Automatic chunking
File transformation
Searchable content

Perfect for Modern Use Cases

Built for developers creating AI-native tools that need tight coupling between data and inference.

Multimodal AI agents
Realtime collaboration tools
Chatbots & voice interfaces
Vector-based recommendation systems
Live dashboards & BI tools
Data ingestion + SQL querying

Available SDKs & Resources

Get Started in Minutes

Install the Quipubase SDK and start building AI-native applications with just a few lines of code.

1. Install the SDK
npm install quipubase
2. Initialize the Client
import { Quipubase } from 'quipubase'

const client = new Quipubase({
  baseURL: 'https://quipubase.oscarbahamonde.com/v1',
  apiKey: 'your-api-key'
})
3. Create Your First Collection
// Create a schema-driven collection
const collection = await client.collections.create({
  json_schema: {
    title: 'User',
    type: 'object',
    properties: {
      name: { type: 'string' },
      email: { type: 'string', format: 'email' },
      age: { type: 'integer', minimum: 0 }
    },
    required: ['name', 'email']
  }
})

console.log('Collection created:', collection.id)
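4. Insert Your First Document
Writes go through the real-time objects API documented below; this minimal sketch reuses the collection created in step 3.
// Insert a document that conforms to the collection's schema
const created = await client.objects.pub({
  collection_id: collection.id,
  event: 'create',
  data: {
    name: 'Ada Lovelace',
    email: 'ada@example.com',
    age: 36
  }
})

console.log('Document created:', created.data._id)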

Complete API Reference

Comprehensive documentation for all Quipubase endpoints, with detailed TypeScript examples throughout (a Python SDK is also available). Everything you need to build powerful AI-native applications.

Models

List and retrieve available AI models for chat, embeddings, and other operations

GET
/v1/models
List Models
Retrieve a list of all available models
// List all available models
const models = await client.models.list()

models.data.forEach(model => {
  console.log(`Model: ${model.id}`)
  console.log(`Context Window: ${model.context_window}`)
  console.log(`Max Tokens: ${model.max_completion_tokens}`)
  console.log(`Type: ${model.type}`)
  console.log(`Created: ${new Date(model.created * 1000).toISOString()}`)
  console.log('---')
})
GET
/v1/models/{model}
Retrieve Model
Get details about a specific model
// Get specific model details
const model = await client.models.retrieve('gemini-2.5-flash')

console.log(`Model ID: ${model.id}`)
console.log(`Owner: ${model.owned_by}`)
console.log(`Active: ${model.active}`)
console.log(`Type: ${model.type}`)
console.log(`Context Window: ${model.context_window}`)
console.log(`Max Completion Tokens: ${model.max_completion_tokens}`)

Chat Completions

Generate conversational AI responses with support for streaming and function calling

POST
/v1/chat/completions
Create Chat Completion
Generate AI responses for conversational interfaces
// Basic chat completion
const completion = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [
    {
      role: 'system',
      content: 'You are a helpful AI assistant specialized in data analysis.'
    },
    {
      role: 'user',
      content: 'Explain the benefits of vector databases for AI applications.'
    }
  ],
  max_tokens: 1000,
  temperature: 0.7
})

console.log(`Response: ${completion.choices[0].message.content}`)
console.log(`Finish Reason: ${completion.choices[0].finish_reason}`)
console.log(`Usage: ${JSON.stringify(completion.usage)}`)
POST
/v1/chat/completions
Streaming Chat Completion
Stream AI responses in real-time for better user experience
// Streaming chat completion
const stream = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [
    {
      role: 'user',
      content: 'Write a short story about AI and databases.'
    }
  ],
  stream: true,
  max_tokens: 500
})

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content
  if (content) {
    process.stdout.write(content)
  }
}
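POST
/v1/chat/completions
Function Calling
The endpoint advertises function calling; this sketch follows the OpenAI-style tools schema that the API extends, so the exact payload shape is an assumption rather than a confirmed contract.
// Function calling with OpenAI-style tool definitions (shape assumed
// from the OpenAI API surface Quipubase extends)
const toolCompletion = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [
    { role: 'user', content: 'What is the weather in Lima right now?' }
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: {
            city: { type: 'string' }
          },
          required: ['city']
        }
      }
    }
  ]
})

const toolCall = toolCompletion.choices[0].message.tool_calls?.[0]
if (toolCall) {
  console.log(`Tool requested: ${toolCall.function.name}`)
  console.log(`Arguments: ${toolCall.function.arguments}`)
}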

Collections

Schema-driven document stores with JSON Schema validation and real-time capabilities

POST
/v1/collections
Create Collection
Create a new collection with JSON Schema validation
// Create a user collection with schema validation
const collection = await client.collections.create({
  json_schema: {
    title: 'Users',
    type: 'object',
    properties: {
      name: { 
        type: 'string',
        minLength: 1,
        maxLength: 100
      },
      email: { 
        type: 'string', 
        format: 'email' 
      },
      age: { 
        type: 'integer', 
        minimum: 0,
        maximum: 150
      }
    },
    required: ['name', 'email']
  }
})

console.log(`Collection created: ${collection.id}`)
console.log(`Schema SHA: ${collection.sha}`)
GET
/v1/collections
List Collections
Retrieve all collections in your account
// List all collections
const collections = await client.collections.list()

console.log(`Total collections: ${collections.length}`)

collections.forEach((collection, index) => {
  console.log(`Collection ${index + 1}:`)
  console.log(`  ID: ${collection.id}`)
  console.log(`  Title: ${collection.json_schema.title}`)
  console.log(`  Created: ${collection.created_at}`)
  console.log('---')
})
GET
/v1/collections/{collection_id}
Get Collection
Retrieve a specific collection by ID
// Get collection details
const collection = await client.collections.retrieve('col_users123')

console.log(`Collection ID: ${collection.id}`)
console.log(`Title: ${collection.json_schema.title}`)
console.log(`Schema: ${JSON.stringify(collection.json_schema, null, 2)}`)
console.log(`Created: ${collection.created_at}`)
console.log(`Updated: ${collection.updated_at}`)
DELETE
/v1/collections/{collection_id}
Delete Collection
Delete a collection and all its documents
// Delete a collection
const result = await client.collections.delete('col_users123')

console.log(`Collection deleted: ${result.deleted}`)
console.log(`Documents removed: ${result.documents_count}`)

Objects (Real-time)

Real-time document manipulation with pub/sub architecture for live updates

POST
/v1/collections/objects/{collection_id}
Create Document
Create a new document with real-time event publishing
// Create a new document
const createResponse = await client.objects.pub({
  collection_id: 'col_users123',
  event: 'create',
  data: {
    name: 'Alice Johnson',
    email: 'alice@example.com',
    age: 28,
    preferences: {
      theme: 'dark',
      notifications: true
    }
  }
})

console.log('Document created:')
console.log(`  ID: ${createResponse.data._id}`)
console.log(`  Data: ${JSON.stringify(createResponse.data)}`)
PUT
/v1/collections/objects/{collection_id}
Update Document
Update an existing document with real-time notifications
// Update a document
const updateResponse = await client.objects.pub({
  collection_id: 'col_users123',
  event: 'update',
  filter: { _id: 'doc_abc123' },
  data: {
    age: 29,
    preferences: {
      theme: 'light',
      notifications: false
    }
  }
})

console.log('Document updated:')
console.log(`  Modified: ${updateResponse.modified_count}`)
console.log(`  Data: ${JSON.stringify(updateResponse.data)}`)
DELETE
/v1/collections/objects/{collection_id}
Delete Document
Delete documents with real-time event broadcasting
// Delete a document
const deleteResponse = await client.objects.pub({
  collection_id: 'col_users123',
  event: 'delete',
  filter: { _id: 'doc_abc123' }
})

console.log('Document deleted:')
console.log(`  Deleted count: ${deleteResponse.deleted_count}`)
console.log(`  Success: ${deleteResponse.acknowledged}`)
GET
/v1/collections/objects/{collection_id}
Query Documents
Query documents with filtering and pagination
// Query documents with filters
const queryResponse = await client.objects.pub({
  collection_id: 'col_users123',
  event: 'query',
  filter: { 
    age: { $gte: 25 },
    'preferences.notifications': true
  },
  limit: 10,
  sort: { created_at: -1 }
})

console.log('Query results:')
console.log(`  Found: ${queryResponse.data.length} documents`)
queryResponse.data.forEach((doc, index) => {
  console.log(`  Document ${index + 1}: ${doc.name}`)
})
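The four operations above publish events; consuming them requires a subscription. The SDK's subscription surface is not shown in this reference, so the sketch below assumes a hypothetical objects.sub method that yields collection events over the built-in WebSocket channel (both the method name and the event shape are assumptions).
// Hypothetical subscription sketch: `objects.sub` and the event payload
// shape are assumptions, not confirmed SDK API
const events = await client.objects.sub({
  collection_id: 'col_users123'
})

for await (const event of events) {
  // Expected event types: 'create' | 'update' | 'delete'
  console.log(`Received ${event.event}:`, JSON.stringify(event.data))
}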

Vector Store

Native vector embeddings with semantic search and similarity matching

POST
/v1/vector/{namespace}
Upsert Embeddings
Store text as vector embeddings for semantic search
// Upsert text embeddings
const response = await client.vector.upsert({
  namespace: 'knowledge-base',
  input: [
    'Quipubase is a powerful real-time database for AI applications',
    'Vector search enables semantic similarity matching across documents',
    'Real-time subscriptions power reactive AI systems'
  ],
  model: 'gemini-embedding-001'
})

console.log('Vector upsert completed:')
console.log(`  Stored: ${response.count} embeddings`)
console.log(`  Processing time: ${response.ellapsed}s`)
PUT
/v1/vector/{namespace}
Query Vectors
Semantic search using vector similarity
// Semantic search query
const results = await client.vector.query({
  namespace: 'knowledge-base',
  input: 'database performance optimization',
  top_k: 5,
  model: 'gemini-embedding-001'
})

console.log('Semantic search results:')
console.log(`  Found: ${results.count} matches`)

results.data.forEach((match, index) => {
  console.log(`  Result ${index + 1}:`)
  console.log(`    Score: ${match.score.toFixed(4)}`)
  console.log(`    Content: ${match.content}`)
})
DELETE
/v1/vector/{namespace}
Delete Vectors
Remove vectors from a namespace
// Delete vectors by IDs
const deleteResult = await client.vector.delete({
  namespace: 'knowledge-base',
  ids: ['vec_123', 'vec_456', 'vec_789']
})

console.log('Vector deletion completed:')
console.log(`  Deleted: ${deleteResult.deleted_count} vectors`)
console.log(`  Success: ${deleteResult.acknowledged}`)
GET
/v1/vector/{namespace}/stats
Vector Statistics
Get statistics about vectors in a namespace
// Get vector namespace statistics
const stats = await client.vector.stats('knowledge-base')

console.log('Vector namespace statistics:')
console.log(`  Total vectors: ${stats.total_vectors}`)
console.log(`  Dimensions: ${stats.dimensions}`)
console.log(`  Index size: ${stats.index_size_mb} MB`)
console.log(`  Last updated: ${stats.last_updated}`)
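Because the vector store and chat completions share one client, retrieval-augmented generation needs no glue infrastructure. A minimal sketch using only the calls documented in this reference:
// Retrieval-augmented generation: fetch semantically similar context,
// then ground the chat completion in it
const question = 'How do real-time subscriptions work?'

const hits = await client.vector.query({
  namespace: 'knowledge-base',
  input: question,
  top_k: 3,
  model: 'gemini-embedding-001'
})

const context = hits.data.map(match => match.content).join('\n')

const answer = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [
    { role: 'system', content: `Answer using only this context:\n${context}` },
    { role: 'user', content: question }
  ]
})

console.log(answer.choices[0].message.content)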

Live Query Engine

SQL queries over structured and unstructured data with real-time capabilities

POST
/v1/query/live
Create Dataset
Create a queryable dataset from files, MongoDB, or PostgreSQL
// Create dataset from CSV file
const csvDataset = await client.query.create({
  engine: 'file',
  uri: 'https://example.com/sales-data.csv',
  query: 'SELECT * FROM root WHERE revenue > 1000 ORDER BY date DESC',
  key: 'sales-analytics',
  namespace: 'business',
  bucket: 'data-warehouse'
})

console.log('CSV dataset created:')
console.log(`  Key: ${csvDataset.key}`)
console.log(`  Records: ${csvDataset.data.length}`)
console.log(`  Engine: ${csvDataset.engine}`)
PUT
/v1/query/live
Query Dataset
Execute SQL queries on existing datasets
// Query existing dataset
const results = await client.query.retrieve({
  key: 'sales-analytics',
  query: "SELECT DATE_TRUNC('month', date) AS month, SUM(revenue) AS total FROM root GROUP BY month",
  namespace: 'business',
  bucket: 'data-warehouse'
})

console.log('Query results:')
console.log(`Total records: ${results.data.length}`)

results.data.forEach(row => {
  console.log(`Month: ${row.month}, Revenue: $${row.total}`)
})
POST
/v1/query/live
MongoDB Dataset
Create dataset from MongoDB collection
// Create dataset from MongoDB
const mongoDataset = await client.query.create({
  engine: 'mongodb',
  uri: 'mongodb://localhost:27017/mydb',
  query: "SELECT * FROM users WHERE status = 'active' LIMIT 100",
  key: 'active-users',
  namespace: 'analytics',
  bucket: 'user-data'
})

console.log('MongoDB dataset created:')
console.log(`  Key: ${mongoDataset.key}`)
console.log(`  Records: ${mongoDataset.data.length}`)
console.log(`  Connection: ${mongoDataset.engine}`)
POST
/v1/query/live
PostgreSQL Dataset
Create dataset from PostgreSQL database
// Create dataset from PostgreSQL
const pgDataset = await client.query.create({
  engine: 'postgresql',
  uri: 'postgresql://user:pass@localhost:5432/mydb',
  query: "SELECT id, name, email, created_at FROM customers WHERE created_at > NOW() - INTERVAL '30 days'",
  key: 'recent-customers',
  namespace: 'crm',
  bucket: 'customer-data'
})

console.log('PostgreSQL dataset created:')
console.log(`  Key: ${pgDataset.key}`)
console.log(`  Records: ${pgDataset.data.length}`)
console.log(`  Engine: ${pgDataset.engine}`)
DELETE
/v1/query/live/{key}
Delete Dataset
Remove a dataset from the query engine
// Delete a dataset
const deleteResult = await client.query.delete({
  key: 'sales-analytics',
  namespace: 'business',
  bucket: 'data-warehouse'
})

console.log('Dataset deletion:')
console.log(`  Deleted: ${deleteResult.deleted}`)
console.log(`  Key: ${deleteResult.key}`)

Blob Storage

Intelligent file processing with automatic chunking and transformation capabilities

POST
/v1/blobs
Upload File
Upload and process files with automatic chunking
// Upload and process a file
const fileUpload = await client.blobs.upload({
  file: fileBuffer,
  filename: 'document.pdf',
  chunk_size: 1000,
  overlap: 200,
  metadata: {
    category: 'documentation',
    author: 'John Doe'
  }
})

console.log('File uploaded:')
console.log(`  ID: ${fileUpload.id}`)
console.log(`  Chunks: ${fileUpload.chunks_count}`)
console.log(`  Size: ${fileUpload.size_bytes} bytes`)
GET
/v1/blobs/{blob_id}
Get File Info
Retrieve file metadata and processing status
// Get file information
const fileInfo = await client.blobs.retrieve('blob_abc123')

console.log('File information:')
console.log(`  ID: ${fileInfo.id}`)
console.log(`  Filename: ${fileInfo.filename}`)
console.log(`  Status: ${fileInfo.status}`)
console.log(`  Chunks: ${fileInfo.chunks_count}`)
console.log(`  Created: ${fileInfo.created_at}`)
GET
/v1/blobs/{blob_id}/chunks
Get File Chunks
Retrieve processed chunks from a file
// Get file chunks
const chunks = await client.blobs.chunks('blob_abc123', {
  limit: 10,
  offset: 0
})

console.log('File chunks:')
console.log(`  Total: ${chunks.total}`)
console.log(`  Retrieved: ${chunks.data.length}`)

chunks.data.forEach((chunk, index) => {
  console.log(`  Chunk ${index + 1}: ${chunk.content.substring(0, 100)}...`)
})
DELETE
/v1/blobs/{blob_id}
Delete File
Delete a file and all its chunks
// Delete a file
const deleteResult = await client.blobs.delete('blob_abc123')

console.log('File deletion:')
console.log(`  Deleted: ${deleteResult.deleted}`)
console.log(`  Chunks removed: ${deleteResult.chunks_deleted}`)
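Chunked blobs pair naturally with the vector store. The sketch below feeds chunk contents from the documented chunks endpoint into vector.upsert to make an uploaded file semantically searchable (pagination beyond the first 100 chunks is omitted for brevity):
// Index a processed file's chunks for semantic search, using the
// documented blobs.chunks and vector.upsert calls
const chunkPage = await client.blobs.chunks('blob_abc123', {
  limit: 100,
  offset: 0
})

await client.vector.upsert({
  namespace: 'document-chunks',
  input: chunkPage.data.map(chunk => chunk.content),
  model: 'gemini-embedding-001'
})

console.log(`Indexed ${chunkPage.data.length} chunks for semantic search`)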

Audio Processing

Speech-to-text and text-to-speech capabilities with multiple voice options

POST
/v1/audio/speech
Text-to-Speech
Convert text to natural-sounding speech
// Generate speech from text
const speechResponse = await client.audio.speech.create({
  model: 'tts-1',
  input: 'Welcome to Quipubase, the AI-native database for modern applications.',
  voice: 'alloy',
  response_format: 'mp3'
})

// Save the audio file (Node.js; assumes `import fs from 'node:fs'`)
const audioBuffer = await speechResponse.arrayBuffer()
fs.writeFileSync('speech.mp3', Buffer.from(audioBuffer))
console.log('Speech generated:')
console.log(`  Size: ${audioBuffer.byteLength} bytes`)
console.log('  Format: mp3')
POST
/v1/audio/transcriptions
Speech-to-Text
Transcribe audio files to text
// Transcribe audio to text
const transcription = await client.audio.transcriptions.create({
  file: audioFile,
  model: 'whisper-1',
  language: 'en',
  response_format: 'json'
})

console.log('Transcription completed:')
console.log(`  Text: ${transcription.text}`)
console.log(`  Language: ${transcription.language}`)
console.log(`  Duration: ${transcription.duration}s`)
POST
/v1/audio/translations
Audio Translation
Translate audio from any language to English
// Translate audio to English
const translation = await client.audio.translations.create({
  file: foreignAudioFile,
  model: 'whisper-1',
  response_format: 'json'
})

console.log('Translation completed:')
console.log(`  Original language detected: ${translation.detected_language}`)
console.log(`  English translation: ${translation.text}`)

Image Generation

Create high-quality images from text prompts using state-of-the-art AI models

POST
/v1/images/generations
Generate Images
Create images from text descriptions
// Generate images from text prompt
const imageResponse = await client.images.generate({
  prompt: 'A futuristic database server room with glowing blue data streams and AI neural networks',
  model: 'dall-e-3',
  size: '1024x1024',
  quality: 'hd',
  n: 1
})

console.log('Images generated:')
imageResponse.data.forEach((image, index) => {
  console.log(`  Image ${index + 1}: ${image.url}`)
  console.log(`  Revised prompt: ${image.revised_prompt}`)
})
POST
/v1/images/edits
Edit Images
Edit images using AI with masks and prompts
// Edit an existing image
const editResponse = await client.images.edit({
  image: originalImageFile,
  mask: maskImageFile,
  prompt: 'Add floating holographic data visualizations above the servers',
  model: 'dall-e-2',
  size: '1024x1024',
  n: 1
})

console.log('Image edited:')
console.log(`  Result URL: ${editResponse.data[0].url}`)
POST
/v1/images/variations
Create Variations
Generate variations of an existing image
// Create variations of an image
const variationResponse = await client.images.createVariation({
  image: sourceImageFile,
  model: 'dall-e-2',
  size: '1024x1024',
  n: 3
})

console.log('Image variations created:')
variationResponse.data.forEach((variation, index) => {
  console.log(`  Variation ${index + 1}: ${variation.url}`)
})

Embeddings

Generate vector embeddings for text, images, and other content types

POST
/v1/embeddings
Create Embeddings
Generate vector embeddings for text content
// Generate text embeddings
const embeddingResponse = await client.embeddings.create({
  model: 'text-embedding-3-large',
  input: [
    'Quipubase provides real-time vector search capabilities',
    'AI-native databases enable semantic similarity matching',
    'Modern applications require intelligent data processing'
  ],
  encoding_format: 'float'
})

console.log('Embeddings generated:')
console.log(`  Model: ${embeddingResponse.model}`)
console.log(`  Embeddings count: ${embeddingResponse.data.length}`)
console.log(`  Dimensions: ${embeddingResponse.data[0].embedding.length}`)
console.log(`  Usage: ${JSON.stringify(embeddingResponse.usage)}`)
POST
/v1/embeddings
Batch Embeddings
Generate embeddings for large batches of text
// Generate embeddings in batches
const batchTexts = [
  'Database performance optimization techniques',
  'Real-time data synchronization methods',
  'Vector similarity search algorithms',
  'AI model integration patterns',
  'Scalable architecture design principles'
]

const batchEmbeddings = await client.embeddings.create({
  model: 'text-embedding-3-small',
  input: batchTexts,
  encoding_format: 'float'
})

console.log('Batch embeddings completed:')
console.log(`  Processed: ${batchEmbeddings.data.length} texts`)
console.log(`  Total tokens: ${batchEmbeddings.usage.total_tokens}`)
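Similarity between two returned vectors is plain arithmetic on the float arrays, independent of any SDK call:
// Cosine similarity between two embedding vectors (pure arithmetic)
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

const [first, second] = batchEmbeddings.data
console.log(`Similarity of first two texts: ${cosineSimilarity(first.embedding, second.embedding).toFixed(4)}`)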