Quipubase
A flexible, real-time data API designed from the ground up for AI-native applications. Built on the RocksDB storage engine, Quipubase brings dynamic data modeling, real-time collaboration, vector search, and generative AI together in a single, unified backend.
Everything You Need for AI Applications
Quipubase provides a comprehensive suite of features designed specifically for modern AI-native applications, from real-time data processing to advanced AI capabilities.
Perfect for Modern Use Cases
Built for developers creating AI-native tools that need tight coupling between data and inference
Available SDKs & Resources
Get Started in Minutes
Install the Quipubase SDK and start building AI-native applications with just a few lines of code.
npm install quipubase
import { Quipubase } from 'quipubase'
const client = new Quipubase({
baseURL: 'https://quipubase.oscarbahamonde.com/v1',
apiKey: 'your-api-key'
})
// Create a schema-driven collection
const collection = await client.collections.create({
json_schema: {
title: 'User',
type: 'object',
properties: {
name: { type: 'string' },
email: { type: 'string', format: 'email' },
age: { type: 'integer', minimum: 0 }
},
required: ['name', 'email']
}
})
console.log('Collection created:', collection.id)
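With the collection in place, you can write a first document through the real-time objects endpoint covered later in this reference. A minimal sketch reusing the client and collection from above:
// Insert a document into the new collection (uses the objects pub call shown in the Objects section)
const created = await client.objects.pub({
  collection_id: collection.id,
  event: 'create',
  data: {
    name: 'Ada Lovelace',
    email: 'ada@example.com',
    age: 36
  }
})
console.log('Document stored:', created.data._id)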
Complete API Reference
Comprehensive documentation for all Quipubase endpoints with detailed examples for both TypeScript and Python SDKs. Everything you need to build powerful AI-native applications.
Models
List and retrieve available AI models for chat, embeddings, and other operations
/v1/models
// List all available models
const models = await client.models.list()
models.data.forEach(model => {
console.log(`Model: ${model.id}`)
console.log(`Context Window: ${model.context_window}`)
console.log(`Max Tokens: ${model.max_completion_tokens}`)
console.log(`Type: ${model.type}`)
console.log(`Created: ${new Date(model.created * 1000).toISOString()}`)
console.log('---')
})
/v1/models/{model}
// Get specific model details
const model = await client.models.retrieve('gemini-2.5-flash')
console.log(`Model ID: ${model.id}`)
console.log(`Owner: ${model.owned_by}`)
console.log(`Active: ${model.active}`)
console.log(`Type: ${model.type}`)
console.log(`Context Window: ${model.context_window}`)
console.log(`Max Completion Tokens: ${model.max_completion_tokens}`)
Chat Completions
Generate conversational AI responses with support for streaming and function calling
/v1/chat/completions
// Basic chat completion
const completion = await client.chat.completions.create({
model: 'gemini-2.5-flash',
messages: [
{
role: 'system',
content: 'You are a helpful AI assistant specialized in data analysis.'
},
{
role: 'user',
content: 'Explain the benefits of vector databases for AI applications.'
}
],
max_tokens: 1000,
temperature: 0.7
})
console.log(`Response: ${completion.choices[0].message.content}`)
console.log(`Finish Reason: ${completion.choices[0].finish_reason}`)
console.log(`Usage: ${JSON.stringify(completion.usage)}`)
/v1/chat/completions
// Streaming chat completion
const stream = await client.chat.completions.create({
model: 'gemini-2.5-flash',
messages: [
{
role: 'user',
content: 'Write a short story about AI and databases.'
}
],
stream: true,
max_tokens: 500
})
for await (const chunk of stream) {
const content = chunk.choices[0]?.delta?.content
if (content) {
process.stdout.write(content)
}
}
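Function calling is mentioned above but not shown here; the sketch below assumes Quipubase follows the OpenAI-compatible tools / tool_calls shape, so treat the field names as assumptions and check the SDK types.
// Function calling (sketch; assumes OpenAI-compatible `tools` and `tool_calls` fields)
const toolCompletion = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [
    { role: 'user', content: 'What is the weather in Lima right now?' }
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather', // hypothetical tool name
        description: 'Get current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city']
        }
      }
    }
  ]
})
const toolCall = toolCompletion.choices[0].message.tool_calls?.[0]
if (toolCall) {
  console.log(`Tool requested: ${toolCall.function.name}`)
  console.log(`Arguments: ${toolCall.function.arguments}`)
}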
Collections
Schema-driven document stores with JSON Schema validation and real-time capabilities
/v1/collections
// Create a user collection with schema validation
const collection = await client.collections.create({
json_schema: {
title: 'Users',
type: 'object',
properties: {
name: {
type: 'string',
minLength: 1,
maxLength: 100
},
email: {
type: 'string',
format: 'email'
},
age: {
type: 'integer',
minimum: 0,
maximum: 150
}
},
required: ['name', 'email']
}
})
console.log(`Collection created: ${collection.id}`)
console.log(`Schema SHA: ${collection.sha}`)
/v1/collections
// List all collections
const collections = await client.collections.list()
console.log(`Total collections: ${collections.length}`)
collections.forEach((collection, index) => {
console.log(`Collection ${index + 1}:`)
console.log(` ID: ${collection.id}`)
console.log(` Title: ${collection.json_schema.title}`)
console.log(` Created: ${collection.created_at}`)
console.log('---')
})
/v1/collections/{collection_id}
// Get collection details
const collection = await client.collections.retrieve('col_users123')
console.log(`Collection ID: ${collection.id}`)
console.log(`Title: ${collection.json_schema.title}`)
console.log(`Schema: ${JSON.stringify(collection.json_schema, null, 2)}`)
console.log(`Created: ${collection.created_at}`)
console.log(`Updated: ${collection.updated_at}`)
/v1/collections/{collection_id}
// Delete a collection
const result = await client.collections.delete('col_users123')
console.log(`Collection deleted: ${result.deleted}`)
console.log(`Documents removed: ${result.documents_count}`)
Objects (Real-time)
Real-time document manipulation with pub/sub architecture for live updates
/v1/collections/objects/{collection_id}
// Create a new document
const createResponse = await client.objects.pub({
collection_id: 'col_users123',
event: 'create',
data: {
name: 'Alice Johnson',
email: 'alice@example.com',
age: 28,
preferences: {
theme: 'dark',
notifications: true
}
}
})
console.log('Document created:')
console.log(` ID: ${createResponse.data._id}`)
console.log(` Data: ${JSON.stringify(createResponse.data)}`)
/v1/collections/objects/{collection_id}
// Update a document
const updateResponse = await client.objects.pub({
collection_id: 'col_users123',
event: 'update',
filter: { _id: 'doc_abc123' },
data: {
age: 29,
preferences: {
theme: 'light',
notifications: false
}
}
})
console.log('Document updated:')
console.log(` Modified: ${updateResponse.modified_count}`)
console.log(` Data: ${JSON.stringify(updateResponse.data)}`)
/v1/collections/objects/{collection_id}
// Delete a document
const deleteResponse = await client.objects.pub({
collection_id: 'col_users123',
event: 'delete',
filter: { _id: 'doc_abc123' }
})
console.log('Document deleted:')
console.log(` Deleted count: ${deleteResponse.deleted_count}`)
console.log(` Success: ${deleteResponse.acknowledged}`)
/v1/collections/objects/{collection_id}
// Query documents with filters
const queryResponse = await client.objects.pub({
collection_id: 'col_users123',
event: 'query',
filter: {
age: { $gte: 25 },
'preferences.notifications': true
},
limit: 10,
sort: { created_at: -1 }
})
console.log('Query results:')
console.log(` Found: ${queryResponse.data.length} documents`)
queryResponse.data.forEach((doc, index) => {
console.log(` Document ${index + 1}: ${doc.name}`)
})
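The calls above publish events; the subscribe side of the pub/sub model is sketched below, assuming a corresponding client.objects.sub method that streams change events for a collection. The method name and event shape are assumptions, so check the SDK for the actual subscription API.
// Subscribe to live changes (sketch; `objects.sub` and the event shape are assumptions)
const subscription = await client.objects.sub({
  collection_id: 'col_users123'
})
for await (const event of subscription) {
  console.log(`Event: ${event.event}`) // e.g. 'create', 'update', 'delete'
  console.log(`Document: ${JSON.stringify(event.data)}`)
}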
Vector Store
Native vector embeddings with semantic search and similarity matching
/v1/vector/{namespace}
// Upsert text embeddings
const response = await client.vector.upsert({
namespace: 'knowledge-base',
input: [
'Quipubase is a powerful real-time database for AI applications',
'Vector search enables semantic similarity matching across documents',
'Real-time subscriptions power reactive AI systems'
],
model: 'gemini-embedding-001'
})
console.log('Vector upsert completed:')
console.log(` Stored: ${response.count} embeddings`)
console.log(` Processing time: ${response.ellapsed}s`)
/v1/vector/{namespace}
// Semantic search query
const results = await client.vector.query({
namespace: 'knowledge-base',
input: 'database performance optimization',
top_k: 5,
model: 'gemini-embedding-001'
})
console.log('Semantic search results:')
console.log(` Found: ${results.count} matches`)
results.data.forEach((match, index) => {
console.log(` Result ${index + 1}:`)
console.log(` Score: ${match.score.toFixed(4)}`)
console.log(` Content: ${match.content}`)
})
/v1/vector/{namespace}
// Delete vectors by IDs
const deleteResult = await client.vector.delete({
namespace: 'knowledge-base',
ids: ['vec_123', 'vec_456', 'vec_789']
})
console.log('Vector deletion completed:')
console.log(` Deleted: ${deleteResult.deleted_count} vectors`)
console.log(` Success: ${deleteResult.acknowledged}`)
/v1/vector/{namespace}/stats
// Get vector namespace statistics
const stats = await client.vector.stats('knowledge-base')
console.log('Vector namespace statistics:')
console.log(` Total vectors: ${stats.total_vectors}`)
console.log(` Dimensions: ${stats.dimensions}`)
console.log(` Index size: ${stats.index_size_mb} MB`)
console.log(` Last updated: ${stats.last_updated}`)
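Vector search pairs naturally with chat completions for retrieval-augmented generation. A minimal sketch using only the calls shown in this reference (the prompt format is illustrative):
// Retrieval-augmented chat: ground the answer in semantic search results
const question = 'How does Quipubase handle real-time updates?'
const matches = await client.vector.query({
  namespace: 'knowledge-base',
  input: question,
  top_k: 3,
  model: 'gemini-embedding-001'
})
const context = matches.data.map(match => match.content).join('\n')
const answer = await client.chat.completions.create({
  model: 'gemini-2.5-flash',
  messages: [
    { role: 'system', content: `Answer using only this context:\n${context}` },
    { role: 'user', content: question }
  ]
})
console.log(answer.choices[0].message.content)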
Live Query Engine
SQL queries over structured and unstructured data with real-time capabilities
/v1/query/live
// Create dataset from CSV file
const csvDataset = await client.query.create({
engine: 'file',
uri: 'https://example.com/sales-data.csv',
query: 'SELECT * FROM root WHERE revenue > 1000 ORDER BY date DESC',
key: 'sales-analytics',
namespace: 'business',
bucket: 'data-warehouse'
})
console.log('CSV dataset created:')
console.log(` Key: ${csvDataset.key}`)
console.log(` Records: ${csvDataset.data.length}`)
console.log(` Engine: ${csvDataset.engine}`)
/v1/query/live
// Query existing dataset
const results = await client.query.retrieve({
key: 'sales-analytics',
query: "SELECT DATE_TRUNC('month', date) AS month, SUM(revenue) AS total FROM root GROUP BY month",
namespace: 'business',
bucket: 'data-warehouse'
})
console.log('Query results:')
console.log(`Total records: ${results.data.length}`)
results.data.forEach(row => {
console.log(`Month: ${row.month}, Revenue: $${row.total}`)
})
/v1/query/live
// Create dataset from MongoDB
const mongoDataset = await client.query.create({
engine: 'mongodb',
uri: 'mongodb://localhost:27017/mydb',
query: "SELECT * FROM users WHERE status = 'active' LIMIT 100",
key: 'active-users',
namespace: 'analytics',
bucket: 'user-data'
})
console.log('MongoDB dataset created:')
console.log(` Key: ${mongoDataset.key}`)
console.log(` Records: ${mongoDataset.data.length}`)
console.log(` Connection: ${mongoDataset.engine}`)
/v1/query/live
// Create dataset from PostgreSQL
const pgDataset = await client.query.create({
engine: 'postgresql',
uri: 'postgresql://user:pass@localhost:5432/mydb',
query: "SELECT id, name, email, created_at FROM customers WHERE created_at > NOW() - INTERVAL '30 days'",
key: 'recent-customers',
namespace: 'crm',
bucket: 'customer-data'
})
console.log('PostgreSQL dataset created:')
console.log(` Key: ${pgDataset.key}`)
console.log(` Records: ${pgDataset.data.length}`)
console.log(` Engine: ${pgDataset.engine}`)
/v1/query/live/{key}
// Delete a dataset
const deleteResult = await client.query.delete({
key: 'sales-analytics',
namespace: 'business',
bucket: 'data-warehouse'
})
console.log('Dataset deletion:')
console.log(` Deleted: ${deleteResult.deleted}`)
console.log(` Key: ${deleteResult.key}`)
Blob Storage
Intelligent file processing with automatic chunking and transformation capabilities
/v1/blobs
// Upload and process a file
const fileUpload = await client.blobs.upload({
file: fileBuffer,
filename: 'document.pdf',
chunk_size: 1000,
overlap: 200,
metadata: {
category: 'documentation',
author: 'John Doe'
}
})
console.log('File uploaded:')
console.log(` ID: ${fileUpload.id}`)
console.log(` Chunks: ${fileUpload.chunks_count}`)
console.log(` Size: ${fileUpload.size_bytes} bytes`)
/v1/blobs/{blob_id}
// Get file information
const fileInfo = await client.blobs.retrieve('blob_abc123')
console.log('File information:')
console.log(` ID: ${fileInfo.id}`)
console.log(` Filename: ${fileInfo.filename}`)
console.log(` Status: ${fileInfo.status}`)
console.log(` Chunks: ${fileInfo.chunks_count}`)
console.log(` Created: ${fileInfo.created_at}`)
/v1/blobs/{blob_id}/chunks
// Get file chunks
const chunks = await client.blobs.chunks('blob_abc123', {
limit: 10,
offset: 0
})
console.log('File chunks:')
console.log(` Total: ${chunks.total}`)
console.log(` Retrieved: ${chunks.data.length}`)
chunks.data.forEach((chunk, index) => {
console.log(` Chunk ${index + 1}: ${chunk.content.substring(0, 100)}...`)
})
/v1/blobs/{blob_id}
// Delete a file
const deleteResult = await client.blobs.delete('blob_abc123')
console.log('File deletion:')
console.log(` Deleted: ${deleteResult.deleted}`)
console.log(` Chunks removed: ${deleteResult.chunks_deleted}`)
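Blob chunks can be pushed straight into the vector store to make uploaded files searchable. A minimal sketch combining the chunks and upsert calls shown elsewhere in this reference (the namespace name is illustrative):
// Index uploaded file chunks for semantic search
const fileChunks = await client.blobs.chunks('blob_abc123', { limit: 100, offset: 0 })
await client.vector.upsert({
  namespace: 'documentation',
  input: fileChunks.data.map(chunk => chunk.content),
  model: 'gemini-embedding-001'
})
console.log(`Indexed ${fileChunks.data.length} chunks`)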
Audio Processing
Speech-to-text and text-to-speech capabilities with multiple voice options
/v1/audio/speech
// Generate speech from text
const speechResponse = await client.audio.speech.create({
model: 'tts-1',
input: 'Welcome to Quipubase, the AI-native database for modern applications.',
voice: 'alloy',
response_format: 'mp3'
})
// Save the audio file (Node: write the returned bytes to disk)
const { writeFile } = await import('node:fs/promises')
const audioBuffer = await speechResponse.arrayBuffer()
await writeFile('speech.mp3', Buffer.from(audioBuffer))
console.log('Speech generated:')
console.log(` Size: ${audioBuffer.byteLength} bytes`)
console.log(' Format: mp3')
/v1/audio/transcriptions
// Transcribe audio to text
const transcription = await client.audio.transcriptions.create({
file: audioFile,
model: 'whisper-1',
language: 'en',
response_format: 'json'
})
console.log('Transcription completed:')
console.log(` Text: ${transcription.text}`)
console.log(` Language: ${transcription.language}`)
console.log(` Duration: ${transcription.duration}s`)
/v1/audio/translations
// Translate audio to English
const translation = await client.audio.translations.create({
file: foreignAudioFile,
model: 'whisper-1',
response_format: 'json'
})
console.log('Translation completed:')
console.log(` Original language detected: ${translation.detected_language}`)
console.log(` English translation: ${translation.text}`)
Image Generation
Create high-quality images from text prompts using state-of-the-art AI models
/v1/images/generations
// Generate images from text prompt
const imageResponse = await client.images.generate({
prompt: 'A futuristic database server room with glowing blue data streams and AI neural networks',
model: 'dall-e-3',
size: '1024x1024',
quality: 'hd',
n: 1
})
console.log('Images generated:')
imageResponse.data.forEach((image, index) => {
console.log(` Image ${index + 1}: ${image.url}`)
console.log(` Revised prompt: ${image.revised_prompt}`)
})
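The response contains hosted URLs, so persisting a result is a plain HTTP download. A minimal Node sketch (the file name is illustrative):
// Download the first generated image to disk
const { writeFile } = await import('node:fs/promises')
const res = await fetch(imageResponse.data[0].url)
await writeFile('generated.png', Buffer.from(await res.arrayBuffer()))
console.log('Saved generated.png')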
/v1/images/edits
// Edit an existing image
const editResponse = await client.images.edit({
image: originalImageFile,
mask: maskImageFile,
prompt: 'Add floating holographic data visualizations above the servers',
model: 'dall-e-2',
size: '1024x1024',
n: 1
})
console.log('Image edited:')
console.log(` Result URL: ${editResponse.data[0].url}`)
/v1/images/variations
// Create variations of an image
const variationResponse = await client.images.createVariation({
image: sourceImageFile,
model: 'dall-e-2',
size: '1024x1024',
n: 3
})
console.log('Image variations created:')
variationResponse.data.forEach((variation, index) => {
console.log(` Variation ${index + 1}: ${variation.url}`)
})
Embeddings
Generate vector embeddings for text, images, and other content types
/v1/embeddings
// Generate text embeddings
const embeddingResponse = await client.embeddings.create({
model: 'text-embedding-3-large',
input: [
'Quipubase provides real-time vector search capabilities',
'AI-native databases enable semantic similarity matching',
'Modern applications require intelligent data processing'
],
encoding_format: 'float'
})
console.log('Embeddings generated:')
console.log(` Model: ${embeddingResponse.model}`)
console.log(` Embeddings count: ${embeddingResponse.data.length}`)
console.log(` Dimensions: ${embeddingResponse.data[0].embedding.length}`)
console.log(` Usage: ${JSON.stringify(embeddingResponse.usage)}`)
/v1/embeddings
// Generate embeddings in batches
const batchTexts = [
'Database performance optimization techniques',
'Real-time data synchronization methods',
'Vector similarity search algorithms',
'AI model integration patterns',
'Scalable architecture design principles'
]
const batchEmbeddings = await client.embeddings.create({
model: 'text-embedding-3-small',
input: batchTexts,
encoding_format: 'float'
})
console.log('Batch embeddings completed:')
console.log(` Processed: ${batchEmbeddings.data.length} texts`)
console.log(` Total tokens: ${batchEmbeddings.usage.total_tokens}`)
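Returned embeddings are plain float arrays, so similarity can also be computed client-side when the managed vector store is not needed. A minimal cosine-similarity sketch over the batch above:
// Cosine similarity between two embedding vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

const [first, second] = batchEmbeddings.data
console.log(`Similarity: ${cosineSimilarity(first.embedding, second.embedding).toFixed(4)}`)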