Research & Development

Solving Real Problems with Advanced AI

Organizations face escalating AI costs, vendor lock-in, and deployment complexity. Dominion Labs delivers production-ready AI solutions that reduce operational costs by 70-90% while maintaining enterprise-grade performance and security.

Enterprise AI spending reached $154 billion in 2023, yet 85% of AI projects fail to reach production. Organizations struggle with:

  • Unsustainable API Costs - Cloud-based LLM APIs cost $0.002-0.06 per 1K tokens, translating to $200K-$3M annually for mid-sized deployments
  • Vendor Lock-In - Dependence on external providers creates strategic risk and eliminates cost optimization opportunities
  • Data Privacy Concerns - Sending sensitive business data to third-party APIs violates compliance requirements and exposes intellectual property
  • Limited Customization - Generic models lack industry-specific knowledge and cannot be fine-tuned to organizational needs
  • Performance Bottlenecks - Network latency and API rate limits create unacceptable response times for real-time applications

Traditional enterprise AI solutions force a choice between cost, performance, and control. Dominion Labs eliminates this tradeoff.

Healthcare: Clinical Documentation Automation

Healthcare organizations waste 35% of clinician time on administrative documentation. Our OE1 healthcare module delivers:

  • Automated Clinical Documentation - Generate SOAP notes, HPI narratives, and discharge summaries from voice or text input
  • ICD-10 Coding Assistance - Suggest accurate diagnostic codes with symptom-to-code mapping
  • Triage Automation - Emergency severity classification with clinical justification
  • HIPAA Compliance - On-premise deployment ensures patient data never leaves facility networks
  • EHR Integration - Direct integration with Epic, Cerner, and other major systems

Impact: Reduces documentation time by 40%, allowing physicians to see 3-5 additional patients daily. For a 50-physician practice, this represents $2-3M in additional annual revenue.

Enterprise Knowledge Management

Organizations struggle with knowledge fragmented across documents, wikis, and tribal knowledge. Obsidian-32B-Instruct delivers:

  • Intelligent Document Search - Semantic search across all company documents, finding answers not just keywords
  • Automated Customer Support - Level 1 support automation resolving 60-70% of tickets without human intervention
  • Employee Onboarding Acceleration - New hire AI assistant providing instant answers to company policies, procedures, and systems
  • Legal & Compliance Review - Contract analysis, regulatory compliance checking, and risk assessment
  • Technical Support Augmentation - Code review, debugging assistance, and technical documentation generation

Impact: 50% reduction in support ticket resolution time, 70% decrease in escalations, and 85% improvement in employee onboarding efficiency.

Education: Personalized Learning at Scale

Educational institutions face teacher shortages and limited resources for personalized instruction. The TorinEd family provides:

  • 24/7 Learning Support - Students receive instant assistance on homework, concepts, and test preparation
  • Adaptive Difficulty Scaling - Content automatically adjusts to student comprehension level
  • Socratic Teaching Method - Guides students to solutions rather than simply providing answers
  • Multi-Subject Mastery - Coverage across STEM, humanities, and standardized test preparation
  • Parental Oversight - Progress tracking and learning analytics for parents and educators

Impact: Schools report 25-35% improvement in student test scores and 60% reduction in tutoring costs. On-device deployment ensures student privacy and eliminates internet connectivity requirements.

Finance: Compliance & Risk Management

Financial institutions face escalating regulatory requirements and fraud detection challenges. Our solutions deliver:

  • Automated Compliance Review - Analyze transactions and communications for regulatory violations
  • Fraud Detection - Real-time anomaly detection across transaction patterns and customer behavior
  • Document Analysis - Extract and verify information from financial statements, contracts, and disclosures
  • Risk Assessment - Credit risk scoring, portfolio analysis, and exposure calculation
  • Regulatory Reporting - Automated generation of compliance reports for SEC, FINRA, and other regulators

Impact: 80% reduction in compliance review time, 45% improvement in fraud detection accuracy, and substantially reduced regulatory fine risk through comprehensive monitoring.

Security: Threat Intelligence & Incident Response

Security teams drown in alerts while sophisticated threats go undetected. Our security-focused models provide:

  • Log Analysis & Correlation - Analyze millions of security events to identify genuine threats
  • Incident Response Recommendations - Suggest remediation actions based on attack patterns and organizational context
  • Threat Intelligence Synthesis - Aggregate and analyze threat feeds to identify relevant risks
  • Vulnerability Assessment - Code review and infrastructure analysis for security weaknesses
  • Security Training - Simulate phishing attacks and provide tailored security awareness training

Impact: 65% reduction in mean-time-to-detect (MTTD), 50% decrease in false positives, and 40% faster incident response. On-premise deployment protects security intelligence from external exposure.

Deployment Flexibility

Unlike cloud-only solutions, Dominion Labs supports deployment wherever your data and security requirements demand:

  • On-Premise - Full control and data sovereignty for regulated industries
  • Edge Devices - Mobile, tablet, and IoT deployment for disconnected environments
  • Private Cloud - Dedicated infrastructure with cloud scalability
  • Hybrid - Sensitive operations on-premise with non-critical workloads in cloud
  • Air-Gapped - Complete network isolation for defense and critical infrastructure

All deployment models receive identical capabilities, ensuring consistent performance regardless of infrastructure choices.

Obsidian-32B-Instruct [Production]

A 32-billion parameter language model designed for general-purpose text generation and reasoning tasks. Obsidian-32B-Instruct serves as the reasoning engine that can be plugged into platforms like TorinAI, delivering multi-step reasoning, technical problem-solving, and contextual understanding across various domains.

Obsidian-32B-Instruct's 32B parameter architecture supports chain-of-thought reasoning and multi-step inference. The model can break down problems into steps, maintain context across conversations, and combine information from different sources.

  • Multi-Step Processing - Breaks tasks into sequential reasoning steps
  • Context Maintenance - Tracks reasoning across extended conversations with 128K token context window
  • Contextual Analysis - Processes nuanced relationships in text
  • Analytical Capability - Supports critical thinking and evaluation tasks
  • Dense Transformer Architecture - Optimized parameter distribution for reasoning efficiency

The 32B parameter model handles mathematical reasoning, scientific problem-solving, and technical analysis. Obsidian-32B-Instruct was trained on 36 trillion tokens, including extensive mathematical, scientific, and technical content.

  • Mathematics - Calculus, linear algebra, differential equations, and advanced mathematical reasoning
  • Scientific Content - Physics, chemistry, computational science concepts, and research methodology
  • Formal Logic - Symbolic reasoning, logical analysis, and proof construction
  • Algorithm Understanding - Code analysis, algorithmic concepts, and optimization strategies
  • Technical Documentation - API references, system architecture, and comprehensive technical writing

Native support for 119 languages with consistent performance across diverse linguistic contexts. The model maintains high accuracy across major language families including Indo-European, Sino-Tibetan, Afro-Asiatic, and more.

  • Language Coverage - 119 languages spanning global regions and language families
  • Cross-Lingual Transfer - Knowledge transfers effectively across related languages
  • Cultural Context - Understanding of cultural nuances and region-specific contexts
  • Translation Quality - High-quality bidirectional translation capabilities
  • Code-Switching - Handles mixed-language inputs naturally

Technical Specifications

  • 32.8 Billion Parameters - Large language model architecture optimized for reasoning
  • 128,000 Token Context Window - Extended conversation history and document processing
  • Training Data - 36 trillion tokens across diverse domains and languages
  • Few-Shot Learning - Adapts to new formats with examples in the prompt
  • Domain Flexibility - Trained on code, mathematics, science, and general knowledge
  • Instruction Following - Processes multi-step instructions and complex task descriptions
  • Reasoning Output - Can show step-by-step reasoning when prompted
  • API Compatibility - Standard OpenAI-compatible API format

Status: Production-ready with multiple deployment options.

Obsidian-32B-Instruct is deployed across multiple platforms including TorinChat, DreamDev, and the Dominion Labs API. The model can be integrated as a pluggable reasoning engine into various AI platforms and applications.

  • TorinAI Platform Integration - Primary reasoning engine for TorinChat and TorinAI services
  • API Deployment - Available via the Dominion Labs API, priced at $5 per million input tokens and $15 per million output tokens
  • Cloudflare Workers AI - Edge deployment for low latency global access
  • DreamDev Integration - Available in cloud IDE for development assistance
  • Streaming Support - Real-time token streaming for responsive user experiences
  • Function Calling - Native tool use and structured output capabilities
  • Enterprise SLA - 99.9% uptime guarantee for production workloads
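Because the API uses the OpenAI-compatible format, existing client code can target Obsidian-32B-Instruct with minimal changes. Below is a minimal sketch of a chat completion payload; the endpoint URL and model identifier are hypothetical placeholders, so check the Dominion Labs API documentation for the actual values.

```python
import json

# Hypothetical endpoint and model identifier for illustration only;
# consult the Dominion Labs API docs for the real values.
API_URL = "https://api.example.com/v1/chat/completions"

def build_chat_request(prompt, stream=False):
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": "obsidian-32b-instruct",  # assumed model name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "stream": stream,  # enable token streaming for responsive UIs
    }

payload = build_chat_request("Summarize our Q3 incident report.", stream=True)
print(json.dumps(payload, indent=2))
```

Setting `"stream": True` requests real-time token streaming, matching the streaming support listed above.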

Model Training & Optimization [Validated]

Status: Completed training run with documented performance metrics.

We fine-tuned the Obsidian-14B-Instruct model using LoRA (Low-Rank Adaptation) for parameter efficiency. A 500-iteration training run on specialized datasets showed consistent loss reduction, indicating successful knowledge acquisition.

Training Metrics

  • Training Loss Reduction - 3.415 → 0.168 (95% reduction over 500 iterations)
  • Validation Loss Reduction - 3.727 → 0.201 (94.6% reduction with strong generalization)
  • Parameter Efficiency - 11.469M trainable parameters (0.078% of 14.77B total model)
  • Processing Speed - 77-90 tokens/second sustained throughout training
  • Memory Usage - 18.092-18.474 GB peak memory usage with stable performance
  • Learning Rate - Consistent 1.0e-05 with stable convergence
  • Training Dataset - 26,198 examples across 4 specialized domains (90/10 train/validation split)

The training run showed improvement on both training and validation sets with low overfitting, suggesting the model learned from the provided examples. The parameter-efficient LoRA approach allowed fine-tuning with modest computational resources.
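The parameter efficiency above follows directly from LoRA's construction: instead of updating a full d × k weight matrix, only two low-rank factors B (d × r) and A (r × k) are trained. A quick arithmetic sketch (the layer dimensions are illustrative, not the actual Obsidian-14B-Instruct shapes):

```python
def lora_trainable_params(d, k, r):
    """Trainable parameters for one LoRA adapter pair: B (d x r) plus
    A (r x k), versus d * k for a full-rank weight update."""
    return d * r + r * k

# Toy layer: a 4096 x 4096 projection adapted with rank r = 8.
full = 4096 * 4096
lora = lora_trainable_params(4096, 4096, 8)
print(f"full update: {full:,} params, LoRA: {lora:,} params "
      f"({100 * lora / full:.3f}% of full)")

# Sanity check against the run above: 11.469M trainable of 14.77B total.
fraction = 11.469e6 / 14.77e9
print(f"reported run: {100 * fraction:.3f}% of model parameters trained")
```

The second print reproduces the 0.078% figure reported in the metrics, which is why the run fit in roughly 18 GB of memory.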

TorinAI Research & Development

TorinAI is our modular AI platform, providing the infrastructure, tools, and orchestration layer for deploying AI capabilities. It allows different language models (like Obsidian-32B-Instruct) to be plugged in as the underlying reasoning engine. Research explores system capabilities across quantum computing integration, autonomous reasoning, multi-modal processing, and continuous learning.

Quantum Computing Integration [Experimental]

Status: Early-stage research exploring quantum computing applications in AI systems.

We are investigating quantum computing integration through IBM Quantum's cloud platform, exploring whether quantum processing could benefit specific AI workloads. This research is experimental and not yet production-ready.

  • IBM Quantum Access - Cloud access to IBM Quantum Experience for basic quantum circuit testing
  • Basic Quantum Circuits - Simple operations including Bell state creation tested on simulators and limited hardware runs
  • Performance Analysis - Preliminary investigation into potential use cases for quantum-enhanced AI tasks
  • Safety Mechanisms - Error handling and fallback systems for experimental quantum operations
  • Research Integration - Exploratory connections to infrastructure and learning subsystems for testing

This early-stage research contributes to understanding how quantum hardware might eventually benefit AI systems as the technology matures.
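The Bell state mentioned above is the canonical two-qubit entanglement demonstration: apply a Hadamard gate to one qubit, then a CNOT with that qubit as control. As a sketch of what those simulator runs compute, here is a pure-Python statevector version (no quantum SDK required):

```python
import math

# Two-qubit statevector over basis |00>, |01>, |10>, |11>
# (leftmost bit = control qubit). Start in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def hadamard_on_first(s):
    """Apply H to the first qubit:
    |0b> -> (|0b> + |1b>)/sqrt(2), |1b> -> (|0b> - |1b>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [
        h * (s[0] + s[2]),  # |00>
        h * (s[1] + s[3]),  # |01>
        h * (s[0] - s[2]),  # |10>
        h * (s[1] - s[3]),  # |11>
    ]

def cnot(s):
    """CNOT with the first qubit as control: swap |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot(hadamard_on_first(state))
# Bell state (|00> + |11>)/sqrt(2): equal probability of 00 and 11.
print([round(a, 4) for a in bell])  # → [0.7071, 0.0, 0.0, 0.7071]
```

Comparing measured hardware distributions against these ideal amplitudes is a standard first check of a quantum backend.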

Autonomous Operations [In Development]

Status: Active development of automated system monitoring and task scheduling.

We are developing continuous operation capabilities that allow the system to run scheduled tasks and automated monitoring cycles. This includes self-assessment routines and periodic system evaluations.

  • Scheduled Operations - Configurable background tasks for system monitoring and maintenance
  • Task Prioritization - Framework for balancing different types of automated tasks and resource allocation
  • Automated Tasks - System-initiated operations including document indexing and knowledge base updates
  • Evaluation Cycles - Periodic self-assessment and performance monitoring with logging
  • Resource Management - Optimization strategies for managing computational resources across concurrent tasks

These automation features are designed to reduce manual intervention while maintaining system stability through monitoring and safeguards.

Hierarchical Memory Architecture [In Development]

Status: Developing multi-tiered memory storage and retrieval system.

We are implementing a layered memory architecture for storing and retrieving information across different contexts and time scales. The system includes various storage tiers optimized for different access patterns.

  • Tiered Storage - Multiple storage layers including short-term buffers and long-term databases
  • Information Synthesis - Methods for combining related information using vector similarity and metadata matching
  • Pattern Detection - Algorithms for identifying relationships in stored data using embeddings and temporal analysis
  • Iterative Refinement - Ongoing optimization of storage strategies based on usage patterns
  • Retrieval Optimization - Database indexing and caching strategies for improved query performance

The memory system is being tested and refined to improve information storage efficiency and retrieval accuracy.
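As a rough illustration of the tiered idea, here is a minimal sketch of a two-tier store in which a bounded short-term buffer spills its oldest entries into long-term storage before eviction. Class and method names are illustrative, not the production implementation, which adds vector indexes and metadata filtering:

```python
from collections import deque

class TieredMemory:
    """Minimal two-tier memory sketch: a bounded short-term buffer
    that promotes its oldest entries to a long-term store."""

    def __init__(self, short_term_size=3):
        self.short_term = deque(maxlen=short_term_size)
        self.long_term = []

    def store(self, item):
        if len(self.short_term) == self.short_term.maxlen:
            # Promote the oldest short-term entry before it is evicted.
            self.long_term.append(self.short_term[0])
        self.short_term.append(item)

    def retrieve(self, keyword):
        """Search recent context first, then fall back to long-term storage."""
        hits = [m for m in self.short_term if keyword in m]
        return hits or [m for m in self.long_term if keyword in m]

mem = TieredMemory()
for note in ["deploy v2 to staging", "rotate API keys",
             "index new docs", "review logs"]:
    mem.store(note)
print(mem.retrieve("deploy"))  # found in long-term after buffer eviction
```

Searching the small recent tier before the large archive is what makes the different access patterns mentioned above worth separating.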

Multi-Agent Collaboration [In Development]

Status: Developing framework for coordinating multiple AI agents on complex tasks.

We are building a system where multiple AI agents can work together on problems by taking different perspectives and combining their outputs. This approach explores whether multi-agent reasoning can improve solution quality.

  • Agent Specialization - Different agents configured with distinct system prompts for varied perspectives
  • Structured Interaction - Agents process the same input and generate independent responses that are then combined
  • Output Synthesis - Aggregation methods for combining multiple agent responses into final answers
  • Response Comparison - Analysis of differences between agent outputs to identify diverse viewpoints
  • Shared Context - Common memory and conversation history accessible to all agents

Early testing explores using this approach for technical decision-making where multiple perspectives may be valuable.
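A minimal sketch of the pattern: several agents configured with distinct system prompts process the same input independently, and their outputs are combined into one response. The agent functions below are stand-ins for model calls; in the real system each would be an LLM invocation:

```python
def make_agent(system_prompt):
    """Stand-in for a model call: each agent tags the task with its
    own perspective. Prompts here are illustrative."""
    def agent(task):
        return f"[{system_prompt}] {task}"
    return agent

agents = [
    make_agent("security reviewer"),
    make_agent("performance reviewer"),
    make_agent("maintainability reviewer"),
]

def synthesize(task, agents):
    """Run every agent on the same input and combine the
    independent responses into a single aggregated answer."""
    responses = [agent(task) for agent in agents]
    return "\n".join(responses)

print(synthesize("Evaluate migrating the auth service to gRPC.", agents))
```

A production synthesizer would typically rank or reconcile the responses rather than concatenate them, but the fan-out/fan-in shape is the same.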

Multi-Modal Processing [Production]

Status: Production deployment with ongoing refinement.

The system integrates text, vision, voice, and speech capabilities through separate specialized models that can share context through a common memory layer.

  • Multiple Modalities - Text (Obsidian-14B-Instruct), Vision (Lumen3-VL 8B), Voice (Fish-Speech TTS), Speech (Whisper STT)
  • Context Passing - Vision analysis results stored in memory can be referenced in subsequent text interactions
  • Cross-Modal Reference - Text model can access and discuss previously analyzed images through shared memory
  • Shared Memory Layer - Database-backed memory system accessible across different modality endpoints
  • Modality Switching - Users can switch between text, voice, and vision inputs while maintaining conversation context

These modalities have been deployed and tested with continued improvements to integration quality and performance.

Core Infrastructure [Production]

Status: Core systems deployed and actively maintained.

The platform's foundational components are operational and undergo regular testing and updates to maintain reliability and performance.

  • Vision Processing - Vision-language model for image analysis using Lumen3-VL
  • Vector Embeddings - 384-dimensional sentence embeddings for semantic search and retrieval
  • Model Fine-Tuning - LoRA-based parameter-efficient fine-tuning pipeline for domain adaptation
  • Performance Monitoring - Logging and metrics collection for system health and optimization
  • API Infrastructure - RESTful endpoints with FastAPI supporting concurrent request handling
  • Security Updates - Ongoing security assessments and patches for authentication and API security

These core components support production workloads with regular maintenance and incremental improvements.
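Semantic search over sentence embeddings of the kind listed above typically ranks documents by cosine similarity to the query vector. A toy sketch using 4-dimensional vectors in place of the 384-dimensional production embeddings (the vectors are made up for illustration, not real model outputs):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy document "embeddings" (production uses 384 dimensions).
docs = {
    "vpn setup guide": [0.9, 0.1, 0.0, 0.2],
    "expense policy": [0.1, 0.8, 0.3, 0.0],
    "network troubleshooting": [0.5, 0.0, 0.5, 0.3],
}
query = [0.85, 0.05, 0.05, 0.25]  # e.g. "how do I connect to the VPN?"

# Rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # → vpn setup guide
```

This is why the search finds answers rather than keywords: the query and document are compared in embedding space, not by string overlap.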

Capability Self-Assessment [Internal]

Status: Internal framework for tracking system development progress.

We use a self-assessment framework to evaluate progress across 12 technical domains. This internal tool helps identify which areas are mature and which require additional development work.

Assessment Criteria

Each domain receives a subjective rating across five categories:

  • Basic Functionality (80%) - Core features implemented and operational
  • Advanced Features (60%) - Additional capabilities deployed beyond basic functionality
  • Optimization Level (50%) - Performance tuning and efficiency work in progress
  • Integration Quality (70%) - How well components work together across the system
  • Innovation Potential (40%) - Room identified for future development and research

Evaluated Domains

  • Reasoning - Logical inference, causal analysis, and multi-step problem solving
  • Learning - Continuous learning pipeline with multiple approaches and automated knowledge acquisition
  • Memory - Hierarchical memory architecture with pattern detection and efficient retrieval
  • Perception - Multi-modal input processing across text, vision, and voice
  • Communication - Natural language generation with context-aware conversation management
  • Creativity - Novel solution generation and creative synthesis across modalities
  • Problem Solving - Complex task decomposition with multi-agent collaboration
  • Meta-Cognition - Performance monitoring, system introspection, and automated assessment
  • Integration - Cross-system coherence and unified architecture
  • Prediction - Future state modeling and uncertainty quantification
  • Optimization - Resource allocation, efficiency improvements, and performance tuning
  • Abstraction - Conceptual hierarchy formation and high-level reasoning

This self-assessment indicates basic functionality is in place across domains, with varying levels of optimization and integration. The scores reflect our internal view of development progress and areas needing additional work.

OE1 [Production]

A compact edge model designed for local deployment on resource-constrained hardware. OE1 enables on-device inference for industry-specific applications including healthcare, security, finance, education, and business use cases. Optimized for privacy-focused deployments.

OE1 is specifically engineered for edge deployment, delivering enterprise-grade AI performance on local hardware without requiring cloud connectivity. The model's compact architecture (3.1GB) enables fast inference times while maintaining high accuracy across specialized domains.

  • Compact Model Size - 3.1GB optimized for local deployment
  • Fast Inference - Sub-2-second response times on modern hardware
  • Low Memory Footprint - ~4GB RAM requirement for operation
  • Privacy-First Design - All processing occurs on-device, no data leaves premises
  • Offline Operation - Full functionality without internet connectivity

OE1 is transferable across industries through specialized fine-tuning, with domain-specific variants optimized for healthcare, security, finance, education, and general business applications. Each variant maintains the core edge-optimized architecture while incorporating industry-specific knowledge and compliance requirements.

Healthcare

Clinical workflow automation, patient intake, medical documentation, symptom assessment, and HIPAA-compliant record management. Trained on medical terminology, clinical protocols, and healthcare-specific communication patterns.

Security

Threat intelligence analysis, security log interpretation, incident response recommendations, and vulnerability assessment. Specialized in cybersecurity terminology, attack pattern recognition, and security best practices.

Finance

Financial analysis, risk assessment, regulatory compliance support, and transaction processing assistance. Fine-tuned on financial regulations, accounting principles, and industry-specific reporting requirements.

Education

Instructional support, student assessment, curriculum assistance, and learning analytics. Optimized for educational content across grade levels with age-appropriate communication and pedagogical best practices.

General

Versatile business applications including customer service, document processing, workflow automation, and general knowledge tasks. Broad-domain training for multi-purpose enterprise deployment.

  • Cross-Platform Support - Windows, macOS, Linux deployment options
  • API Integration - RESTful API for seamless system integration
  • Containerized Deployment - Docker support for simplified installation
  • Hardware Flexibility - Optimized for both CPU and GPU acceleration
  • Compliance Ready - Industry-specific regulatory compliance built-in
  • Customizable Fine-Tuning - Organization-specific adaptation capabilities
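Integration against a locally deployed OE1 instance can be sketched as a plain HTTP request. The endpoint path and JSON schema below are assumptions for illustration; consult the deployment documentation for the actual API:

```python
import json
import urllib.request

# Hypothetical local endpoint; OE1's actual route and schema may differ.
OE1_URL = "http://localhost:8080/v1/generate"

def build_request(prompt, max_tokens=256):
    """Construct (but do not send) a request to a local OE1 instance.
    All inference stays on-device; no data leaves the host."""
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()
    return urllib.request.Request(
        OE1_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize today's intake notes.")
print(req.full_url, req.get_method())
```

Because the host is `localhost`, the request never crosses the network boundary, which is the mechanism behind the privacy-first and offline-operation claims above.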

Healthcare Performance Testing [Validated]

Status: Benchmark testing completed on medical documentation tasks.

We tested the OE1 healthcare configuration across 8 common medical documentation task types to measure response quality and generation speed. These benchmarks assess the system's ability to produce medically relevant text for clinical workflows.

Clinical Task Performance

  • SOAP Note Generation - Structured clinical notes with 600-token depth (23.5s, 25.6 tokens/sec)
  • History of Present Illness (HPI) - Symptom synthesis and clinical narrative generation (7.3s, 29.2 tokens/sec)
  • ICD-10 Coding - Diagnostic code suggestions with symptom mapping (10.5s, 28.5 tokens/sec)
  • Triage Assessment - Severity classification with clinical reasoning (12.2s, 24.5 tokens/sec)
  • Discharge Summary - Post-procedure documentation with medication details (20.5s, 24.4 tokens/sec)
  • Review of Systems - Systematic symptom review across body systems (16.9s, 23.7 tokens/sec)
  • Vital Signs Documentation - Structured vital sign recording with clinical context (11.5s, 25.1 tokens/sec)
  • Medication List Formatting - Medication reconciliation with dosing and indications (12.3s, 24.3 tokens/sec)

Performance metrics show consistent 23-29 tokens/second generation across medical documentation tasks, with inference times suitable for clinical workflow integration on edge devices used in EdgeMED deployments.

TorinEd

A comprehensive family of specialized education models designed to democratize learning and make high-quality AI-assisted education accessible across all age groups and learning environments. The TorinEd family includes TorinJr, TorinEd, TorinSr, and TorinMED, each optimized for specific educational contexts and learner needs.

TorinJr is our compact AI assistant designed for young learners, bringing sophisticated yet age-appropriate conversational AI capabilities to early childhood and elementary education environments. Optimized for edge deployment on tablets and smartphones, TorinJr runs directly on-device without constant internet connectivity, ensuring safe, private, and reliable educational assistance.

  • Age-Appropriate Content - Carefully calibrated responses suitable for K-5 learners
  • Interactive Learning - Engaging, conversational approach to foundational skills
  • On-Device Privacy - Operates offline to protect young learners' data
  • Adaptive Difficulty - Adjusts complexity based on learner progress

TorinEd serves as the core educational model for middle and high school students, providing comprehensive support across STEM subjects, humanities, and critical thinking skills. Built with advanced reasoning capabilities and subject matter expertise, TorinEd helps students develop deeper understanding while fostering independent learning.

  • Multi-Subject Mastery - Deep knowledge across mathematics, sciences, literature, and history
  • Socratic Methodology - Guides students through questions rather than simply providing answers
  • Homework Assistance - Helps students understand concepts without completing work for them
  • Test Preparation - Comprehensive support for standardized tests and assessments

TorinSr targets college students, graduate learners, and adult education, offering advanced academic support for complex subjects, research assistance, and professional development. With enhanced reasoning capabilities and access to specialized knowledge domains, TorinSr serves as an intellectual companion for higher-level learning.

  • Advanced Subject Matter - Expertise in specialized fields including advanced mathematics, computer science, engineering, and philosophy
  • Research Support - Assists with literature reviews, methodology design, and analytical frameworks
  • Critical Analysis - Helps develop sophisticated arguments and evaluate complex ideas
  • Professional Development - Supports career-oriented learning and skill acquisition

TorinMED is a specialized model focused on medical education, healthcare professional training, and clinical knowledge support. Designed for medical students, nursing students, and healthcare practitioners, TorinMED provides accurate, evidence-based information while maintaining the highest standards for medical accuracy and safety.

  • Clinical Knowledge Base - Comprehensive medical information across specialties and disciplines
  • Case-Based Learning - Interactive clinical scenarios for diagnostic reasoning practice
  • Evidence-Based Information - Grounded in current medical literature and clinical guidelines
  • Safety-First Design - Emphasizes patient safety and appropriate scope of practice
  • Continuing Education - Supports ongoing professional development for healthcare practitioners

All TorinEd family models share a common technical foundation while being optimized for their specific educational contexts:

  • Efficient Architecture - Model compression and knowledge distillation for resource-efficient deployment
  • Edge-Ready Design - Capable of on-device operation for privacy and accessibility
  • Adaptive Learning - Personalization capabilities that respect learner privacy
  • Safe & Responsible - Built-in safety mechanisms and age-appropriate guardrails

The TorinEd family represents our commitment to democratizing education through AI, making sophisticated learning assistance accessible across all ages, environments, and educational contexts. From early childhood through professional development, these models work to expand educational opportunity and support lifelong learning.