Core Technology

AI Agent Orchestration Platform

Not just an LLM wrapper — a production-grade agent lifecycle management platform that enforces accuracy, consistency, and full observability across every AI agent, regardless of domain.

Accuracy & Consistency
AI output always matches the predefined JSON Schema — malformed or irregular output is structurally blocked before it reaches any client
Version Control
All prompts managed via CI/CD — rollback available at any time
Explainable AI (XAI)
Every pipeline stage logged and visualized in real time via Live Broadcast Mode
14+
Agents in Production
4
Domains Deployed
100%
Schema-Validated
5
Pipeline Stages
Overview

Platform Overview

The AI Agent Orchestration Platform is the common backend engine powering all AI features across MSoftech's products. Rather than building AI logic per-project, every agent — regardless of domain — runs through the same standardized pipeline.

The platform's core challenge is the gap between what LLMs produce (free-form text) and what applications need (structured JSON). The solution is a 3-artifact architecture — Registry, Template, Schema — where each artifact is independently versioned and CI/CD managed, eliminating the need for code deployments when prompts change.

With Live Broadcast Mode, every pipeline stage is streamed in real time — showing exactly which prompt template was used, how data was injected, what the LLM received, and whether the output passed schema validation. This makes AI behavior fully traceable and auditable.

AI Engine
Multi-Provider — Gemini / GPT-4
Deployed In
4 Projects · 8 Agents
Core Guarantee
100% Schema-Validated Output
Pipeline

5-Stage Core Pipeline

The standard flow that every agent passes through identically

① Request Intake
Client sends data + request info
② Prompt Generation
Registry lookup + data injection
③ LLM Call
Send assembled prompt to AI provider
④ Schema Validation
100% schema match check
⑤ Result Delivery
Validated result to client
① Request Intake
The client application sends the request data + agent parameters to the Spring Boot API server via REST API over HTTPS. This includes the data to be analyzed and the target agent identifier.
② Prompt Generation
The engine queries the Prompt Registry to retrieve the latest template version for the requested agent. It then loads the Template (.md), collects the incoming request data, and performs Placeholder substitution — injecting live request data into the CONTEXT section of the template — to produce the final Assembled Prompt.
③ LLM Call
The fully assembled prompt is sent to the AI provider via multi-provider routing. The platform supports runtime switching between Gemini and GPT-4 without changing agent logic.
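A minimal sketch of what such provider routing could look like. The `LlmProvider` interface, class names, and stub responses below are illustrative assumptions, not the platform's actual API — the point is only that agent logic depends on one interface while the concrete provider is chosen at runtime:

```java
import java.util.Map;

// Hypothetical provider abstraction: agent logic depends only on this interface.
interface LlmProvider {
    String complete(String assembledPrompt);
}

// Stubs standing in for real Gemini / GPT-4 clients.
class GeminiProvider implements LlmProvider {
    public String complete(String assembledPrompt) {
        return "{\"provider\":\"gemini\"}"; // a real client call would go here
    }
}

class Gpt4Provider implements LlmProvider {
    public String complete(String assembledPrompt) {
        return "{\"provider\":\"gpt-4\"}";
    }
}

// Runtime routing: the provider is selected by name, so switching between
// Gemini and GPT-4 requires no change to any agent's own logic.
class ProviderRouter {
    private final Map<String, LlmProvider> providers = Map.of(
            "gemini", new GeminiProvider(),
            "gpt-4", new Gpt4Provider());

    String call(String providerName, String assembledPrompt) {
        LlmProvider p = providers.get(providerName);
        if (p == null) throw new IllegalArgumentException("Unknown provider: " + providerName);
        return p.complete(assembledPrompt);
    }
}
```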
④ Schema Validation
The AI response is automatically validated against the predefined Output Schema (JSON Schema draft-07). Validation enforces three things: required fields must be present, every field must have the declared type (string, array, object), and shared structures stay consistent through components reused via $ref / $defs. If validation fails, the output is flagged and logged — it never reaches the client in a broken state.
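The first two checks can be sketched in plain Java. This is a deliberately minimal illustration using Maps in place of parsed JSON; the platform itself would run a full JSON Schema draft-07 validator against the schema file, which this sketch does not attempt to reproduce:

```java
import java.util.List;
import java.util.Map;

// Minimal illustration of required-field enforcement and type validation.
// requiredFields maps a field name to its expected Java type
// (String for "string", List for "array", Map for "object", ...).
class MiniValidator {
    static List<String> validate(Map<String, Object> output,
                                 Map<String, Class<?>> requiredFields) {
        return requiredFields.entrySet().stream()
                .map(e -> {
                    Object value = output.get(e.getKey());
                    if (value == null) return "missing required field: " + e.getKey();
                    if (!e.getValue().isInstance(value))
                        return "wrong type for field: " + e.getKey();
                    return null; // field present and correctly typed
                })
                .filter(msg -> msg != null)
                .toList(); // empty list means the output passed
    }
}
```

An empty result means the output passes; any message means it is flagged and logged instead of delivered.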
⑤ Result Delivery
The validated, schema-compliant final data is mapped to a DTO and sent back to the client application. Results are persisted to the domain-specific database before delivery when applicable.
Architecture

3-Layer Artifact Structure

registry.yaml
Registry
The agent directory. Maps each agent to its Template and Schema files, and manages versioning. A single entry per agent — enabling CI/CD-based prompt management with rollback at any time.
id name version template schema
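A registry entry built from the fields above might look like this — the agent id, name, and version shown are illustrative, not an actual production entry:

```yaml
# registry.yaml — one entry per agent (illustrative values)
agents:
  - id: diabetes
    name: Diabetes Analysis Agent
    version: 1.4.0
    template: diabetes.md
    schema: diabetes_schema.json
```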
agent_name.md
Template
The agent's behavioral specification. Written in a strict 5-section structure: MISSION → CONTEXT → RULES → Analysis Guidelines → DATA & FORMAT. The CONTEXT section contains Placeholders that receive live data at runtime.
MISSION CONTEXT RULES GUIDELINES DATA/FORMAT
agent_name_schema.json
Output Schema
The contract for AI output structure. Uses JSON Schema draft-07 with $ref / $defs for component-based type reuse. Enforces required fields, type validation, and structural consistency — guaranteeing the AI never returns a broken format.
required type $ref $defs draft-07
[Diagram] Registry (registry.yaml) references templateFile and outputSchemaFile → Template (agent_name.md) + Output Schema (agent_name_schema.json) → Engine assembles the prompt.
Prompt Engine

Template 5-Section Structure

MISSION
Purpose Definition
Defines the agent's purpose and the scope of analysis.
CONTEXT
Data Reference
Contains Placeholders replaced with live data at runtime.
RULES
Analysis Rules
Hard constraints — what the agent must always or never do.
GUIDELINES
Judgment Criteria
Domain-specific judgment criteria and thresholds.
DATA/FORMAT
Output Structure
Specifies the exact JSON output structure.
Why This Structure?

Every agent uses this exact 5-section format — no exceptions. This uniformity is what guarantees consistent output quality across all domains. A diabetes analysis agent and a geological analysis agent have completely different content, but structurally identical templates.
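A template skeleton under this convention might look like the following. The section contents, the clinical example, and the `{{…}}` placeholder syntax are all illustrative assumptions — only the five-section order comes from the platform's specification:

```markdown
# MISSION
Analyze the patient's blood glucose records and produce a risk summary.

# CONTEXT
Patient records (injected at runtime):
{{glucose_records}}

# RULES
- Never invent values that are not present in CONTEXT.
- Always report glucose in mg/dL.

# Analysis Guidelines
- Flag sustained fasting glucose above the configured threshold.

# DATA & FORMAT
Respond with JSON only, matching the agent's Output Schema exactly.
```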

Placeholder Substitution

The engine reads the template, finds all Placeholder markers in the CONTEXT section, and replaces them with the actual request data. The result is the Assembled Prompt — a complete, data-rich instruction ready for the LLM.
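That substitution step can be sketched as below. The `{{name}}` marker syntax is an assumption for illustration — the platform's actual placeholder format is not shown in this document:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Replaces {{placeholder}} markers in a template with values from the
// incoming request data, producing the Assembled Prompt.
class PlaceholderEngine {
    private static final Pattern MARKER = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    static String assemble(String template, Map<String, String> requestData) {
        Matcher m = MARKER.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = requestData.get(m.group(1));
            if (value == null)
                // Fail loudly rather than send a half-filled prompt to the LLM.
                throw new IllegalStateException("No data for placeholder: " + m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Failing on a missing placeholder (rather than leaving the marker in place) keeps an incomplete prompt from ever reaching the provider.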

Output Validation

Schema Validation

The gate that structurally guarantees 100% accuracy and consistency of AI output

[Diagram] AI Response (free-form JSON) → Schema Validation → Pass ✓: deliver to client · Fail: retry / log. Broken output never reaches the client.
LLMs produce free-form text by nature — but applications need exact JSON structures. Schema Validation is the gate that automatically closes this gap 100% of the time.
Required Field Enforcement
Every field marked required in the schema must be present in the AI output. Missing fields cause an immediate validation failure — the output is never silently incomplete.
Type Validation
Each field's data type is strictly enforced — string, array, object, number, boolean. A field that should be an array cannot arrive as a string without triggering validation failure.
$ref / $defs Component Reuse
Repeated type structures are defined once in $defs and referenced via $ref. For example, a TimelineItem type shared across blood glucose records, lab results, and medication history.
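A shared component like that TimelineItem could be declared once and referenced from every list that uses it. The field names below are illustrative, not the platform's actual schema:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["glucoseRecords", "labResults"],
  "properties": {
    "glucoseRecords": { "type": "array", "items": { "$ref": "#/$defs/TimelineItem" } },
    "labResults":     { "type": "array", "items": { "$ref": "#/$defs/TimelineItem" } }
  },
  "$defs": {
    "TimelineItem": {
      "type": "object",
      "required": ["date", "value"],
      "properties": {
        "date":  { "type": "string" },
        "value": { "type": "string" },
        "note":  { "type": "string" }
      }
    }
  }
}
```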
Observability · XAI

Live Broadcast Mode

Real-time visualization of every pipeline stage, implementing Explainable AI

When enabled, every stage of the pipeline streams in real time. Each step can be expanded to reveal the actual artifact used — the Registry entry, the injected Input Data, the full Template, the final Assembled Prompt, and the Output Schema used for validation. This makes AI behavior completely transparent and auditable.

Live Broadcast Mode — AI Agent Console
Post-Deploy Prompt Verification
After deploying a new template, confirm the actual data-merged result in real time before it reaches production users.
AI Response Quality Audit
Compare the Assembled Prompt side-by-side with the AI response to evaluate output quality and prompt effectiveness.
Schema Mismatch Debugging
When validation fails, instantly see the exact diff between the Output Schema and the actual AI response to pinpoint the issue.
New Agent Development
Preview the data-injected prompt result during template authoring — validate the injection logic before the agent goes live.
Deployment

Agent Deployment Map

Agents currently in production: 4 projects · 8 agents

[Deployment map] Projects: Clinical Nursing EMR (Medical · Healthcare) · My Health Coach (Health Dashboard · Mobile) · AI Geological Analysis (Geo · Earth Science) · AI AquaLab (Water Quality Analysis). Agents: NoteEvaluation · VsEvaluation · Medication · Diabetes · Hypertension · GeoAnalysis · HazardScan · MineralAnalysis. Same pipeline · same structure · any domain: Medical · Geo · Water.

Regardless of domain, every agent uses the identical Registry → Template → Schema onboarding structure. Adding a new domain requires only creating three files; the pipeline engine itself never changes. This is the architectural principle that makes the platform reusable across the Medical, Geo, and Water domains.
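Concretely, onboarding might amount to one new Registry entry plus two new artifact files. The directory layout and file names below are illustrative assumptions:

```
prompts/
├── registry.yaml               # add one entry for the new agent
├── water_quality.md            # new Template (5-section structure)
└── water_quality_schema.json   # new Output Schema (draft-07)
```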

Results

Expected Impact

AI Output Reliability
Schema Validation ensures AI output always maintains a predefined structure. Hallucination-induced irregular outputs are structurally prevented, guaranteeing accuracy and consistency across every agent invocation.
Fast Domain Onboarding
The Registry → Template → Schema structure is fully reusable. New projects launch on the same battle-tested pipeline — no matter how different the domain, the infrastructure is already in place.
Complete Audit Trail
Every pipeline stage is logged. Through XAI (Live Broadcast Mode), it is always possible to verify exactly how the AI made its decisions — providing the auditability foundation that enterprise AI systems require.
Technology

Tech Stack

Runtime
Java 17 Spring Boot 3.x Flutter Dart
AI
Google Vertex AI Gemini GPT-4
Architecture
YAML Registry Markdown Templates JSON Schema draft-07
← Back to Core Technology