Trident Regression: Local AI Prompt Testing Tool (Enterprise)
£79+
Peter Rankin
🛑 Did you just break your AI Agent?
You changed one word in your System Prompt to improve the tone. But did you accidentally break the JSON output format? Did you degrade its reasoning?
You won't know until a user complains. Unless you have a Time Machine.
Introducing Trident Regression. The local-first "pytest" for AI Prompts. Catch regressions before you deploy.
⚡ Key Features:
- Visual Diffs: See side-by-side comparisons of "Before" vs "After" responses.
- Semantic Scoring: Uses local embeddings to detect drift even when the wording only changes slightly (e.g. "Hi" vs "Hello" scores close to 1.0, while "Hi" vs "Error" scores near 0) — see the sketch after this list.
- Golden Sets: Run 50+ test cases automatically every time you tweak a prompt (a minimal runner is sketched further below).
- Provider Agnostic: Works with OpenAI, Anthropic, Gemini, and Local Ollama models.
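To give a feel for the semantic scoring idea, here is a minimal sketch using a local embedding model. The model name and the `drift_score` helper are illustrative assumptions, not Trident Regression's actual internals.

```python
# Hypothetical sketch of semantic drift scoring with a local embedding model.
# Model name and helper are illustrative, NOT the tool's shipped code.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs offline once cached

def drift_score(before: str, after: str) -> float:
    """Cosine similarity between the old and new responses (1.0 = same meaning)."""
    embeddings = model.encode([before, after], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

print(drift_score("Hi", "Hello"))  # high: small wording change, same meaning
print(drift_score("Hi", "Error"))  # low: the response has genuinely drifted
```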
📦 What's Inside (v1.0 Enterprise ZIP):
- Full Python Source Code (Streamlit Dashboard + CLI).
- The Semantic Diff Engine.
- Commercial License.
- 100% Offline Capabilities (No data leaks).
Requirements: Python 3.11+
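To illustrate the golden-set workflow described above, here is a hedged sketch of a runner that replays saved cases against the current system prompt via LiteLLM and flags semantic regressions. The `golden_set.json` layout, the 0.85 threshold, and the helper names are assumptions for illustration only, not the shipped CLI.

```python
# Hypothetical golden-set runner: replays saved cases against the current
# System Prompt and flags semantic regressions. File layout, threshold, and
# helpers are assumptions, not the product's actual CLI.
import json

import litellm
from sentence_transformers import SentenceTransformer, util

SYSTEM_PROMPT = "You are a helpful assistant that always replies in JSON."
THRESHOLD = 0.85  # illustrative pass/fail cut-off on the drift score

_model = SentenceTransformer("all-MiniLM-L6-v2")

def drift_score(a: str, b: str) -> float:
    # Same scorer as the embedding sketch above.
    emb = _model.encode([a, b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def run_prompt(user_input: str) -> str:
    response = litellm.completion(
        model="ollama/llama3",  # provider-agnostic: swap for an OpenAI, Anthropic, or Gemini model string
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

with open("golden_set.json") as f:
    cases = json.load(f)  # e.g. [{"input": "...", "expected": "..."}, ...]

failures = 0
for case in cases:
    current = run_prompt(case["input"])
    score = drift_score(case["expected"], current)
    if score < THRESHOLD:
        failures += 1
        print(f"REGRESSION ({score:.2f}): {case['input'][:40]}...")

print(f"{len(cases) - failures}/{len(cases)} cases passed")
```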
The "pytest" for AI Prompts. Catch regressions, visualize diffs, and stop breaking your agents before you deploy. 100% Local & Private.
Version: v1.0 Enterprise
License: Commercial / Lifetime
Stack: Streamlit + LiteLLM
Size: 48.3 KB