Discover Forge

01 | What is Forge?

Behind each Clearpill stands a powerful multilayered Large Language Model Operations (LLMOps) infrastructure. LLMOps encompasses the practices, techniques and tools used for the operational management of large language models in production environments.

Since most LLMs are not built from scratch but are instead fine-tuned from existing foundation models for specific purposes, these practices are essential.

These processes include deployment, monitoring, and maintenance. Their objective is to keep the model's responses efficient and accurate through a combination of high-quality prompt engineering and hallucination checks backed by the RAG module.
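
To illustrate the general idea behind a RAG-based hallucination check (a minimal sketch only: Forge's actual RAG module is not public, so the keyword-overlap grounding test and all function names below are assumptions, not its API):

```python
# Illustrative sketch only: Forge's RAG module is not public, so the names and
# the naive overlap-based grounding check below are assumptions, not its API.

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for a vector search)."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def is_grounded(answer: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Flag an answer as a potential hallucination if too few of its terms appear in the sources."""
    answer_terms = set(answer.lower().split())
    source_terms = set(" ".join(sources).lower().split())
    if not answer_terms:
        return False
    return len(answer_terms & source_terms) / len(answer_terms) >= threshold

documents = [
    "Forge is Seraphnet's LLMOps platform for building Clearpills.",
    "Clearpills can be customized through fine-tuning or RAG.",
]
sources = retrieve("What is Forge?", documents)
print(is_grounded("Forge is Seraphnet's LLMOps platform.", sources))  # True
```

In practice, the retrieval step would be backed by a vector index and the grounding check by a dedicated evaluation model rather than simple term overlap.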

02 | Why is this important?

Usually, LLMOps processes are handled by entire teams of data scientists, DevOps engineers, and IT professionals who collaborate on data exploration, prompt engineering, and pipeline management.

We provide an automated alternative that integrates all of these processes: reproducible microservices containing Clearpill, LLM fine-tuning, Retrieval-Augmented Generation (RAG), and APIs.

We also offer a self-hosted, open-source version of our LLMOps platform, built on open-source frameworks that are equivalent to their commercial counterparts.

03 | How do you do it?

To handle LLMOps smoothly, we use the BentoML framework, which integrates with ZenML, an extensible, open-source MLOps framework for creating portable, production-ready machine learning pipelines.
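
As a point of reference for what a ZenML pipeline looks like (a minimal sketch only: the step names below are placeholders and do not reflect Forge's internal pipeline):

```python
# Minimal ZenML sketch (not Forge's actual pipeline): step and pipeline names
# are illustrative. Requires `pip install zenml` and an initialized repository.
from zenml import pipeline, step

@step
def load_documents() -> list[str]:
    """Stand-in data-loading step."""
    return ["doc one", "doc two"]

@step
def build_index(documents: list[str]) -> int:
    """Stand-in indexing step; returns the number of indexed documents."""
    return len(documents)

@pipeline
def clearpill_pipeline():
    docs = load_documents()
    build_index(docs)

if __name__ == "__main__":
    clearpill_pipeline()  # Runs the pipeline on the active ZenML stack.
```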

Together, these frameworks handle computation tasks such as data preprocessing, model training, inference, and deployment, ensuring that the Seraphnet ecosystem can scale and adapt to increasing demand and evolving requirements.
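
For a sense of how a model endpoint is typically served with BentoML (again, a minimal sketch using BentoML 1.x's Service and io APIs; the service name and echo-style handler are placeholders, not Forge's Clearpill service):

```python
# Illustrative BentoML 1.x sketch; the service name and the echo-style handler
# are placeholders, not Forge's actual Clearpill service.
import bentoml
from bentoml.io import Text

svc = bentoml.Service("clearpill_demo")

@svc.api(input=Text(), output=Text())
def answer(prompt: str) -> str:
    # A real service would call the underlying LLM here.
    return f"Echo: {prompt}"
```

Such a service can be started locally with "bentoml serve service:svc", assuming the code lives in a file named service.py.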

In V1, Forge is compatible with OpenAI's GPT-4 transformer-based model; in the future, we plan to expand compatibility to more LLMs.
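
Forge's GPT-4 integration is internal, but for reference, a direct GPT-4 call through the OpenAI Python SDK (v1.x) looks like this:

```python
# Reference only: a direct GPT-4 call via the OpenAI Python SDK (v1.x);
# this is not Forge's integration code. Requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize what LLMOps means in one sentence."}],
)
print(response.choices[0].message.content)
```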

04 | What can I do with Forge?

Within the LLMOps platform, you can create Clearpills from a prototype we provide, customize them for specific tasks through fine-tuning or RAG, and swap out the underlying LLMs or data sources.
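
Forge's configuration format has not been published yet; purely as a hypothetical illustration of the choices involved (base LLM, customization method, data sources), such a setup might be captured like this:

```python
# Hypothetical illustration only: Forge's real configuration schema is not
# public, so every key and value below is an assumption.
clearpill_config = {
    "base_model": "gpt-4",                  # the LLM the Clearpill is built on
    "customization": "rag",                 # "rag" or "fine_tuning"
    "data_sources": ["docs.seraphnet.ai"],  # corpora the RAG module retrieves from
    "prompt_template": "Answer using only the retrieved sources: {question}",
}
```
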
Forge is set to go live later this year. Find out when by joining our Early Access program: link here

05 | Where can I find more information?

Please read docs.seraphnet.ai for more information.
Also, make sure to follow us on X (Twitter) and join us on Discord! Links below.