ChatGPT users send 2.5 billion requests daily, and the service counts over 750 million weekly active users. These numbers reflect a significant productivity boost from large language models (LLMs) across industry sectors, spanning use cases from individual users to enterprises, organizations, and governments. Yet existing frameworks for LLM-based services carry significant concerns and substantial risks around data privacy, security, and intellectual property (IP) protection. These risks are rooted in the fact that the AI service provider (the cloud) observes a client's prompts, which often include confidential information such as legal documents, insurance claims, electronic health records, proprietary software code, and patents, sent to LLMs hosted on the provider's cloud to generate responses for daily and professional tasks. Without proper protection across the complicated LLM operation pipeline, leaks and data misuse have occurred and can occur at any stage. A recent lawsuit also spotlights that AI service providers typically store users' chat history indefinitely, with no certainty about whether that history can later be used against individuals, enterprises, or organizations in legal proceedings. Existing technologies and solutions, such as data deletion, non-recording policies, data encryption and decryption, and trusted execution environment (TEE) frameworks, were not designed to address these risks with theoretical guarantees, and they inevitably introduce costs that are unaffordable to users.
We have validated the technology needed to establish an end-to-end, neural-encrypted communication protocol between users and LLMs at scale, making it deployable to devices, computers, and cloud-based enterprises at a significantly lower cost than existing competitors.
Our project develops the World's First Private and Secure AI Router (NoirVPAI), offering an end-to-end neural-encrypted communication protocol between individuals and LLMs at scale, based on our proprietary IP. In this protocol, the user is the only party who can read their prompt and the decrypted response: the client communicates with the LLM on the cloud exclusively through neural-encrypted messages and responses. No LLM and no person can understand the neural-encrypted message or response, with a theoretical guarantee (the chance of cracking it is smaller than 1 in 100 billion). In fact, the LLM observes meaningless content yet can still generate a response, which only the client can decrypt. The beauty of neural crypto is that it is incredibly lightweight, allowing it to scale to a massive number of users without interfering with the LLM's operation. This property enables us to reduce costs by at least 30% compared with all existing techniques and by roughly 1,000 times compared with operating a local LLM. Neural-crypto-powered NoirVPAI will revolutionize the way humans communicate with AI models through our secure protocol. We have validated the technology with powerful code generation (coding at agentic level 3), secure chat with confidential document analysis, and API services using a Qwen3 32B model, yielding highly competitive performance on the market. The TITA-2026 seed grant will bring this revolution closer to reality, enabling NoirVPAI to soon be available to a mass market of individual users, enterprises, and organizations.
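To make the protocol shape concrete: the key property described above is that only the client holds the secret needed to read the prompt and the response, while the cloud sees only opaque content. The sketch below is a toy illustration of that client-side encode/decode interface, not the proprietary neural codec; the keyed keystream here is a deliberately simple stand-in (and, unlike the real system, it does not let the LLM generate on the encoded representation). All function names (`encode`, `decode`, `_keystream`) are hypothetical.

```python
# Toy illustration only: NOT the proprietary neural encryption, and NOT secure.
# It shows the protocol shape: the client alone holds `key`, so the cloud
# observes only an opaque blob, and only the client can recover plaintext.
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (stand-in for the neural codec)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encode(key: bytes, prompt: str) -> bytes:
    """Client-side: transform the prompt so the cloud never sees plaintext."""
    data = prompt.encode()
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def decode(key: bytes, blob: bytes) -> str:
    """Client-side: recover plaintext; only the key holder can do this."""
    ks = _keystream(key, len(blob))
    return bytes(a ^ b for a, b in zip(blob, ks)).decode()

key = b"client-secret"          # never leaves the client's device
enc = encode(key, "Analyze this confidential contract.")
assert decode(key, enc) == "Analyze this confidential contract."
```

In the actual NoirVPAI protocol, the analogous encoded representation is one the cloud-hosted LLM can still operate on to produce an encrypted response; this toy only demonstrates the client-exclusive decryption property.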