ZERLES – The AI Gatekeeper
AI thinks fast. Zerles thinks ahead!
AI is changing the way we work.
Zerles makes sure we stay in control.
We use AI. But rarely do we question how.
We rely on generative AI every day.
For notes, ideas, emails, contracts, applications, and communication.
Along the way, we often reveal more than we realize:
personal data, internal documents, legally sensitive information.
No tool warns us. No platform asks.
There’s no layer of control between human and machine.
Data leakage through prompts – an emerging risk for companies.
AI systems like ChatGPT or Copilot are already part of everyday workflows.
But prompts are never reviewed, never classified, never controlled.
And yet, prompts often contain more than expected:
GDPR-relevant data, intellectual property, or critical strategic information.
Companies are liable — even if AI was “just a tool.”
What if there were an intermediary in between?
A layer between humans and AI.
One that reviews prompts before they are sent.
That detects when content is too sensitive.
That responds when data needs protection.
Not as a blocker, but as a thinking companion.
Not as control, but as a layer of responsibility.
That’s exactly what ZERLES is.
Zerles is not a tool.
Zerles is a thought:
That control over AI shouldn’t happen afterward – but in between.
A digital layer of sorts:
Light, local, invisible — yet effective.
Not restrictive. But thoughtful.
For companies. For parents.
For anyone who takes responsibility —
even when no one is watching.
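For the technically curious: below is a minimal sketch, in TypeScript, of what such a local review step could look like. The names and patterns are illustrative assumptions, not the actual ZERLES implementation.

```typescript
// Minimal sketch of a local prompt review step (illustrative only).

type Verdict = { allowed: boolean; findings: string[] };

// A few rough indicators of personal or sensitive data. A real gate would
// rely on configurable policies and NLP-based detection, not a handful of
// regular expressions.
const PATTERNS: Record<string, RegExp> = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/,
  iban: /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/,
  phone: /\+?\d[\d\s\-()]{8,}\d/,
};

// Review a prompt before it is sent: report what was found and let the
// user decide, instead of forwarding the text silently.
function reviewPrompt(prompt: string): Verdict {
  const findings = Object.entries(PATTERNS)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([label]) => label);
  return { allowed: findings.length === 0, findings };
}

// Example: an email address in the prompt triggers a warning.
console.log(reviewPrompt("Summarise the mail from max.mustermann@example.com"));
// -> { allowed: false, findings: ["email"] }
```

The point is where the check sits: it runs locally, before the prompt ever leaves the device. That is the layer in between.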
Zerles is looking for co-thinkers
Do you want to be part of the solution
to one of the most urgent challenges in the age of AI?
We’re looking for people who offer more than just code:
People with a feel for responsibility, privacy, compliance, and digital ethics.
Zerles is a young project with a clear vision.
And we need collaborators who want to think with us, build with us,
and help secure and test what truly matters.
Business Angels & Strategic Investors
- Interest in AI security, digital responsibility, or privacy tech
- Experience in product development, go-to-market, or IT security
- Willingness to support and shape a project from an early stage
- Focus on sparring, networks, and asking the right questions, not just capital
Developers
- Browser Extension Development (Chrome, Firefox, Manifest V3)
- JavaScript / TypeScript / Node.js
- Frontend frameworks like React / Vue / Electron
- Prompt Interception / WebRequest APIs (see the sketch after this list)
- Client-side security / policy enforcement
- Local LLM integration / NLP-based filtering logic
- Security-first UI/UX & privacy-by-design
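To give a concrete sense of this kind of work, here is a small, hedged sketch of a Manifest V3 content script that holds a prompt back for review before the page submits it. The selector and the looksSensitive() stand-in are assumptions for illustration; a real build would plug in the policy and filtering logic listed above.

```typescript
// Hypothetical Manifest V3 content-script sketch: catch a prompt in the page
// and run a local check on it before the page's own submit handler fires.

// Minimal stand-in for the review logic; illustrative only.
const looksSensitive = (text: string): boolean =>
  /[\w.+-]+@[\w-]+\.[\w.]+/.test(text); // e.g. an email address

// Listen in the capture phase so the check runs before the page reacts to Enter.
document.addEventListener(
  "keydown",
  (event: KeyboardEvent) => {
    if (event.key !== "Enter") return;

    const box = document.querySelector<HTMLTextAreaElement>("textarea");
    if (!box || !looksSensitive(box.value)) return;

    // Hold the prompt back and let the user decide how to proceed.
    event.preventDefault();
    event.stopImmediatePropagation();
    alert("This prompt may contain personal data. Please review it before sending.");
  },
  true
);
```

The essential design choice: everything happens client-side, so a prompt can be reviewed before it reaches any AI service.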
Students for Pilot Projects
- Backgrounds in computer science, cybersecurity, human-centered computing, or AI ethics
- Interest in data protection and digital youth/family safety
- Independent, creative, and capable of designing and testing MVPs


