We're building a team that explores vulnerabilities in AI systems to make them safer, stronger, and more transparent.
At KalkiNetra, we believe security research thrives on curiosity and collaboration.
We welcome independent thinkers who question assumptions, experiment fearlessly, and translate complex findings into real-world impact.
Our work blends AI security research, engineering, and creative problem-solving — because protecting the future of intelligence demands more than one perspective.
Every experiment contributes to the safety of future AI systems.
Work on prompt injection, adversarial ML, and LLM integrity.
Partner with developers, researchers, and enterprises.
Collaborate globally while maintaining creative autonomy.
We actively collaborate with organizations, institutions, and independent researchers.
Whether it's co-authoring research, testing prototypes, or deploying defense tools — partnership drives our mission.
"We move fast, question deeply, and learn relentlessly. The only hierarchy here is between ideas that work — and those we haven't tested yet."
Whether you're looking for a career, a research collaboration, or a partnership opportunity — let's connect.