Exploring vulnerabilities, uncovering risks, and developing solutions to secure the AI systems of tomorrow.
Explore ResearchUnderstanding how adversarial inputs transfer between AI models and designing methods to reduce susceptibility.
Investigating hidden triggers that compromise model integrity under specific conditions.
Examining how sensitive enterprise data can be exposed through LLM interactions.
Studying crafted prompt attacks that exploit model reasoning and building evaluation frameworks.
KalkiNetra's research focuses on understanding how AI systems fail — and how to make them resilient. We study adversarial behaviors, data manipulation, and model vulnerabilities, transforming findings into open research and real-world security tools.
Every paper, prototype, and experiment represents a step toward a future where AI is not just powerful — but protected.
Exploring the vulnerabilities that shape the next generation of secure intelligence systems.
Understanding how adversarial inputs transfer between AI models — and designing methods to reduce susceptibility to such cross-system threats.
Read ResearchInvestigating hidden triggers that compromise model integrity under specific conditions. Developing detection pipelines to identify and neutralize malicious model behaviour.
Read ResearchExamining how sensitive enterprise data can be exposed through LLM interactions. Prototyping automated detection and mitigation for API-level information leaks.
Read ResearchStudying crafted prompt attacks that exploit model reasoning. Building evaluation frameworks for injection resistance and prompt sanitization.
Read ResearchSecuring the digital ecosystem beyond AI — from software dependencies to malware resilience.
Analyzing emerging stealth malware techniques and memory resident threats. Developing detection insights for modern enterprise defense.
Read ResearchMapping the risks hidden in open-source dependencies and third-party integrations. Designing continuous audit and patch frameworks for secure development pipelines.
Read ResearchCollaborate with us on cutting-edge AI security research or get early access to our latest tools and prototypes.