Insights from the Frontlines of AI Security

Exploring vulnerabilities, uncovering risks, and developing solutions to secure the AI systems of tomorrow.

Explore Research

🔒

Cross-Model Adversarial Attacks

Understanding how adversarial inputs transfer between AI models and designing methods to reduce susceptibility.

🐛

Backdoor Detection

Investigating hidden triggers that compromise model integrity under specific conditions.

💾

Data Leakage Prevention

Examining how sensitive enterprise data can be exposed through LLM interactions.

Prompt Injection Attacks

Studying crafted prompts that exploit model reasoning and building evaluation frameworks for injection resistance.

KalkiNetra's research focuses on understanding how AI systems fail — and how to make them resilient. We study adversarial behaviors, data manipulation, and model vulnerabilities, transforming findings into open research and real-world security tools.

Every paper, prototype, and experiment represents a step toward a future where AI is not just powerful — but protected.

AI Security Research

Exploring the vulnerabilities that will shape the next generation of secure, intelligent systems.

01

Cross-Model Adversarial Attacks

Understanding how adversarial inputs transfer between AI models — and designing methods to reduce susceptibility to such cross-system threats.

Read Research

02

Backdoor / Trojan Detection in ML Models

Investigating hidden triggers that compromise model integrity under specific conditions. Developing detection pipelines to identify and neutralize malicious model behavior.

Read Research

03

Data Leakage via LLM APIs

Examining how sensitive enterprise data can be exposed through LLM interactions. Prototyping automated detection and mitigation for API-level information leaks.

Read Research

04

Prompt Injection & Manipulation Attacks

Studying crafted prompt attacks that exploit model reasoning. Building evaluation frameworks for injection resistance and prompt sanitization.

Read Research

Cybersecurity Research

Securing the digital ecosystem beyond AI — from software dependencies to malware resilience.

01

Advanced Malware Behavior Analysis

Analyzing emerging stealth malware techniques and memory-resident threats. Developing detection insights for modern enterprise defense.

Read Research

02

Software Supply Chain Vulnerabilities

Mapping the risks hidden in open-source dependencies and third-party integrations. Designing continuous audit and patch frameworks for secure development pipelines.

Read Research

Join Our Research Journey

Collaborate with us on cutting-edge AI security research or get early access to our latest tools and prototypes.