AI RedCell

Ethical AI Jailbreaking & Research for Good

Exploring AI vulnerabilities to protect the future of AI systems

Shehan Nilukshan - AI Researcher

About Me

Hi, I'm Shehan Nilukshan, an ethical AI researcher and jailbreaking specialist. I created AI RedCell to explore AI jailbreaking techniques ethically, so vulnerabilities can be discovered and fixed before malicious actors exploit them.

My goal is to educate and protect the community while sharing knowledge responsibly. Through rigorous testing and transparent research, we work to make AI systems safer for everyone.

500+
Tests Conducted
50+
Vulnerabilities Found
100%
Ethical Approach

What is AI Jailbreaking?

AI jailbreaking is the practice of systematically probing AI models to uncover weaknesses, vulnerabilities, or unsafe behaviors. We do this ethically so AI systems can be strengthened before they are exposed to the public.

Ethical Testing

Systematic exploration of AI boundaries within responsible frameworks to identify potential risks.

AI Protection

Strengthening defenses by discovering vulnerabilities before malicious actors can exploit them.

Knowledge Sharing

Transparent documentation and education to build a safer AI ecosystem for the entire community.

Our Research Workflow

01

Identify Target

Select AI systems and specific behaviors to test

02

Design Tests

Create ethical prompts and scenarios

03

Execute & Document

Run tests and record detailed findings

04

Report & Educate

Share discoveries responsibly with stakeholders
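The four steps above can be sketched as a minimal test-logging harness. This is an illustrative sketch only — the names (`TestRecord`, `run_test`, `build_report`, `stub_model`) are hypothetical and do not describe actual AI RedCell tooling:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TestRecord:
    """One ethical test, mirroring workflow steps 01-04."""
    target: str        # step 01: the AI system / behavior under test
    scenario: str      # step 02: the designed test prompt or scenario
    outcome: str = ""  # step 03: the observed behavior, filled in at run time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def run_test(record: TestRecord, model) -> TestRecord:
    """Step 03: execute the scenario against the model and record the outcome."""
    record.outcome = model(record.scenario)
    return record


def build_report(records: list[TestRecord]) -> str:
    """Step 04: summarize findings for responsible disclosure to stakeholders."""
    lines = [
        f"[{r.timestamp}] {r.target}: {r.scenario!r} -> {r.outcome!r}"
        for r in records
    ]
    return "\n".join(lines)


# A stub standing in for a real model API; a real harness would call the
# system under test here.
def stub_model(prompt: str) -> str:
    return "refused"


records = [run_test(TestRecord("demo-model", "boundary probe #1"), stub_model)]
print(build_report(records))
```

Keeping every test as a timestamped record makes the final disclosure report reproducible: each finding traces back to the exact target, scenario, and observed outcome.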

Research Resources

Access our curated collection of tools, guides, and research materials for ethical AI exploration.

Get In Touch

Have questions, collaboration ideas, or want to contribute? Reach out to us.

Let's Collaborate

Join us in making AI systems safer and more robust through ethical research and responsible disclosure.