As a Security Research Engineer at Maze, you'll be at the forefront of defining what constitutes real security risk in the age of AI-powered vulnerability detection. This is a unique opportunity to join our growing security research team at a well-funded startup building at the intersection of generative AI and cybersecurity, where your security expertise directly shapes how our AI models understand and prioritize cloud security threats.
You'll spend the majority of your time as the expert human-in-the-loop, analyzing cloud vulnerability findings from our AI systems, conducting deep research to validate and contextualize threats, and creating the authoritative labels that train our models to distinguish critical risks from noise. Working alongside other security researchers, you'll help scale our labeling operations while providing critical input into product development decisions based on real-world threat patterns you discover.
This role is perfect for a security researcher who wants to pioneer the future of AI-assisted threat detection, loves diving deep into cloud security vulnerabilities, and is eager to see their security insights amplified through cutting-edge technology while contributing to a growing team.
Your Contributions to Our Journey:
Scale Expert Data Labeling Operations: Lead high-volume vulnerability labeling and validation work as the authoritative voice on threat severity, reviewing and categorizing cloud security findings from our AI models to create the high-quality training data that powers our platform
Drive Product Development Through Research Insights: Translate patterns and insights from your labeling and research work into actionable product improvements, working directly with engineering and product teams to enhance detection capabilities and user experience
Collaborate with Security Research Team: Work closely with fellow Security Research Engineers to maintain consistency in labeling standards, share research findings, and collectively improve our vulnerability assessment methodologies
Deep Vulnerability Research: Conduct comprehensive research into cloud vulnerabilities affecting EC2 images, Docker containers, and cloud infrastructure, investigating true/false positives, analyzing business impact, and building proof-of-concepts to validate threat scenarios
Enhance AI Model Accuracy: Provide expert feedback through our labeling tools that improves our AI models' understanding of vulnerability context, helping them learn to prioritize threats like a seasoned security researcher
Technical Investigation and Analysis: Create detailed technical writeups about exploitation techniques, attack vectors, and remediation strategies for cloud vulnerabilities, turning complex security research into actionable intelligence
Leverage External Security Intelligence: Integrate insights from CVE databases, security advisory feeds, and threat intelligence sources to enrich vulnerability findings with broader context and emerging threat patterns
Contribute to Thought Leadership: Support our external presence through technical blog posts, security videos/podcasts, and occasional conference presentations, sharing insights from your research
What You Bring:
Security Research Expertise: 5+ years of hands-on security experience with a proven vulnerability research background, comfortable investigating complex security issues and building proof-of-concepts to validate findings
Cloud Security Mastery: Deep knowledge of AWS security, cloud infrastructure vulnerabilities, container security, and cloud-native attack vectors, with hands-on experience securing cloud environments at scale
Technical Investigation Skills: Strong coding and scripting abilities (Python, Go, or similar) for automating research tasks, building validation tools, and creating proof-of-concept exploits
Analytical Excellence: Proven ability to analyze complex security data, distinguish between critical threats and false positives, and communicate technical findings to both technical and business audiences
Product Mindset: Experience translating security insights into product requirements, with the ability to identify patterns across vulnerabilities that inform strategic product decisions
External Intelligence Integration: Experience working with vulnerability databases, security advisory feeds, and threat intelligence sources to contextualize and prioritize security findings
Collaborative Mindset: Strong communication skills and ability to work effectively with security research peers, AI/ML teams, and product stakeholders, translating security domain knowledge into actionable improvements
High-Volume Execution: Comfort with systematic labeling work while maintaining accuracy and attention to detail, balancing speed with quality in fast-paced environments
Nice to haves:
Experience with AI/ML security or working with AI-generated security findings
Background at security tooling companies or building security products
Expertise in specific vulnerability research methodologies and frameworks
Open source contributions to security tools or research projects
Previous content creation experience in security (blogs, talks, research papers)
Industry certifications (CISSP, OSCP, AWS Security, etc.)
Why Join Maze:
Ambitious Challenge: We're using generative AI (LLMs and agents) to solve some of the most pressing challenges in cloud security today. You'll be defining how AI understands and prioritizes vulnerabilities, working at the cutting edge of AI-powered threat detection.
Expert Team: We are a team of hands-on leaders with experience at big tech companies and scale-ups. Our members have served on the leadership teams behind multiple acquisitions and an IPO.
Growing Security Research Function: Join a collaborative security research team where you'll work alongside other experts, share insights, and collectively shape how our AI platform understands security threats at scale.
Impactful Work: Your security research and labeling work will directly improve how thousands of organizations understand and respond to cloud security threats, scaling expert security knowledge through AI to protect the entire ecosystem.
Product Influence: Your day-to-day research insights will directly influence product strategy and development, giving you a voice in building the next generation of AI-powered security tools.
Pioneer AI-Native Security: Help establish the gold standard for AI-assisted vulnerability research, defining how human security expertise enhances machine learning models in the cybersecurity domain.