👨🏻‍💻 postech.work

Engineering Analyst, Content Adversarial Red Team

Google • 🌐 In Person

In Person Posted 1 day, 13 hours ago

Job Description

Minimum qualifications:

Bachelor's degree or equivalent practical experience.

7 years of experience in trust and safety, risk mitigation, cybersecurity, or related fields.

7 years of experience with one or more of the following languages: SQL, R, Python, or C++.

6 years of experience in adversarial testing, red teaming, jailbreaking for trust and safety, or a related field, with a focus on AI safety.

Experience with Google infra/tech stack and tooling, Application Programming Interface (API) and web service experience, Collab deployment, SQL and data handling, Machine Learning Operations (MLOps) or other AI infrastructure experience.

Preferred qualifications:

Master's or PhD in a relevant quantitative or engineering field.

Experience in an individual contributor role within a technology company, focused on product safety or risk management.

Experience working closely with both technical and non-technical teams on dynamic solutions or automations to improve user safety.

Understanding of AI systems/architecture including specific vulnerabilities, machine learning, and AI responsibility principles.

Ability to influence cross-functionally at various levels and with the ability to effectively articulate technical concepts to both technical and non-technical stakeholders.

Excellent written and verbal communication and presentation skills.

About the job

Fast-paced, dynamic, and proactive, YouTube’s Trust \& Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create, and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust \& Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

We are seeking a pioneering expert in Artificial Intelligence (AI) Red Teaming to shape and lead our content safety strategy.

In this pivotal role, you will come with considerable direct experience in adversarial testing and red teaming, particularly of Generative AI, so that you design and direct red teaming operations, creating innovative methodologies to uncover novel content abuse risks. You will act as a key advisor to executive leadership, leveraging your influence across Product, Engineering, and Policy teams to drive safety initiatives.

As a senior member of the team, you will mentor analysts, fostering a culture of continuous learning and sharing your expertise in adversarial techniques. You will also represent Google's AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.

At Google we work hard to earn our users’ trust every day. Trust \& Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.Responsibilities

Design, develop, and oversee the execution of innovative red teaming strategies to uncover content abuse risks. Create and refine new red teaming methodologies, strategies and tactics.

Influence across Product, Engineering, Research and Policy to drive the implementation of safety initiatives. Be a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.

Mentor and guide junior and senior analysts, fostering excellence and continuous learning within the team. Act as a subject matter expert, sharing knowledge of adversarial and red teaming techniques, and risk mitigation.

Represent Google's AI safety efforts in external forums and conferences. Contribute to the development of industry-wide best practices for responsible AI development.

Be comfortable to be exposed to graphic, controversial or upsetting content.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Get job updates in your inbox

Subscribe to our newsletter and stay updated with the best job opportunities.