Bachelor's degree or equivalent practical experience.
5 years of experience in data analysis and working with datasets.
Experience working on counter abuse strategies for online platforms.
Experience working with Large Language Models.
Preferred qualifications:
Experience working on product policy analysis and identifying policy risks.
Experience with modeling, experimentation, and causal inference.
Experience with adversarial testing of online consumer products.
Experience in Data Analysis and SQL.
Exceptional communication and presentation skills to deliver findings of analysis.
Excellent problem-solving skills with attention to detail in an ever-changing environment.
Responsibilities
Write scripts that will help multiply the impact of the team, including through systematized/automated prompt creation and scraping of content.
Monitor and research emerging abuse vectors for generative AI from open web and specialized sources, and work individually and collaboratively to promptly uncover new risk vectors in Google’s main Generative AI products.
Apply insights for creative prompting of Google generative AI tools such as Gemini, Search Generative Experience, and Vertex API. Support the creation of persona-based adversarial playbooks to guide the team’s red teaming.
Develop repeatable processes that can yield valuable insights regardless of topic and attack vector. Annotate and cluster harm types detected in structured prompting exercises.