Our client, a leading financial services company, is hiring an AI Security and Controls Consultant on a long-term contract basis.
Job ID 83509
Work Location:
New York, NY – Hybrid
Summary:
We’re seeking a consultant to join the Technology Audit team within Internal Audit to manage and execute risk-based assurance activities covering the Firm’s use of generative AI and artificial intelligence more broadly.
Responsibilities:
- Conduct Model Audits: Execute a wide range of assurance activities focused on the controls, governance, and risk management of generative AI models used within the organization.
- Model Security & Privacy Reviews: Review and assess privacy controls, data protection measures, and security protocols applied to AI models, including data handling, access management, and compliance with regulatory standards.
- GenAI Model Familiarity: Maintain a good understanding of current and emerging GenAI models.
- Adopt New Audit Tools: Stay current with and implement new audit tools and techniques relevant to AI/ML systems, including model interpretability, fairness, and robustness assessment tools.
- Risk Communication: Develop clear and concise messages regarding risks and business impact related to AI models, including model bias, drift, and security vulnerabilities.
- Data-Driven Analysis: Identify, collect, and analyze data relevant to model performance, privacy, and security, leveraging both structured and unstructured sources.
- Control Testing: Test controls over AI model development, deployment, monitoring, and lifecycle management, including data lineage, model versioning, and access controls.
- Issue Identification: Identify control gaps and open risks, raise insightful questions to identify root causes and business impact, and draw appropriate conclusions.
Required Skills:
- Experience: 3-4+ years of relevant experience in technology audit, AI/ML, data privacy, or information security.
- Audit Knowledge: Understanding of audit principles, tools, and processes (risk assessments, planning, testing, reporting, and continuous monitoring), with a focus on AI/ML systems.
- Communication: Ability to communicate clearly and concisely, adapting messages for technical and non-technical audiences.
- Analytical Skills: Ability to identify patterns, anomalies, and risks in model behaviors and data.
- Certifications: CISA, CISSP, or relevant AI/ML certifications (preferred, not required).
- Technical Knowledge: Strong understanding of AI/ML model development and deployment processes, as well as model interpretability, fairness, and robustness concepts.
Education:
Master’s or Bachelor’s degree (Computer Science, Data Science, Information Security, or related field preferred).
Pay: $94-$123 per hour.