Technology Enablement
Consultant — AI Strategy, Governance & Security
McLean, VA
About the Role
We are building a differentiated advisory practice at the intersection of AI strategy, governance, and security. As organizations move from AI experimentation to enterprise-scale deployment, they face urgent questions: How do we govern AI responsibly? How do we secure agentic architectures? How do we align technical capabilities with business strategy and regulatory requirements?
This role is designed for a technically grounded early-career professional who wants to help answer those questions alongside senior practitioners. You will work directly with the practice lead on client-facing engagements, contribute to the development of proprietary frameworks and tools, and build expertise in one of the fastest-growing areas of consulting.
This is not a traditional strategy consulting role, and it is not a pure engineering role. It sits at the intersection—you need to be comfortable reading Python, understanding how LLMs and agentic systems work, AND translating that knowledge into governance frameworks, risk assessments, and executive-ready deliverables. If you thrive at the border between technical depth and business impact, this is for you.
What You Will Do
Client Delivery (50–60%)
- Support the design and delivery of AI governance assessments for clients across industries, leveraging frameworks such as ISO/IEC 42001, NIST AI RMF, and the EU AI Act.
- Conduct technical reviews of client AI systems, architectures, and deployments to identify governance gaps, security vulnerabilities, and risk exposure.
- Develop AI risk registers, control mappings, and maturity assessments tailored to each client’s organizational context and regulatory landscape.
- Prepare client-facing deliverables including assessment reports, executive briefings, implementation roadmaps, and policy recommendations.
- Participate in workshops and stakeholder interviews with client teams ranging from data scientists to C-suite executives.
Practice Development (25–30%)
- Help build and refine the practice’s proprietary AI governance and security framework, integrating industry standards with practical implementation experience.
- Research emerging AI risks, including those specific to agentic AI systems, LLM-based applications, MCP architectures, and AI supply chains.
- Develop reusable tools, templates, and accelerators (e.g., assessment questionnaires, control libraries, risk scoring models) to scale the practice.
- Contribute to thought leadership content: draft LinkedIn posts, white papers, blog articles, and conference presentation materials.
- Monitor regulatory and standards developments (EU AI Act, state-level AI legislation, ISO/IEC updates, OWASP LLM Top 10) and maintain a current knowledge base.
Technical Contribution (15–20%)
- Build proof-of-concept tools and demos using Python to illustrate governance and security concepts for clients (e.g., prompt injection demonstrations, model evaluation dashboards, automated compliance checks).
- Evaluate and test AI platforms, tools, and vendor solutions from a governance and security perspective.
- Support the practice lead’s technical fluency development by preparing technical briefings, annotated code walkthroughs, and “translation” materials that bridge technical and executive audiences.
- Stay hands-on with AI/ML development trends: experiment with agentic frameworks (LangChain, LangGraph, CrewAI), RAG architectures, and model evaluation techniques.