Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate.
Large Language Models (LLMs) are transforming the fields of natural language processing, computer vision, and beyond. However, they also pose significant challenges for security and safety, such as malicious attacks, unintended biases, and harmful outputs. How can we ensure that LLMs are robust, reliable, and trustworthy in the face of these threats? How can we design LLMs that are aligned with human values and ethical principles? How can we leverage LLMs to enhance the security and safety of other systems and applications?
We are looking for a Principal Applied Scientist Manager who will lead research teams in developing approaches to harden LLMs against security and safety threats in a durable manner. This will require deeply technical innovation across Microsoft, academia, and industry. You will lead a team of world-class researchers and engineers to develop novel methods, tools, and frameworks for LLM security and safety. You will also collaborate with internal and external partners to apply your research to real-world scenarios and challenges. You will have the opportunity to publish your work in top-tier venues and contribute to the scientific community and the broader society.
Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Join us in advancing the state of the art in LLM security and safety research.
Qualifications:
Required Skills:
- Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 8+ years related experience (e.g., statistics, predictive analytics, research)
- OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)
- OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research)
- OR equivalent experience.
- 3+ years people management experience
- Experience in Python, PyTorch, TensorFlow, or other machine learning frameworks.
Other Requirements:
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
- PhD in computer science, electrical engineering, mathematics, or related fields, with a focus on machine learning, natural language processing, computer vision, or security.
- Experience conducting high-quality research and publishing.
- Experience working with large-scale datasets and Large Language Models (LLMs), such as GPT-3 (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), etc.
- Experience in applying machine learning to security and safety domains, such as malware detection, fraud prevention, or cyber-physical systems.
- Experience in leading and mentoring researchers and engineers.
Applied Sciences M6 – The typical base pay range for this role across the U.S. is USD $158,500 – $276,600 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $202,800 – $304,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
#MSFTSecurity #MSecR #AMBITION
Responsibilities:
- Lead research projects to develop and evaluate novel approaches for LLM security and safety, such as adversarial defense, robustness verification, bias mitigation, and ethical alignment.
- Design and implement prototypes and experiments to demonstrate the effectiveness and scalability of your research.
- Communicate and disseminate your research findings and insights to internal and external stakeholders, including academic peers, product teams, and customers.
- Manage a team of applied scientists and developers who will develop and sustain new approaches, tools, and techniques to advance AI safety.