
The AIHR AI Risk Framework for HR Professionals Explained

By Dr Dieter Veldsman, Dr Marna van der Merwe

Our research reveals that many HR practitioners feel uncertain about integrating AI technologies while minimizing risk, leading to stagnation in AI adoption efforts. To remain credible and relevant in an AI-driven future, HR has a critical role in ensuring responsible adoption.

The AIHR AI Risk Framework is a strategic framework developed to assist HR professionals in safely and effectively adopting AI within their organizations. 

In this article, we share the AIHR Risk Framework, exploring its parts and providing guidance on implementation through practical examples.


Why is an AI risk framework needed?

All risk frameworks help ensure legislative compliance and risk management, but they are also strategic tools to guide adoption, prioritize implementation initiatives, and support decision-making. 

The purpose of the AIHR AI Risk Framework is threefold:

  1. To guide HR professionals in making informed decisions about adopting AI
  2. To provide a structured approach for risk monitoring and mitigation by providing an overview of risk exposure in the internal and external environment
  3. To define the process of risk management required to drive adoption

Adopting and implementing the risk framework helps support the organization’s broader AI adoption strategy. It also leads to strategic outcomes for the safe, secure, and sustainable use of AI: 

  • Safe: Using AI in a fit-for-purpose manner that does no harm and does not exclude or discriminate unfairly
  • Secure: Using AI in a secure way that avoids cyberattacks, data mismanagement, and compromises of confidentiality and privacy
  • Sustainable: Adopting AI practices that can be repeated and scaled, ensuring sustained adoption of AI across the organization in a manner that provides value.

The AI risk framework explained

Using and adopting AI comes with specific risks that need careful monitoring and management. One of the most obvious risks is related to the nature and functionality of AI technologies, including issues like bias, fairness, explainability, and unintended consequences.

Another type of risk arises from how AI technologies are used in practice, which can stem from gaps in user knowledge, lead to reputational damage, or conflict with organizational values. Additionally, as AI becomes widespread across industries, important legislative requirements and standards must be followed, creating compliance-related risks that need active management.

The 4 parts of the framework

The AIHR AI Risk Framework consists of four interconnected parts, each addressing significant risks linked to AI’s use and adoption: 

  1. The first part covers external risks arising in the external environment.
  2. The second part covers internal risks within the organization.
  3. The third part is data governance, which underpins both external and internal risks and requires dedicated attention.
  4. The fourth part outlines the levels at which these risks should be managed, guiding the policies, practices, and individual behaviors that support effective adoption.

A clear risk management process supports the framework, helping manage the different risks across these levels.

External risks are usually outside the organization’s direct control, so a proactive approach is essential when adopting and implementing AI. These risks include:

  1. Reputational issues
  2. Legislative challenges
  3. Concerns about transparency and explainability

Reputational risk

The organization’s reputation depends on its approach to adopting AI. Risks arise from poorly implemented AI tools and practices, which can harm customers, partners, and the organization’s public image. Public sentiment towards AI in organizations also matters, especially considering the broader concerns about job losses, fears of future unemployment, and companies prioritizing AI over investing in their people.

Also, the increased computing power needed for AI raises environmental concerns about its carbon footprint, adding another layer of reputational risk. Organizations need to decide how to manage these risks, especially as sustainability worries grow.

Questions to ask:

  • What will people think about our organization if we take this action?
  • How will this affect the environment and our sustainability goals?
  • How will these actions impact the communities we serve and our customers’ perceptions?

What the research says

A recent Pew Research survey reveals that 52% of Americans are more worried than excited about AI’s growing role in everyday life, a rise from 38% in 2022. Awareness of AI is on the rise, with 90% of people having heard of it, but many are concerned about privacy and keeping human control over AI technologies. Opinions vary in areas like healthcare and online services, influenced by education and income levels. Nevertheless, privacy concerns are significant across all demographics.

Legislative risk

As governments worldwide establish regulations on AI usage, organizations must remain vigilant to comply with changing legal requirements. For example, the EU AI Act and upcoming U.S. regulations require proactive compliance.

Companies may need to review their current AI practices to ensure they meet these new legal standards and avoid potential fines. In HR, areas like recruitment and rewards are already in focus, with significant penalties possible if AI is found to discriminate unfairly during these processes.

As AI laws develop, HR must understand their impact on practices and policies. Organizations could face serious legal and ethical issues when AI monitors employees and consumers.

Questions to consider:

  • What local legislation do we need to comply with?
  • How does global legislative sentiment affect our AI strategies and actions?

What the legislation says

State legislatures in the U.S. are increasingly introducing bills to regulate artificial intelligence. A significant step was taken when Colorado passed the Colorado AI Act on May 17, 2024, making it the first comprehensive AI law in the country. This law, set to take effect in 2026, focuses on regulating automated decision-making systems.

It defines high-risk AI systems as those involved in important decision-making, highlighting the need to prevent bias and discrimination in AI results. Developers and users must take reasonable steps to avoid any discriminatory impacts from AI-driven decisions.

California is also taking action on AI with its California Consumer Privacy Act, which includes rules for automated decision-making technology (ADMT). The California Privacy Protection Agency has released draft rules that outline consumer rights for notice, access, and opting out of ADMT.

Although these regulations are still being developed, they are expected to require more transparency on how businesses use AI when finalized in 2024. In 2023, more than 40 state AI-related bills were introduced, highlighting the growing focus on AI regulation across the nation.

Transparency and explainability

Increased regulatory scrutiny is expected to push organizations toward greater transparency in AI adoption. Compliance with legal frameworks will require transparent reporting on AI usage. Moreover, companies must decide how transparent they will be about their AI strategies, particularly when disclosing such information may expose competitive advantages.

Beyond transparency, the concept of explainability also becomes a significant risk consideration. Explainability in the AI domain refers to the ability of a human subject matter expert to explain the behavior and decisions of AI algorithms and agents. In other words, is AI acting as expected, and if not, can we explain why?

Explainability will become increasingly important for organizations to manage: an AI model whose behavior cannot be shown to be as expected, or at least explainable, poses an inherent risk.

Questions to ask:

  • How would we explain the AI solution or output to someone unfamiliar with the technology?
  • Would our actions pass the billboard test if pushed into the spotlight?
  • Are we transparent about how we use data and for what purpose?
  • Will our practices pass scrutiny from outside?

AI transparency

Many banks and lending institutions use AI algorithms to assess an individual’s creditworthiness. If a customer is denied a loan, however, the bank must be able to explain why the AI system made that decision, particularly to comply with regulatory standards like the Fair Credit Reporting Act.

Accountability for AI

In February 2024, Air Canada was ordered to pay damages to a passenger after its virtual assistant provided incorrect information during a difficult time. Following the death of his grandmother in November 2023, Jake Moffatt consulted the airline’s chatbot about bereavement fares. The virtual assistant advised Moffatt to purchase a regular-priced ticket from Vancouver to Toronto and apply for a bereavement discount within 90 days.

Acting on this advice, he bought a one-way ticket to Toronto and a return flight to Vancouver. However, when Moffatt submitted his refund claim, Air Canada rejected it, stating that bereavement fares could not be claimed after purchasing tickets.

Moffatt took the matter to a Canadian tribunal, arguing that the airline was negligent and had misrepresented its policies through the chatbot. Air Canada attempted to avoid liability by arguing it wasn’t responsible for the chatbot’s misinformation. The tribunal disagreed, stating that the airline failed to ensure the chatbot provided accurate information.

Risks within the internal environment are often more controllable and can be directly addressed by how a business applies and uses AI. These risks include: 

  1. Ethical considerations
  2. Privacy and confidentiality
  3. Bias and fairness

Ethical considerations

Ethical considerations go beyond meeting legal requirements; organizations need to set clear principles for adopting AI. This means considering the effects of AI on job loss, the need for reskilling, and workforce changes. Having ethical guidelines can help navigate these challenges and ensure AI is used responsibly within the organization.

Questions to ask:

  • Is this the right decision for our organization?
  • Will these actions contradict our values and principles?
  • What effect will these actions have on our culture?

Example from practice

OpenAI filtered sexual and violent content out of the dataset used to train DALL·E 3 by employing classifiers to detect inappropriate material. Likewise, AI models have learned to recognize which questions are not suitable to answer.

For instance, earlier models attempted to answer questions like “How do I build a bomb?” or prompts that could be interpreted as hate speech, while newer versions identify such inappropriate prompts and refuse to answer.

Privacy and confidentiality

Data should always be handled with the utmost respect for privacy and confidentiality, whether it involves employees or customers. Organizations need to ensure that personal data is stored, processed, and managed according to legal standards and ethical practices. This includes how data is processed through AI tools and whether it influences AI-generated recommendations. 

Questions to ask:

  • How are we keeping data secure?
  • Is our data protected and compliant with regulations?
  • Are we transparent about how we use individual data, and for what purpose?
  • Have we informed consumers and employees about our use of personal information?

Example from practice

In 2020, Clearview AI faced major backlash for collecting billions of photos from social media and websites without user consent to develop a facial recognition system. This raised significant privacy issues since many people didn’t know their images were being gathered and used. Clearview AI subsequently sold its technology to law enforcement agencies, adding further legal and ethical complications and igniting discussions about surveillance, privacy rights, and the misuse of personal data.

Bias and fairness

Using AI introduces risks related to bias, which need to be carefully mitigated. AI systems can interpret data in ways that inadvertently exclude certain groups or reinforce harmful stereotypes. For instance, AI tools in recruitment need close monitoring to ensure they don’t unfairly filter out candidates based on irrelevant criteria. It’s important to understand how AI makes decisions and ensure those decisions follow fairness principles to minimize bias.

A clear approach to managing bias begins with understanding where AI is used, for what purposes, and under what controls. Proper monitoring of AI algorithms is essential to ensure their behavior aligns with organizational goals and ethical standards.
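One common control of the kind described above is an adverse-impact check on AI screening outcomes, such as the four-fifths (80%) rule often applied to selection rates in recruitment. The sketch below is illustrative only: the group labels, outcome data, and 0.8 threshold are hypothetical placeholders, not part of the framework itself.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below the four-fifths (80%)
    threshold relative to the highest-rate group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio": round(r / best, 3),
                "flagged": (r / best) < threshold}
            for g, r in rates.items()}

# Hypothetical screening outcomes: (group label, passed the AI screen?)
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80

print(adverse_impact_ratios(outcomes))
```

Running a check like this at a regular cadence (for example, each hiring cycle) gives HR a concrete answer to the validation questions below, rather than relying on vendor assurances alone.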

Questions to ask:

  • Do we have controls in place to monitor known biases?
  • Do we have oversight to monitor how the AI model continues to learn?
  • How frequently do we validate how AI performs in line with its intended purpose?

Example from practice

Biases have been identified in generative AI applications, particularly in how they portray professionals of different ages and genders. Academic research found that when prompted to create images of individuals in specialized professions, the system produced images of younger and older people, but older individuals were consistently depicted as men. This reinforces gender stereotypes, suggesting men are more likely to hold senior or specialized roles in the workplace.

Data governance as the cornerstone

Data governance—which includes how data is used, stored, and eventually destroyed—is a key concern for both external and internal environments. Organizations need to ensure their data handling practices meet changing legal requirements and align with ethical decisions about data use, all while being transparent to build trust with stakeholders.

Good data governance should focus on the following aspects:

  • Data quality and integrity: This means ensuring data is complete, consistent, valid, and accurate.
  • Data collection practices: Organizations should clearly identify data sources, label data accurately for AI training, and work to reduce bias in data collection.
  • Data privacy and security: Compliance with data privacy regulations and best practices, including techniques like anonymization, encryption, and access control, is essential.
  • Data lifecycle management: This involves managing data retention, storage, and disposal, along with traceability and versioning of datasets to support reproducibility and auditing of AI models.
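As one small illustration of the privacy and security practices above, direct identifiers can be pseudonymized with a keyed hash before data reaches an AI tool, so records remain joinable without exposing identities. This is a minimal sketch, not a complete privacy solution: the key value and field names are hypothetical, and in practice the key would live in a secrets manager.

```python
import hmac
import hashlib

# Hypothetical key for illustration; in practice, load from a secrets manager.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an employee email) with a stable
    keyed hash: the same input always maps to the same token, so datasets
    can still be joined, but the identity is not exposed."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "engagement_score": 4.2}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Note that pseudonymization is weaker than full anonymization: whoever holds the key can re-link records, so key management and access control remain essential.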

Managing risks across three levels

Risks related to the external and internal environment and to data management can arise at various levels. AI-related risks are therefore best managed across three levels within the organization.

Level 1: Individual behavior

At the individual level, HR practitioners must address how employees interact with AI tools. Proper education, clear guidelines, and regular training are necessary to ensure employees understand their ethical and legal obligations when using AI.

Level 2: Processes, practice, and systems

The second level involves the systems, processes, and practices for AI adoption. Organizations must carefully manage how AI is integrated into workflows, including monitoring third-party vendor systems to ensure alignment with internal policies.

Level 3: Organizational policies and philosophy

At the highest level, organizations should establish a formal AI policy outlining their AI governance approach. This policy should guide decision-making across all aspects of AI adoption, from risk mitigation to ethical considerations.

Implementing a unified and continuous risk management process 

Managing AI-related risks is not a once-off event; it requires a continuous cycle that identifies risks, outlines the actions needed to mitigate and manage them, and monitors them over the longer term. This cycle includes four steps:

  • Step 1 – Identify: Recognize existing and emerging risks within the framework.
  • Step 2 – Mitigate: Develop and implement strategies to reduce identified risks.
  • Step 3 – Monitor: Regularly review risks and the effectiveness of mitigation efforts.
  • Step 4 – Audit: Conduct periodic audits to assess the framework’s performance and adjust as needed.
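The four steps above can be kept visible in a simple risk register that tracks each risk through the cycle. The sketch below is one possible minimal implementation, not a prescribed tool: the risk names, levels, and log notes are hypothetical examples.

```python
from dataclasses import dataclass, field

# The four steps of the continuous cycle, in order.
STEPS = ["identified", "mitigating", "monitoring", "audited"]

@dataclass
class Risk:
    name: str
    level: str                  # e.g., "individual", "process", "policy"
    status: str = "identified"  # every risk starts at Step 1: Identify
    log: list = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Move the risk to the next step of the cycle, recording why.
        After an audit, the cycle loops back to identification, since
        risk management is continuous rather than once-off."""
        i = STEPS.index(self.status)
        self.status = STEPS[(i + 1) % len(STEPS)]
        self.log.append((self.status, note))

risk = Risk("Chatbot gives incorrect policy answers", level="process")
risk.advance("Added human review of high-stakes answers")   # mitigate
risk.advance("Monthly sampling of chatbot transcripts")     # monitor
risk.advance("Quarterly audit of sampled answers")          # audit
```

Even a lightweight register like this makes it easy to report where each risk sits in the cycle and to show auditors a trail of mitigation and monitoring decisions.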

This approach surfaces and manages risks consistently and ensures that risk management is strategically aligned and responsive to internal and external changes. 


Take action

The adoption of AI opens the door to transformative possibilities for HR, empowering teams to drive innovation and efficiency like never before. Yet, with great potential comes great responsibility. 

By adopting a holistic risk framework, HR professionals can confidently navigate the complexities of AI, ensuring its adoption is secure, sustainable, and aligned with the ultimate goal: creating meaningful value for the organization and its people.

Stay updated and subscribe to the monthly Leading HR newsletter and LinkedIn page. Our experts provide curated trends and cutting-edge thinking for HR leaders.

About the Authors

Dr Dieter Veldsman, Chief HR Scientist
Dieter Veldsman is a former CHRO and Organizational Psychologist with over 15 years of experience across the HR value chain and lifecycle, having worked for and consulted globally with various organizations. At AIHR, he leads research initiatives and develops educational programs aimed at advancing the HR profession. Dr. Veldsman is regularly invited to speak on topics such as Strategic HR, the Future of Work, Employee Experience, and Organizational Development.
Dr Marna van der Merwe, HR Subject Matter Expert
Marna is an Organizational Psychologist with extensive experience in Human Resources, Organizational Effectiveness, and Strategic Talent Management. At AIHR, she contributes as a Subject Matter Expert, driving thought leadership and delivering insights on talent management and the evolving nature of careers. Dr. van der Merwe is a researcher, published author, and regular conference speaker, providing expertise in shaping future-forward HR practices.