Understanding AI Compliance: Challenges and Solutions for 2025
Explore the 2025 landscape of AI compliance, global regulations, and strategies to mitigate risks and ensure ethical AI practices.
According to Deloitte's latest State of Generative AI in the Enterprise report, a staggering 67% of surveyed organizations are increasing their investments in generative AI due to strong early value. Yet while organizations clearly recognize the need to leverage this rapidly evolving technology, confidence in their ability to do so effectively lags behind: only 23% of respondents felt they were highly prepared.
What’s holding them back? Potential risks, regulatory impacts, and governance issues.
Initial concerns about AI
The advent of AI has not been without its detractors, including such well-known tech figures as Elon Musk and Apple co-founder Steve Wozniak. Even Geoffrey Hinton, known as the “Godfather of AI,” spoke of the dangers of AI after leaving his position at Google, warning that AI could eventually become smarter than humans and take control.
Concerns about AI range from the potential for it to displace workers to privacy violations and the potential for bias. Some notable missteps have occurred as companies have embraced the technology in the hopes of streamlining operations and driving innovation.
Amazon, for instance, developed an AI-powered talent acquisition tool that proved to be biased against female job applicants. IBM’s “Watson for Oncology” project was found to make potentially dangerous treatment recommendations and was ultimately canceled.
In an effort to prevent unwanted outcomes and shape how AI technology is used, regulators around the world have driven significant developments in the AI regulatory landscape in recent months.
The growing need for AI compliance
AI technology has advanced and been widely adopted so quickly that it has outpaced the evolution of AI regulations. Organizations now face increasing pressure to ensure that their use of AI tools addresses concerns related to bias, privacy, and transparency while understanding and following emerging global and local regulatory trends.
Overview of global regulatory trends and standards
The AI regulatory landscape is complex and continually shifting. As 2025 gets underway, we’re seeing a diverse range of regulatory efforts around the globe, including:
The EU AI Act, whose impact is expected to extend beyond EU borders, much as GDPR’s did.
Growing pressure in the U.S. to establish clear guidelines while allowing companies to balance innovation with risk management.
Moves by countries like Canada, Australia, Brazil, and Singapore to align their regulations with the EU’s approach.
Emerging sector-specific regulations in industries like finance and healthcare.
Alongside these regulatory trends, technical and ethical considerations are growing as well.
Ethical considerations in AI deployment—addressing bias, privacy, and transparency
Since their emergence, AI tools have demonstrated that their output can contain bias based on the data and information used to train them. For instance, companies have found that these tools may introduce bias in areas such as hiring, lending, and law enforcement.
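To make this concrete, the sketch below shows one simple way a team might test a screening model’s recommendations for demographic parity. The data, column names, and tolerance threshold here are hypothetical illustrations, and real fairness audits draw on a range of metrics beyond this single check.

```python
# Minimal sketch: checking hiring recommendations for demographic parity.
# The dataset, column names, and threshold below are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest selection rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical screening results: 1 = recommended for interview, 0 = not.
candidates = pd.DataFrame({
    "gender":      ["F", "F", "F", "M", "M", "M", "M", "F"],
    "recommended": [0,   1,   0,   1,   1,   1,   0,   0],
})

gap = demographic_parity_gap(candidates, "gender", "recommended")
print(f"Selection-rate gap between groups: {gap:.2f}")
if gap > 0.2:  # hypothetical tolerance; real thresholds depend on context and law
    print("Warning: recommendation rates differ substantially across groups.")
```

A gap near zero means the model recommends candidates from each group at similar rates; a large gap is a signal to investigate the training data and features before the tool reaches production.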
Because AI systems rely on large amounts of data, concerns about the protection of personal data are also emerging. Companies must find an appropriate balance between processing personal data and improving their models and services. Concerns are also growing about the use of AI for surveillance in public spaces.
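One common way to strike that balance is to pseudonymize direct identifiers before data is used to improve a model. The sketch below is a hypothetical illustration; the field names and salt handling are assumptions, and pseudonymization alone does not satisfy every privacy requirement.

```python
# Hypothetical sketch: pseudonymizing direct identifiers before data is used
# for model improvement, so analysis can proceed without exposing raw PII.
import hashlib

SALT = b"rotate-and-store-this-secret-separately"  # placeholder; manage via a secrets vault

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"employee_id": "E-10472", "email": "jane@example.com", "tenure_years": 4}
safe_record = {
    "employee_id": pseudonymize(record["employee_id"]),
    "email": pseudonymize(record["email"]),
    "tenure_years": record["tenure_years"],  # non-identifying attribute kept as-is
}
print(safe_record)
```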
Companies are recognizing the need for transparency and ethical guidelines, both to protect proprietary and personal information and to build trust among their key stakeholders.
The AI compliance landscape
Various countries and regions have introduced new regulations, such as the EU Artificial Intelligence Act, to provide guidelines for AI use. Adopted by the European Parliament in March 2024, the Act is the world’s first comprehensive attempt to regulate AI use. It classifies AI systems into four levels of risk:
Unacceptable risk: Banned outright, including social scoring systems, manipulative AI, and real-time biometric identification systems used in public settings.
High risk: Strict compliance measures for AI systems used in healthcare, education, employment, law enforcement, border control, and the administration of justice and democratic processes.
Limited risk: Transparency obligations for generative AI models like ChatGPT.
Minimal risk: Largely unregulated, such as AI-enabled video games.
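For teams taking stock of their own AI systems, a simple internal inventory keyed to these tiers can make the classification actionable. The sketch below is a hypothetical illustration, not an official mapping; actual tier assignment under the Act requires legal review.

```python
# Hypothetical sketch of an internal AI-system inventory keyed to the
# EU AI Act's four risk tiers. Tier assignments here are illustrative only.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

inventory = [
    AISystem("resume-screener", "ranks job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "answers customer questions", RiskTier.LIMITED),
    AISystem("photo-filter", "applies cosmetic effects", RiskTier.MINIMAL),
]

# Surface the systems that need the most compliance attention first.
for system in sorted(inventory, key=lambda s: list(RiskTier).index(s.tier)):
    print(f"{system.tier.value:>12}: {system.name} ({system.purpose})")
```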
The AI Office, established within the European Commission, will oversee implementation and compliance, with penalties for the most serious violations reaching up to 35 million euros or 7% of annual global turnover, a scale comparable to GDPR fines. Companies in the U.S. that also operate in the EU or provide AI products or services to EU customers are required to comply with the Act, as are companies whose AI systems’ outputs are used in the EU.
To bridge the gap until the Act fully takes effect, another EU initiative—the AI Pact—has been put into place. It’s a voluntary pledge in anticipation of the Act’s requirements. Companies including Microsoft, Google, Amazon, IBM, Salesforce, SAP, and OpenAI have already signed the Pact.
The ultimate implications of AI are still somewhat unknown, and many companies and individuals are looking for increased oversight to ensure that AI is accurate, safe, and trustworthy.
That includes Visier. We consider compliance readiness instrumental to building and maintaining customer trust in our AI technology.
Here’s what we’re doing.
Visier’s approach to AI compliance
Visier has been leveraging AI for more than a decade to power our people analytics platform and predictive analytics capabilities. Visier’s digital assistant, Vee, uses generative AI to help customers navigate their Visier People® data.
Visier has created an internal AI Taskforce to assess our readiness to comply with the EU AI Act and our ability to voluntarily participate in early compliance commitments like the AI Pact.
In developing our AI governance strategy, we are identifying, planning, and producing customer-facing AI compliance materials that address topics such as bias, transparency, and training, beginning with Visier’s AI FAQs.
Visier ensures cross-functional collaboration on AI development initiatives and on the use of AI technologies within the company. We continually develop and implement processes and policies to meet the requirements of applicable laws and regulations, and we continuously monitor our internal controls, systems, and core infrastructure. We are also committed to communicating our position effectively, both internally and externally, as thought leaders and industry experts.
We believe in the positive power and potential of AI and are committed to responsible AI use and governance.