EU AI Act Final Draft: Navigating the AI Act’s Impact on Organisations

The unveiling of the final text of the AI Act marks a pivotal moment in the regulatory landscape for Artificial Intelligence. With the Act swiftly progressing, organisations need to promptly adapt their AI strategies to meet the new legal and ethical standards.

In this article, AI Ireland delves into the critical aspects of the AI Act, highlighting significant recent changes and offering strategic insights to help organisations remain both compliant and successful in the evolving AI environment.

Prohibited AI Systems

The final text of the AI Act prohibits various AI practices to safeguard against manipulation and harm. Noteworthy prohibitions include AI systems that employ subliminal techniques or exploit vulnerabilities based on age, disability or social or economic situation.

Additionally, biometric categorisation systems inferring sensitive information and real-time remote biometric identification in public spaces for law enforcement are restricted. The Act also addresses concerns related to profiling and the creation of facial recognition databases through untargeted scraping.

High-Risk AI Systems

A significant change in the final text is the classification of AI systems as high-risk based on specific criteria, with a particular emphasis on profiling natural persons. Providers must undergo an AI Impact Assessment and register their systems in the EU database. Notably, the Act applies not only to providers placing AI systems on the market but also to deployers of general-purpose AI models, regardless of their location.

General Purpose AI

The AI Act introduces General Purpose AI (GPAI) models as a new category, defining them as models that display significant generality and can perform a wide range of tasks. Notably, the Act exempts AI models used solely for research and development activities before market release. GPAI models, especially those posing systemic risk, are subject to specific obligations, including standardised evaluations, risk assessments and cooperation with authorities.

Deep Fakes

In the context of elections and beyond, the Act addresses deep fakes, requiring disclosure when an AI system generates or manipulates image, audio or video content. Notably, exemptions exist for authorised law enforcement use and artistic, creative, satirical or fictional works.

Human Oversight and Employer Obligations

The Act emphasises the importance of human oversight proportional to the risks and autonomy of AI systems. Employers deploying high-risk AI systems must inform workers and their representatives, aligning with EU and national laws and practices.

Codes of Practice

The AI Office plays a pivotal role in developing codes of practice to support the AI Act’s application. Involving various stakeholders, including AI model providers, the codes aim to ensure compliance with specific articles, focusing on key performance indicators and the interests of relevant parties.

Testing AI Systems

The AI Act’s final draft outlines conditions for testing high-risk AI systems outside regulatory sandboxes, emphasising ethical guidelines, informed consent and liability for damages caused during testing. SMEs and startups are given priority access to regulatory sandboxes.

Third-Party Agreements and Technical Documentation

Providers of high-risk AI systems must have written agreements with third parties, similar to GDPR obligations. Technical documentation requirements differ for SMEs and startups, providing a simplified submission process.

Fines and Timelines

Non-compliance with the AI Act incurs fines based on the severity of violations. The Act becomes applicable to prohibited systems in six months, GPAI models in 12 months and high-risk AI systems in 36 months.

Strategic Steps for Organisations

Organisations must proactively align their AI systems with the AI Act’s ethical and legal standards. Key steps include rigorous evaluations, robust documentation, disclosure measures for AI-generated content, investment in human oversight, transparent communication with employees, meticulous data governance, preparation for compliance costs, staying informed and strategic planning for timelines.

Mark Kelly, Founder of AI Ireland, said the AI Act signifies a crucial shift towards responsible AI governance, urging organisations to embrace ethical AI aligned with societal values.

“I welcome the final text of the AI Act as a significant stride towards responsible AI governance. This Act represents a pivotal shift in our approach to AI, blending legal compliance with ethical responsibility,” he said.

“It is vital for organisations to recognise that this is not just about meeting regulatory requirements but about embracing a culture of ethical AI use that aligns with our societal values. The Act’s emphasis on prohibited and high-risk AI systems, along with the introduction of general-purpose AI models and provisions on deep fakes, underscores the need for a comprehensive and forward-thinking AI strategy. 

“Organisations must act swiftly and strategically to adapt, ensuring their AI systems are not only compliant but also ethically aligned and socially responsible. This is an opportunity for businesses to lead in the era of AI by championing transparency, accountability, and human-centric AI practices.” 

The AI Act presents a paradigm shift in the regulatory landscape, requiring organisations to navigate multifaceted challenges and opportunities. By embracing a proactive approach, organisations can ensure compliance, maintain ethical standards and position themselves for success in the dynamic AI-driven regulatory environment. As the Act shapes the future of AI governance, strategic adaptation becomes imperative for organisations to thrive in this new era.

AI Unleashed: Navigating the AI Revolution

AI Ireland’s latest book, “AI Unleashed: Navigating the AI Revolution,” is your must-read for 2024. For executives, policy architects or technology aficionados seeking to make sense of the intricate world of AI, it is an essential handbook. Available on Amazon in Kindle and hard-copy editions, the book furnishes you with the expertise and instruments required to employ AI both effectively and ethically.

Book an AI Presentation with AI Ireland today

Discover tailored presentations designed to meet the unique needs of your industry. Gain invaluable insights into the transformative power of AI technologies, ensuring your organisation stays ahead of the curve. Equip your team and stakeholders with the knowledge they need to confidently embrace the future.

Don’t miss the chance to enlighten your team and explore how innovation is positively impacting your industry. Secure your presentation now!

Navigating the Future with AI Trust, Risk and Security Management (AI TRiSM)

As we stride towards an AI-driven future, AI Trust, Risk and Security Management (AI TRiSM) emerges as a key technology trend poised to revolutionise businesses.

This innovative AI TRiSM framework allows organisations to identify, monitor and mitigate potential risks associated with the application of AI technology, including the rapidly evolving Generative and Adaptive AI. Adherence to this framework ensures compliance with pertinent regulations and data privacy laws.

In this article, we will unpack the concept of AI TRiSM, its operational dynamics and its strategic leverage for organisations.

Building Robust AI Systems: The Importance of AI Trust, Risk, and Security Management (AI TRiSM)

Companies implementing robust Artificial Intelligence Trust, Risk and Security Management (AI TRiSM) frameworks successfully deploy more valuable AI models. So, what does it take to make AI systems both secure and effective?

AI TRiSM and Cybersecurity: A Crucial Intersection 

AI models are susceptible to cyber threats, and cybercriminals can manipulate them to optimise malicious processes such as:

  • Malware Attacks
  • Data Breaches
  • Phishing Scams

In the first half of 2022, roughly 236 million ransomware attacks were reported globally, a sharp rise from previous years. This surge can be attributed to the widespread adoption of novel technologies combined with inadequate security measures.

The Imperative for an AI Bill of Rights 

The recently proposed U.S. blueprint for an AI Bill of Rights underscores the need for strict protective measures against potential AI perils, urging AI developers and users to integrate safety precautions within their AI models and strategies. This highlights the crucial need for rigorous AI TRiSM implementation.

Demystifying AI TRiSM

AI TRiSM is a comprehensive framework advocating for AI model governance, fairness, reliability, robustness, efficacy and privacy. It encompasses solutions, techniques and processes to enhance model interpretability, explainability, privacy, model operations and resistance against adversarial attacks, which is vital for both the enterprise and its customers.

IT leaders who invest time and resources into AI TRiSM can expect improved AI outcomes in terms of adoption, business goals and user acceptance. Given the relentless evolution of AI threats and compromises, AI TRiSM must be an ongoing effort.

The Rising Tide of AI TRiSM

By embracing AI transparency, trust and security, organisations are likely to witness an improvement in their AI model performance concerning adoption, business goals and user acceptance. By 2028, it’s predicted that AI-driven machines will comprise 20% of the global workforce, contributing to 40% of all economic productivity.

However, it’s important to note that several organisations have deployed countless AI models that even IT leaders find difficult to explain or interpret. Organisations failing to manage AI risk are more susceptible to negative AI outcomes, including security and privacy breaches, financial and reputational losses, and harm to individuals. Poorly managed AI could also lead to detrimental business decisions.

Strategising and Operationalising AI TRiSM

With increasing AI regulations on the horizon, it’s crucial to adopt practices promoting trust, transparency and consumer protection before such protections become mandatory. IT leaders need to adopt innovative AI TRiSM capabilities to ensure model reliability, trustworthiness, privacy and security.

Applying AI TRiSM should not wait until models are in production; delaying it leaves the development process exposed to potential risks. IT leaders should familiarise themselves with potential compromises and use the AI TRiSM solution set to adequately safeguard AI.

Successful implementation of AI TRiSM requires a cross-functional team, involving legal, compliance, security, IT and data analytics staff. Establishing a dedicated team or task force is recommended to derive optimal results, with appropriate business representation for each AI project.

Benefits of AI TRiSM extend beyond mere regulatory compliance, enabling organisations to enhance the business outcomes derived from their use of AI.

Conclusion

AI TRiSM capabilities ensure model reliability, trustworthiness, security and privacy. Organisations need to manage AI trust, risk and security for better AI adoption, achieving business goals, and user acceptance. Consider AI TRiSM as a comprehensive solution set to adequately protect AI. 

In the digital era, ensuring the safety and effectiveness of AI is a necessity, not a luxury. AI TRiSM plays a pivotal role in meeting these demands, heralding a secure and promising AI future.

Apply now to the 2023 AI Awards!

Applications for the 2023 AI Awards are open until August 25th. If you or someone you know is working on exciting AI, Data Science or Machine Learning projects, products or services, or leading work that is making a real impact in the industry, we want to hear from you! Entry is free, and there are 12 categories you can apply for across industry, academia and leadership.

Head over to www.aiawards.ie to submit an application, or feel free to contact liam@aiawards.ie with any queries about the submission process.