ARTIFICIAL intelligence (AI) has become a major disruptor in industries and workplaces worldwide. Rapid advances in the power and efficiency of AI systems at solving longstanding problems have turned the once-niche technology into an essential productivity tool.
However, the rapid integration of AI in workplaces has caused considerable friction, due in no small part to the lack of rules and governance standards underpinning its development and use. AI tools used recklessly, without rules and standards, can harm a company's bottom line and even its reputation.
To put things into context, AI tools have been implicated in several high-profile blunders. AI hiring tools trained on bad data have been reported to discriminate against job applicants based on their gender and race. Unsupervised AI chatbots used in customer service roles have made the news for giving customers false information. Careless professionals have been sanctioned for overreliance on AI tools in their research, which resulted in fake citations appearing in their work.
Even though laws on AI have yet to be passed in the Philippines, companies should take the initiative to reduce the risks that accompany AI integration by adopting AI governance frameworks under a risk-based approach. Proper AI governance balances innovation with the management of potential risks, such as privacy issues and biased outputs, through guidelines and policies.
These guidelines and policies are built on the principle that AI use must be transparent, ethical and accountable. These principles form the bedrock of most AI regulations and standards in other jurisdictions, such as the European Union's AI Act and Singapore's Veritas Toolkit for Responsible Use of AI in the Financial Sector.
Responsible companies aim to be transparent, making a point to disclose their use of AI, its capabilities and limitations, and their rules for its use to the people who will be affected by it.
Transparency is an essential ingredient in building confidence in a company's AI systems. It builds trust between the company and its stakeholders and gives stakeholders the opportunity to assess the correctness of AI outputs.
Transparency is also, in some cases, a legal requirement under National Privacy Commission Circular 2023-04 (Guidelines on Consent), such as when AI is used to profile data subjects or process their personal information.
A company's AI tools must also be fair and ethical in both their development and their use. On one hand, an AI tool is developed fairly and ethically when it is trained on high-quality data that is properly collected and diverse enough to cover a wide range of situations.
The collection of AI training data should not violate the rights of data subjects under the Data Privacy Act of 2012. Training data should also account for protected and underrepresented groups, such as persons with disabilities, to avoid AI outputs that violate laws protecting those groups.
On the other hand, an AI tool is used fairly and ethically when it is used only for the purposes for which it was built and not to circumvent the law. AI tools are not Swiss Army knives that can be used for all purposes.
Responsible AI users know the capabilities and limitations of their AI tools well and use them only for the purposes for which they were designed. This means using ChatGPT to structure and proofread drafts, not to write legal memoranda and expert opinions in full. It means turning to AI-powered resume screening tools to filter resumes, not treating them as the sole basis for a hiring decision.
Responsible AI users know that AI tools are just tools. There must be sufficient human oversight over all AI systems, and AI may not be used as a legal shield to deflect responsibility from irresponsible users.
Responsible companies are accountable for their AI use. Accountability means not only correcting AI errors but also preventing them from happening. Proper AI governance requires auditing AI tools to ensure their outputs are reliable, correct and legal, including algorithmic bias testing prior to deployment.
Accountability also requires companies to place AI experts in leadership and advisory roles to ensure that responsible AI principles are integrated into company policies.
AI governance is new and groundbreaking, but the principles underlying it are well-established. It follows a risk-based approach, applies established data protection principles, and integrates product safety standards.
AI governance is corporate governance for today's most disruptive technology. If companies want to maximize gains from the AI revolution, executives should put their hands on the wheel and guide AI adoption rather than merely ride the hype.
Atty. Edsel Tupaz heads the Data Privacy, Cybersecurity, and AI Initiatives at Gorriceta Africa Cauton & Saavedra. (LinkedIn: https://www.linkedin.com/in/edseltupaz/) At the 2023 Philippine Law Awards by Asian Legal Business, Edsel was named "Data Privacy & Protection Lawyer of the Year." He is among the "Top 100 Lawyers in the Philippines" for 2023, as released by the Asia Business Law Journal. Dual-qualified under the Philippine and New York Bars, Edsel serves as data protection officer and legal adviser to Fortune 500 and NASDAQ companies and many of the most impactful technology startups in Asia. He holds a Master of Laws from Harvard Law School in Cambridge, Massachusetts, and a Bachelor of Arts in Economics and a Juris Doctor from the Ateneo de Manila University. The views and opinions expressed above are those of the author and do not necessarily represent the views of Finex.