Deep dive into the state of AI & ML. Landscape, trends, regulation. Part 3. AI Regulation, privacy and other concerns
AI is emerging as a cutting-edge technological frontier with immense potential to revolutionize diverse industries through process automation and greater efficiency. Alongside that promise, however, concerns about limitations, privacy, regulation, and legal implications raise questions about AI’s future. Let’s explore its trajectory further.
Generative AI ethics: the major concerns
Distribution of Harmful Content and AI’s Role
AI systems have the capacity to automatically produce content using text prompts provided by humans. While these systems can bring substantial productivity benefits, there is also a risk of distributing harmful content, whether intentionally or unintentionally. According to Bret Greenstein, partner at PwC, an AI-generated email sent on behalf of a company may unintentionally include offensive language or provide harmful guidance to employees. To mitigate these risks, Greenstein advises using generative AI as a tool to enhance human efforts and existing processes. This approach ensures that the content aligns with the company’s ethical standards and upholds its brand values.
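To make the “AI as a tool to enhance human efforts” advice concrete, here is a minimal, illustrative sketch of a human-in-the-loop gate: an AI-drafted message only goes out after an automated screen and, if flagged, explicit human approval. The function names and the flagged-terms list are hypothetical placeholders, not a reference to any specific product or PwC process.

```python
# Illustrative sketch only: route AI-drafted content through a human review
# step before it is sent. `generate_draft` and `request_human_review` are
# hypothetical stand-ins for a model call and an existing approval workflow.

FLAGGED_TERMS = {"lawsuit threat", "slur_placeholder"}  # replace with your own policy list

def generate_draft(prompt: str) -> str:
    # Placeholder for a call to a generative model.
    return f"Draft reply for: {prompt}"

def needs_review(text: str) -> bool:
    # Cheap automated screen; a real system would add a moderation model or policy engine.
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def request_human_review(text: str) -> bool:
    # Placeholder: push the draft into an approval queue and wait for a decision.
    print("Sending draft to a human reviewer...")
    return True

def publish(prompt: str) -> str | None:
    draft = generate_draft(prompt)
    if needs_review(draft) and not request_human_review(draft):
        return None  # blocked by the reviewer
    return draft

print(publish("Remind the team about the new expense policy"))
```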
Copyright and Legal Risks
Renowned generative AI tools are extensively trained on vast databases containing images and text collected from various origins, including the internet. The challenge arises when these tools produce images or generate code without revealing the specific data sources, posing potential issues for institutions like banks dealing with financial transactions or pharmaceutical companies relying on specific formulas for complex molecules in drugs. The situation can lead to significant reputational and financial risks, especially if a company’s product is suspected of being based on another company’s intellectual property. Roselund suggests that companies should take steps to validate the outputs from these AI models until legal precedents offer more clarity regarding intellectual property and copyright concerns.
Breaches of data privacy
According to Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute, generative AI large language models (LLMs) are trained on datasets that may contain personally identifiable information (PII) of individuals. This PII can sometimes be obtained through simple text prompts. Unlike traditional search engines, it can be challenging for consumers to identify and request the removal of such information. Therefore, companies involved in building or fine-tuning LLMs must take measures to prevent embedding PII in these language models and ensure that the removal of PII from the models is straightforward and compliant with privacy laws.
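As a rough illustration of the kind of preventive measure described above, the sketch below scrubs obvious PII (emails and phone numbers) from text before it enters a fine-tuning dataset. The regex patterns are simplified assumptions; production pipelines typically combine such rules with NER-based detectors, allow/deny lists, and manual audits.

```python
import re

# Illustrative-only patterns for two common PII types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    # Replace matches with placeholder tokens so the record stays usable for training.
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

training_records = ["Contact Jane at jane.doe@example.com or +1 555 123 4567."]
clean_records = [scrub_pii(r) for r in training_records]
print(clean_records)  # ['Contact Jane at [EMAIL] or [PHONE].']
```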
Exposure of sensitive information
Generative AI is playing a significant role in democratizing and enhancing the accessibility of AI capabilities. However, as Roselund pointed out, this democratization and accessibility can also pose risks. For instance, a medical researcher might unintentionally disclose sensitive patient information, or a consumer brand could unknowingly expose its product strategy to a third party. Such unintended incidents have the potential to severely damage patient or customer trust and result in legal consequences. To address these challenges, Roselund suggested that companies should establish clear guidelines, robust governance, and effective communication from the top management down the hierarchy. It’s crucial to emphasize shared responsibility in safeguarding sensitive information, protected data, and intellectual property (IP). By implementing these measures, organizations can better protect themselves and their stakeholders from the adverse effects of sensitive information disclosure.
Amplification of Bias in AI Models
Generative AI has the potential to magnify existing biases, particularly when the data used to train large language models (LLMs) contains inherent biases beyond a company’s control. These biases can arise from various sources, such as historical imbalances and societal prejudices present in the data used for training. When AI models learn from biased data, they can unintentionally reinforce and amplify these biases, leading to skewed and unfair outcomes in the generated content. Having diverse leaders and experts in AI companies is crucial to identifying unconscious biases in data and models, as they bring different perspectives. Companies should continuously monitor AI models, establish clear guidelines, and collaborate with external experts to address bias amplification. Cultivating a culture of inclusivity and ethics is essential in mitigating bias and ensuring responsible AI development.
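One concrete form that “continuously monitoring AI models” for bias can take is tracking outcome rates across demographic groups and alerting when they diverge. The sketch below is a minimal illustration of that idea; the 0.2 threshold, the group labels, and the data are assumptions for demonstration only.

```python
from collections import defaultdict

def positive_rate_by_group(predictions):
    """predictions: iterable of (group, predicted_label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in predictions:
        counts[group][0] += int(label == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparity_alert(rates, max_gap=0.2):
    # Flag when the gap between the best- and worst-treated group exceeds the threshold.
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

rates = positive_rate_by_group(
    [("group_a", 1), ("group_a", 1), ("group_a", 0),
     ("group_b", 1), ("group_b", 0), ("group_b", 0)]
)
alert, gap = disparity_alert(rates)
print(rates, "gap:", round(gap, 2), "alert:", alert)
```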
Understanding Workforce Morale
AI is capable of handling various daily tasks performed by knowledge workers, such as writing, coding, content creation, summarization, and analysis. The acceleration of AI and automation has led to concerns about worker displacement, prompting ethical considerations for the future of work. Ethical responses include preparing the workforce for new roles generated by generative AI applications. Developing skills like prompt engineering will be essential for businesses to adapt to these changes and minimize negative impacts while ensuring growth. Nick Kramer from SSA & Company emphasized that the adoption of generative AI will significantly impact organizational design, work, and individual workers, making it vital for companies to invest in ethical strategies.
Data provenance
AI systems rely on vast amounts of data, which can be inadequately governed, have questionable origins, or include biases. Social influencers or the AI systems themselves may further amplify inaccuracies. Scott Zoldi, chief analytics officer at FICO, emphasized that the accuracy of a generative AI system hinges on the data it uses and that data’s provenance. For instance, ChatGPT-4 mines the internet for data, some of which may be unreliable or inaccurate, leading to challenges in providing accurate responses. FICO, for its part, has been using generative AI to simulate edge cases for fraud detection training, but it carefully manages the generated data as synthetic data, segregating it from the model’s primary training data to preserve accuracy and data integrity. By containing and carefully controlling the use of generative assets, organizations can address data provenance and accuracy concerns in generative AI applications.
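A simple way to act on this advice is to make provenance an explicit field on every training record and keep synthetic (generated) data segregated from the primary training set. The sketch below illustrates that pattern; the field names and source labels are assumptions, not FICO’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str  # e.g. "internal", "licensed", "synthetic" -- illustrative labels

def split_by_provenance(records):
    # Keep generated data out of the primary training set, but retain it for simulation.
    primary = [r for r in records if r.source != "synthetic"]
    synthetic = [r for r in records if r.source == "synthetic"]
    return primary, synthetic

records = [
    Record("Real transaction log line", source="internal"),
    Record("Generated edge-case fraud scenario", source="synthetic"),
]
primary, synthetic = split_by_provenance(records)
print(len(primary), "primary records,", len(synthetic), "synthetic records")
```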
Challenge of Explainability and Interpretability
Generative AI systems often group facts probabilistically, relying on learned associations between data elements. However, these associations are not always transparent, raising concerns about data trustworthiness. When analysts interrogate generative AI, they expect causal explanations for outcomes, but these systems primarily search for correlations, not causality. Scott Zoldi highlights the need for model interpretability to understand why a particular answer was generated and to assess its plausibility. Until generative AI can attain a higher level of trustworthiness and interpretability, caution should be exercised in relying on them for decisions that could significantly impact lives and livelihoods.
Effective regulation and laws surrounding AI are vital in ensuring the responsible and ethical deployment of this technology. The current lack of comprehensive legal frameworks and regulations raises concerns about potential misuse by individuals and companies. It is crucial to establish international cooperation to develop a global legal framework that enforces ethical AI practices, particularly in areas concerning privacy and other sensitive issues. Such measures will help build public trust and confidence in AI while safeguarding against its improper and unethical use.
Creating a standard for ethical AI practices is of utmost importance. To address the concerns surrounding AI, significant steps have been taken by leading authorities. The European Union (EU) has introduced the European Artificial Intelligence Alliance, dedicated to establishing ethical AI standards. Likewise, the U.S. National Science and Technology Council has released a report with the aim of setting ethical principles in AI research and development. These initiatives represent essential efforts to ensure responsible and principled use of AI on a global scale.
This is why UNESCO adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence, the very first global standard-setting instrument on the subject. To learn more about the recommendation, visit https://www.unesco.org/en/artificial-intelligence/recommendation-ethics?hub=32618 . Below we highlight the ten core principles that lay out a human-rights-centred approach to the ethics of AI:
1. Proportionality and Do No Harm. The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms which may result from such uses.
2. Safety and Security. Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.
3. Right to Privacy and Data Protection. Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.
4. Multi-stakeholder and Adaptive Governance & Collaboration. International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.
5. Responsibility and Accountability. AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.
6. Transparency and Explainability. The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.
7. Human Oversight and Determination. Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.
8. Sustainability. AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.
9. Awareness & Literacy. Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, media & information literacy.
10. Fairness and Non-Discrimination. AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all.
AI, such as ChatGPT, can also have negative implications for education. It may render marking coursework virtually impossible, thereby emphasizing the importance of exams even more. ChatGPT has faced bans in New York schools and universities in Japan due to concerns that students might exploit it to complete their assignments. Additionally, UK exam boards have recommended supervising students during coursework to prevent its usage.
AI Policy as Industrial Policy
The competition often referred to as the “AI race” between the US and China is being leveraged by various industry and national security players to resist regulatory interventions aimed at US Big Tech companies. Consequently, there has been significant policy momentum towards increased state support for large-scale AI development.
This is evident in three key policy areas:
- Antitrust or pro-competition regulation
- Data privacy
- Industrial policy that increases government support for AI development.
Over time, the rhetoric surrounding the US-China AI race has transformed from a casual topic of discussion to a firmly established stance, evident through collaborative efforts involving government, military, and tech-industry stakeholders. This position is further bolstered by legislative actions and ongoing regulatory discussions. The initiatives put forth underscore the perception of AI systems and the companies behind them not solely as commercial commodities but primarily as crucial strategic national assets.
Arguments against antitrust based on the US-China “AI race” are being cynically promoted by industry interests, yet the Biden administration, with its Executive Order on Promoting Competition in the American Economy, offers a clear refutation of this logic: it frames a competitive tech industry as the clearest path to advancing the national interest.
In 2021, the CCIA (Computer & Communications Industry Association), an industry lobby group whose members include Amazon, Apple, Google, Facebook, and others, published a white paper called “National Security Issues Posed by House Antitrust Bills” that sets out several reasons why pro-competition legislation threatens the national interest, including:
- The American Innovation and Choice Online Act would affect companies’ ability to resist malicious activity.
- The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act could impact national security by compelling leading US tech companies to share data and ensure interoperability with other organizations, including foreign entities.
- The Platform Competition and Opportunity Act would severely restrict US companies’ ability to make mergers and acquisitions but would not apply to foreign rivals.
- The Ending Platform Monopolies Act would also disadvantage US firms compared to their international competitors due to restrictions on mergers and acquisitions.
These lobbyists argue that together, these bills would threaten national security by risking the misuse of US intellectual property and data; reducing US law enforcement’s access to effective data; reducing the US’s ability to combat foreign misinformation; impeding cybersecurity efforts; giving foreign companies an advantage over US companies without any reciprocity; and “undermining U.S. tech leadership.”
The US-China “AI race” has become a basis for arguments against further regulations, particularly in data privacy and AI accountability. Critics point to Chinese companies’ perceived unfettered access to citizen data, contrasting it with proposed restrictions on data usage by US companies. For instance, Mark Zuckerberg warned that consent requirements for facial recognition could hinder the US from competing with Chinese counterparts. The vice president of the US Chamber of Commerce expressed concerns that federal privacy legislation could impact the competitiveness of US companies in the global AI race.
However, claims portraying China as a regulatory vacuum are countered by the growing body of data security and data protection regulations in China. While analysts acknowledge that Chinese privacy regulations may not be sufficient and enforcement may be lacking, they dispel the notion that Chinese companies have unchecked access to personal data.
These discussions also draw attention to the US as a global outlier, lacking comprehensive federal privacy protections, unlike the EU and several other countries. While acknowledging the need for more robust privacy regulation, analysts caution against oversimplifying the Chinese regulatory landscape and urge the US to address its own privacy protection shortcomings.
According to Stanford University’s AI Index, a total of 37 AI-related bills were enacted into law globally in 2022. While some countries are still navigating the complexities of AI regulation, others have taken proactive steps to establish national frameworks for monitoring and controlling the use and advancement of artificial intelligence technology.
European Union
The EU has taken a significant stride ahead of other Western countries in the global drive to implement AI regulations, with the Artificial Intelligence Act inching closer to becoming law. The Act is available at https://artificialintelligenceact.eu/the-act/
The initial proposal for the AI Act was put forth in 2021, but the recent advancement of advanced generative AI, exemplified by OpenAI’s ChatGPT, renewed the urgency for lawmakers to move forward with the proposed regulations. On June 14, the European Parliament took a significant step by approving the text in the draft of the AI Act, setting the stage for its potential passage into law.
The proposal encompasses a broad array of AI technologies, including AI-generated deepfake videos, chatbots like ChatGPT, certain drones, and live facial recognition systems. The EU has adopted a tiered risk-based approach in the AI Act, classifying applications based on the level of risk they present to the public. Unacceptable risk technologies are banned outright, while high-risk AI tools that could potentially impact safety or fundamental rights must undergo a risk assessment before being deployed to the public. Additionally, generative AI applications are required to disclose the copyrighted works used to train their programs.
The EU is targeting the law’s final approval by the year’s end. The next step involves negotiations between the European Commission, the parliament’s AI committee chairs, and the Council of the European Union to finalize the legislation.
Artificial Intelligence Act: different rules for different risk levels. The new rules establish obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose only minimal risk, they still need to be assessed:
Unacceptable risk. AI systems considered a threat to people will be banned. They include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
- Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition
Some exceptions may be allowed: For instance, “post” remote biometric identification systems where identification occurs after a significant delay will be allowed to prosecute serious crimes but only after court approval.
High risk. AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
- AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
- AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Generative AI. Generative AI, like ChatGPT, would have to comply with transparency requirements:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
Limited risk. Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions: users should be made aware when they are interacting with AI, and after interacting with an application they can decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
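To make the tiered logic easier to see at a glance, here is a small illustrative sketch that maps example applications to the Act’s draft risk tiers and the broad obligations attached to each. The example applications and obligation summaries are simplifications for demonstration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    # Obligation summaries are rough paraphrases of the draft AI Act's tiers.
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment before market entry and lifecycle monitoring"
    LIMITED = "transparency obligations (users must know they are interacting with AI)"
    MINIMAL = "no additional obligations"

# Hypothetical example applications, chosen only to illustrate the mapping.
EXAMPLE_CLASSIFICATION = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for application, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{application}: {tier.name} -> {tier.value}")
```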
On 14 June 2023, MEPs adopted Parliament’s negotiating position on the AI Act. Talks will now begin with EU countries in the Council on the final form of the law, with the aim of reaching an agreement by the end of this year.
Canada
Canada is currently considering a similar proposal, the Artificial Intelligence and Data Act (AIDA). You can explore it at https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
The AIDA proposes the following approach:
- Building on existing Canadian consumer protection and human rights law, AIDA would ensure that high-impact AI systems meet the same expectations with respect to safety and human rights to which Canadians are accustomed. Regulations defining which systems would be considered high-impact, as well as specific requirements, would be developed in consultation with a broad range of stakeholders to ensure that they are effective at protecting the interests of the Canadian public, while avoiding imposing an undue burden on the Canadian AI ecosystem.
- The Minister of Innovation, Science, and Industry would be empowered to administer and enforce the Act, ensuring that policy and enforcement move together as the technology evolves. An office headed by a new AI and Data Commissioner would be created as a centre of expertise to support both regulatory development and administration of the Act. The commissioner’s functions would evolve gradually from education and assistance alone to also include compliance and enforcement, once the Act has come into force and the ecosystem has adjusted.
- The AIDA would prohibit reckless and malicious uses of AI that cause serious harm to Canadians and their interests, through the creation of new criminal law provisions.
The AIDA Companion Document anticipates that businesses would be required to establish suitable accountability mechanisms to guarantee adherence to their legal responsibilities as stipulated in the AIDA.
The AIDA Companion Document outlines the following principles to guide the implementation of obligations under the AIDA:
- Human oversight and monitoring – High-impact systems must be designed and developed in such a way to enable people managing the operations of the system to exercise meaningful oversight, with a level of interpretability appropriate to the context.
- Transparency – Providing the public with appropriate information about how high-impact AI systems are being used, sufficient to allow the public to understand the systems’ capabilities, limitations, and potential impacts.
- Fairness and equity – Building AI systems with an awareness of the potential for discriminatory outcomes, with appropriate actions taken to mitigate discriminatory outcomes.
- Safety – High-impact AI systems must be proactively assessed to identify harms that could result from the use of the system, including foreseeable misuse.
- Accountability – Organisations should put in place governance mechanisms to ensure compliance, including proactive documentation of policies, processes, and measures.
- Validity and robustness – Analysing whether high-impact AI systems perform consistently, and whether they are stable and resilient in a variety of circumstances.
Moreover, the AIDA Companion Document sets out different measures that may apply at each stage of the AI lifecycle (a simple checklist sketch follows this list):
system design:
- performing an initial assessment of potential risks associated with the use of an AI system in the context and deciding whether the use of AI is appropriate
- assessing and addressing potential biases introduced by the dataset selection
- assessing the level of interpretability needed and making design decisions accordingly
system development:
- documenting datasets and models used
- performing evaluation and validation, including retraining as needed
- building in mechanisms for human oversight and monitoring
- documenting appropriate use(s) and limitations
making a system available for use:
- keeping documentation regarding how the requirements for design and development have been met
- providing appropriate documentation to users regarding datasets used, limitations, and appropriate uses
- performing a risk assessment regarding the way the system has been made available
managing the operations of a system:
- logging and monitoring the output of the system as appropriate in the context
- ensuring adequate monitoring and human oversight
- intervening as needed, based on operational parameters.
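As a simple illustration of how an organization might operationalize these lifecycle measures, the sketch below encodes them as a per-stage checklist and reports what is still outstanding. The stage names and item wording paraphrase the list above; the structure itself is an assumption, not something prescribed by the AIDA.

```python
# Illustrative checklist derived from the lifecycle measures listed above.
LIFECYCLE_CHECKLIST = {
    "design": [
        "initial risk assessment for the intended context",
        "bias assessment of dataset selection",
        "interpretability requirements decided",
    ],
    "development": [
        "datasets and models documented",
        "evaluation, validation and retraining performed",
        "human-oversight mechanisms built in",
        "appropriate uses and limitations documented",
    ],
    "availability": [
        "design and development compliance documentation kept",
        "user documentation on datasets, limitations, appropriate uses",
        "risk assessment of how the system is made available",
    ],
    "operation": [
        "output logging and monitoring",
        "adequate human oversight",
        "intervention procedures defined",
    ],
}

def outstanding_items(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return checklist items not yet marked complete, per lifecycle stage."""
    return {
        stage: [item for item in items if item not in completed.get(stage, set())]
        for stage, items in LIFECYCLE_CHECKLIST.items()
    }

print(outstanding_items({"design": {"initial risk assessment for the intended context"}}))
```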
The AIDA Companion Document highlights that going forward, following royal assent of the DCIA, the Canadian Government aims to consult industry, academia, and civil society to inform the implementation of the AIDA and its regulations.
United States
Although federal regulations on AI are not yet firmly established, both the White House and Congress have shown a growing commitment to prioritizing AI. In October 2022, the White House unveiled the Blueprint for an AI Bill of Rights (https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf), a set of proposed principles to govern the design and use of AI technology, aiming to safeguard the American public from potential harm. While the guide does not specify penalties for non-compliance by companies, it does offer suggestions on how to ensure that the technology is developed with a focus on preserving civil liberties.
The Blueprint also describes a framework of protections that should be applied to all automated systems with the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access:
- Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts;
- Equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or
- Access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.
A list of examples of automated systems for which these principles should be considered is provided in the Appendix https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
China
China has moved to the front of the pack after issuing the “world’s earliest and most detailed regulations on generative artificial intelligence models”.
The “provisional regulations,” which go into effect on August 15, were published by a group of seven Chinese regulators led by the Cyberspace Administration of China. All generative AI content presented to the public, including text, pictures, audio and video, will be subject to the new rules. Compared to an earlier draft of the rules released in April, the new regulations take a more supportive tone toward the new technology.
China already had regulations to limit the spread of deceptively manipulated images, audio and videos, known as deepfakes, which went into effect in January. On a smaller scale, the tech hub Shenzhen in southern Guangdong province passed China’s first local government regulation focused on boosting AI development in September last year.
SUMMARY
What conclusion can be drawn from the three parts of our review? AI is a highly powerful tool that can lead both to breakthroughs in knowledge and human development and to potential destruction. The ultimate impact depends on all of us and on the purposes and goals for which we choose to employ it.
In case you missed them:
Part 1. What is AI? AI & ML Landscape
Part 2. Investments, Fundraising, Insights and trends