Navigating AI innovation and legal risks

What international businesses in China need to know

As artificial intelligence (AI) continues to transform industries worldwide, Alex Roberts and Yang Fan argue that international companies operating in China face a rapidly evolving landscape of technological innovation and legal complexity. Recent developments highlight both the opportunities and the risks associated with deploying AI-driven solutions across sectors such as finance, marketing and compliance.


Global AI regulation: China’s distinct approach

While the European Union advances its comprehensive AI Act and the United States pursues a fragmented regulatory path, China has emerged as a leader in both AI regulation and innovation. The country’s ‘AI Plus’ strategy and targeted regulations, such as those governing deepfakes and content labelling, set it apart as a jurisdiction with unique compliance challenges for multinational organisations.

Key technological developments in the Asia-Pacific

China and the broader Asia-Pacific region are at the forefront of AI-driven advancements, including:

  • Efficient AI models, e.g. DeepSeek and Qwen
  • Advanced semiconductor development, such as Huawei chips
  • Drone-enabled logistics allowing autonomous delivery, e.g. Meituan drones
  • Major investments in data centre infrastructure, notably in Malaysia, Singapore and the Philippines

Legal risks in AI deployment

Content creation and marketing

In the realm of content creation and marketing, AI is enabling companies to generate images, advertising copy and videos with unprecedented speed and efficiency. However, this innovation carries significant legal risks. One major concern is intellectual property infringement: AI models trained on vast datasets may inadvertently reproduce copyrighted material, leading to potential legal claims – an issue that has already been the subject of several Chinese court cases. There is also the phenomenon of ‘hallucination’, where AI produces factual errors that pose reputational risks and could result in misleading advertising or breaches of consumer protection laws. The rise of deepfakes – realistic but fabricated videos or audio – has prompted China to introduce specific regulations to protect brand integrity and prevent unauthorised impersonation. Furthermore, China’s content labelling laws require careful navigation, as multinational teams must comply with a dual-track regulatory regime that is not yet harmonised across jurisdictions.

Communication and transcription

When it comes to communication and transcription, many platforms now offer real-time AI transcription and translation services to automate meeting minutes and enhance accessibility for global teams. While these tools are valuable, they introduce legal complexities around confidentiality and legal privilege. Using third-party AI transcription services for privileged conversations could risk waiving that privilege unless the provider is clearly acting as an agent of the principal. Financial institutions, in particular, must also manage record-keeping obligations, ensuring compliance without inadvertently creating excessive records. Data privacy is another critical issue, as transcribing conversations involves processing personal data, which requires a legal basis and transparency under data protection laws. Recent regulatory actions in Europe and Asia underscore the importance of obtaining valid consent and maintaining clear data processing policies.

Customer profiling and automated decision-making

AI’s role in customer profiling and automated decision-making is expanding rapidly, with models analysing transaction histories and social media activity to assess creditworthiness or profile customers. This brings a risk of discrimination and bias, as models trained on historical data can perpetuate or even amplify unfair treatment rooted in societal stereotypes around characteristics such as race or gender. The complexity of some AI models also makes their decisions difficult to explain, which can conflict with individuals’ ‘right to an explanation’ under laws such as the EU’s General Data Protection Regulation and China’s Personal Information Protection Law. Moreover, using personal data collected for one purpose, such as social media activity, for another, such as credit scoring, may violate the principle of purpose limitation. Regulators in Hong Kong and elsewhere are strengthening the rules in this area, and Asia is seeing a trend toward standardised data-sharing frameworks – though approaches vary between market-driven and regulation-led models.

Fraud detection and anti-money laundering (AML) compliance

In the field of fraud detection and AML compliance, financial institutions are increasingly deploying AI agents to streamline customer due diligence, document verification and risk assessment. While 61 per cent of banking executives identify fraud detection as the top driver of business value from AI, these systems are also prime targets for cyberattacks.[1] Malicious actors may attempt to manipulate transaction inputs, extract sensitive model data, or poison training data, potentially compromising the integrity of fraud monitoring systems. Algorithmic bias is another concern, as AI models may disproportionately flag legitimate transactions from certain demographic groups or regions, leading to unfair account freezes or enhanced due diligence. The lack of transparency in AI decision-making further complicates matters, as regulators now require financial institutions to provide clear explanations for why transactions are flagged or customers are subjected to increased scrutiny.

Asset management and robo-advisory

Finally, in asset management and robo-advisory services, AI advisers are helping clients select funds and optimise portfolio strategies by analysing both structured and unstructured data. However, financial institutions must ensure that AI tools do not displace their fiduciary responsibilities. Even when AI provides recommendations, ultimate responsibility remains with the institution, and questions of liability may arise between the institution, the AI developer and individual advisers. Algorithmic failure is a real risk, as biases or errors in large language models can lead to inappropriate investment recommendations or even market disruption. Additionally, firms must retain records of all communications and the basis for investment recommendations, which can be challenging when dealing with AI-generated content and ‘black box’ decision-making processes.

Conclusion: Compliance and opportunity

As AI technologies continue to advance at an unprecedented pace, international companies operating in China must navigate a complex intersection of innovation and regulation. The deployment of AI-driven solutions across finance, marketing, compliance, and customer service presents significant opportunities for operational efficiency, competitive advantage, and enhanced customer experiences. However, these benefits must be balanced against a growing array of legal and regulatory obligations that are unique to China’s regulatory framework.

Organisations should adopt a proactive compliance posture that integrates legal considerations into the AI development lifecycle from the outset. This includes conducting thorough risk assessments before deploying AI systems, implementing robust data governance frameworks, ensuring transparency in automated decision-making processes, and maintaining comprehensive documentation of AI model training, testing and deployment. Particular attention must be paid to China’s specific requirements regarding content labelling, deepfake regulations, and the dual-track regulatory regime that may differ from frameworks in other jurisdictions.

Cross-functional collaboration between legal, compliance, technology, and business teams is essential to identify and mitigate risks while maximising the strategic value of AI investments. Regular training and awareness programmes should be implemented to ensure that personnel understand both the capabilities and limitations of AI systems, as well as their legal obligations when deploying such technologies. Furthermore, organisations should establish clear accountability structures that delineate responsibility for AI-related decisions and outcomes, recognising that technological automation does not eliminate institutional liability.

Looking ahead, the regulatory landscape for AI in China is expected to continue evolving rapidly, with authorities demonstrating a willingness to introduce targeted regulations in response to emerging risks and technological developments. International companies that invest in building adaptive compliance frameworks, maintain open dialogue with regulators, and stay informed of regulatory developments will be best positioned to harness AI’s transformative potential while managing legal exposure effectively.


Alex Roberts is a partner and head of China technology, media and telecommunications (TMT), and privacy at Linklaters. Yang Fan is a solicitor specialising in TMT and privacy at the firm.

Linklaters LLP is a leading global law firm, supporting clients in achieving their strategies wherever they do business. The firm uses its expertise and resources to help clients pursue opportunities and manage risk across emerging and developed markets around the world.

Linklaters has a long-established presence in China, with about 50 years of experience advising Chinese and international corporates, Chinese state-owned enterprises and financial institutions on their most important strategic deals. We have a truly integrated team of over 200 lawyers working across China’s three major business centres: Beijing, Hong Kong SAR and Shanghai.


[1] Banking in the AI era: The risk management of AI and with AI, IBM, 23 June 2025, viewed 9 January 2026, <https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/banking-in-ai-era>