Salesforce’s UK chief, Zahra Bahrololoumi, has voiced her concerns about blanket regulations on artificial intelligence (AI) in the UK, calling for a more targeted and tailored approach.
As the AI landscape rapidly evolves, the Labour government’s AI policy is under scrutiny for its potential impact on a wide range of companies involved in AI development, from makers of consumer-facing tools like OpenAI’s ChatGPT to providers of enterprise-focused systems such as Salesforce’s AI platforms.
With the growing role of AI in industries from marketing to customer service, the debate over AI regulation is critical to balancing innovation with public safety.
Salesforce calls for proportional AI regulations as industry grows
Zahra Bahrololoumi, the CEO of Salesforce UK and Ireland, highlighted the need for policymakers to differentiate between AI companies developing consumer-facing products and those creating enterprise AI solutions.
Salesforce’s AI systems, such as the Agentforce AI platform, are designed to support businesses by automating tasks like customer service and sales operations. Unlike consumer-facing AI models that often operate in a more flexible regulatory environment, enterprise AI tools must comply with stringent privacy and security standards.
Rising concerns about data privacy and AI applications
One of the key issues raised by Bahrololoumi is the handling of sensitive data. Salesforce’s “zero retention” policy ensures that customer data used in its AI processes is never stored in its systems, maintaining high standards of data privacy.
This contrasts with consumer-facing AI models like ChatGPT or Anthropic’s Claude, where how user data is stored and used for training models remains less clear.
Data privacy remains a critical concern as AI technology becomes increasingly embedded in daily operations. Enterprise AI systems, particularly those used by businesses to handle customer interactions, must meet robust standards such as the General Data Protection Regulation (GDPR), which governs data security and privacy across the European Union and the UK.
Differentiating enterprise AI from consumer-facing AI tools
Bahrololoumi pointed out the fundamental difference between consumer and enterprise AI. Companies like Salesforce operate under stricter data protection regulations, whereas consumer-facing AI products may not need to meet the same standards.
While enterprise AI tools are subject to more rigorous checks, the broad application of AI regulations could inadvertently hinder the growth of AI in sectors where innovation is key.
“Targeted, proportional, and tailored” legislation is what Bahrololoumi sees as essential for the development of AI across different industries. Consumer-facing AI systems may require different guidelines than business-oriented solutions, which are typically governed by existing corporate regulations.
Labour’s AI policy
The Labour government has not yet introduced a specific AI bill, but AI regulations have been at the forefront of national debate. The government has expressed a desire to support the AI sector’s growth while ensuring safety and fairness in its implementation across industries.
Bahrololoumi’s comments come as the government considers the next steps in crafting a regulatory framework for AI.
A spokesperson from the UK’s Department for Science, Innovation and Technology (DSIT) mentioned that the government’s AI rules would target the most powerful AI models rather than implementing blanket regulations across all AI applications.
This focus could ensure that companies like Salesforce, which provide enterprise AI services, are not subject to the same regulations as firms developing consumer-facing technologies.
AI safety and ethics at the forefront for enterprise providers
The ethics and safety of AI are top priorities for enterprise providers like Salesforce. Their AI systems, which automate various business functions, must keep data handling secure, protect privacy, and remain compliant with corporate guidelines.
The company’s Agentforce platform, for instance, allows businesses to create autonomous digital agents that can handle complex tasks like customer service and marketing without compromising data security.
As AI continues to develop, the challenge for regulators will be to keep up with the rapid pace of change while ensuring that innovation is not stifled. Salesforce’s call for targeted and proportionate regulations reflects a broader concern across the tech industry about the balance between safety and growth.
The AI sector is poised to revolutionise industries, but it must be governed by a regulatory framework that supports innovation without compromising ethical standards or public trust.