More Than Half of Generative AI Adopters Are Breaking the Rules at Work! — Recent Study from Salesforce

Quick take:

Salesforce surveyed over 14,000 global workers across 14 countries for the latest iteration of its Generative AI Snapshot Research Series, ‘The Promises and Pitfalls of AI at Work.’ The research reveals that, despite the promise generative AI offers workers and employers, a lack of clearly defined policies around its use may be putting businesses at risk.

- Generative AI refers to a type of artificial intelligence that creates entirely new data by mimicking the styles and patterns found in its training data. This can include text, images, code, audio, and video. Unlike traditional AI, which analyzes existing data to classify it or make predictions, generative AI produces new content.
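The idea of "generating new content that mimics existing patterns" can be illustrated with a toy character-level Markov chain. This is a deliberate simplification for intuition only; real generative AI systems use large neural networks, and all names below are illustrative:

```python
import random

def build_model(text, order=2):
    """Map each character n-gram to the characters that follow it in the text."""
    model = {}
    for i in range(len(text) - order):
        gram = text[i:i + order]
        model.setdefault(gram, []).append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text that mimics the statistics of the training text."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        gram = out[-len(seed):]
        choices = model.get(gram)
        if not choices:  # n-gram never seen in training data; stop generating
            break
        out += rng.choice(choices)
    return out

model = build_model("the cat sat on the mat and the cat ran")
print(generate(model, "th"))
```

The output is new text that was never in the training string verbatim, yet is built entirely from patterns the model observed, which is the core idea behind generative AI at any scale.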

Common applications include:
- Generating creative content: producing realistic images and writing creative text formats such as poems, scripts, code, and musical pieces.
- Drug discovery: Simulating molecules to identify potential drug candidates.
- Material science: Designing new materials with desired properties.
- Personalization: Recommending products, tailoring marketing content, and generating personalized experiences.

- Generative AI tools are software applications or platforms that allow users to create and interact with AI-generated content. These tools can range from user-friendly interfaces for non-technical users to complex programming libraries for developers.

Breaking the rules with generative AI can lead to various negative consequences for individuals and organizations:
- Reputational damage: Unethical use of AI can damage an organization's reputation and public trust.
- Legal issues: Depending on the specific misuse, legal issues might arise, including copyright infringement or privacy violations.
- Algorithmic bias: Unethical practices can perpetuate or amplify existing biases in AI algorithms, leading to unfair or discriminatory outcomes.
- Erosion of trust: Breaking the rules can erode trust between employees, management, and the public.

Organizations can reduce these risks in several ways:
- Develop clear and comprehensive AI policies: These policies should address issues like data privacy, bias mitigation, transparency, and accountability.
- Educate employees: Train employees on the responsible use of AI, including understanding ethical principles and potential risks.
- Establish oversight mechanisms: Implement processes to monitor AI use and ensure compliance with policies and regulations.
- Promote open communication: Encourage employees to report any concerns or potential misuse of AI.

Implementing responsible AI can be challenging due to:
- Rapidly evolving technology: Keeping up with the latest advancements and potential risks of generative AI.
- Balancing innovation and ethics: Striking the right balance between promoting innovation and ensuring ethical use of AI.
- Lack of expertise: Finding individuals with the necessary expertise to develop, implement, and govern AI responsibly.
- Competing priorities: Balancing the need for responsible AI with other organizational priorities like efficiency and cost reduction.

While the headline focuses on rule-breaking, there are positive examples of organizations using generative AI responsibly:
- Developing AI-powered tools to improve accessibility for people with disabilities.
- Using AI to personalize learning experiences and improve educational outcomes.
- Leveraging AI to automate repetitive tasks and free up human time for more creative and strategic work.

"Breaking the rules" with generative AI can refer to various practices, including:
- Using generative AI for tasks it's not intended for: This could involve using AI-generated content to mislead or deceive others, such as creating fake news articles or deepfakes.
- Failing to disclose the use of AI-generated content: This can lead to ethical concerns about transparency and accountability.
- Neglecting to address potential biases in AI models: Generative AI models can inherit biases from the data they are trained on, leading to discriminatory or unfair outputs.
- Ignoring ethical guidelines and regulations: There are growing efforts to establish ethical frameworks and regulations for AI development and use, which some organizations might not be following.

There are several reasons why organizations might be "breaking the rules" with generative AI:
- Lack of awareness: Some organizations might not be fully aware of the potential risks and ethical considerations associated with generative AI.
- Pressure to innovate: There might be a strong push to be at the forefront of AI adoption, leading to shortcuts or overlooking ethical concerns.
- Insufficient resources: Implementing ethical AI practices can require resources and expertise that some organizations might not have readily available.

To address these issues, organizations should:
- Develop clear policies and guidelines for AI use.
- Invest in training and education for employees on AI ethics and responsible use.
- Implement robust risk management practices for AI development and deployment.
- Promote transparency and accountability in AI decision-making.
- Conduct regular audits and assessments of AI systems to identify and address potential issues.
