One in five Canadians using generative AI at work or school

04 July 2023 2 min. read

One in five Canadians (20%) are using generative artificial intelligence (AI) tools to help them at work or school, according to KPMG Canada’s Generative AI Adoption Index. The consulting firm surveyed 5,140 Canadians in May 2023 for the inaugural edition of the index, which tracks how, when, and why Canadians are using generative AI tools (such as ChatGPT).

The first index score was 11.9, indicating overall penetration remains relatively low. A score of 100 indicates mass adoption.

Of the 20% of Canadians who use generative AI, 18% use it daily, 24% a few times per week, and 26% a few times monthly.

The most commonly cited uses are generating ideas, research, writing essays, and creating presentations – with respondents saying their AI usage has helped increase their productivity and grades.


Over half of users said generative AI saves them up to five hours per week, while two-thirds said using AI frees up time for them to take on other work.

The adoption of generative AI tools exposes enterprises to new risks, however. Nearly one-quarter (23%) of professionals using AI said they are entering information about their employer into prompts, while some are entering private financial data (10%) or other proprietary data, such as HR or supply chain information (15%).

Many are also failing to consistently check the quality of AI-generated content – even though tools like ChatGPT often produce misleading or inaccurate output that can appear legitimate at a cursory glance. Only about half (49%) of users said they check the content every time, while 46% said they check it only sometimes.

Many users are also claiming AI content as their own original work, with nearly two-thirds doing so all or part of the time.


To manage the enterprise risk and ethical issues arising from emerging AI adoption, KPMG recommends that businesses institute robust frameworks, controls, processes, and tools to ensure AI systems are used in a trustworthy manner and sensitive data is protected.

Aside from relevant training, companies may also opt to create proprietary generative AI models with safeguarded access to data to prevent leaks from usage of public tools.

“Responsible AI is the foundation of every successful generative AI strategy,” said Ven Adamov, co-leader of KPMG Canada’s responsible AI framework and a partner in KPMG’s generative AI practice.

“For organizations, that strategy should also include: assessing and implementing the right technology; ensuring data is relevant, recent and accurate; and training and empowering employees to use AI responsibly. Organizations that make these investments will gain a real competitive advantage and be able to monetize their data.”