Think You’re Ready for AI? Here’s What Most Organisations Get Wrong


Published on January 30, 2025

Everyone’s talking about AI transformation. Copilot, ChatGPT, generative AI – they’re all over the headlines, promising to revolutionise how government agencies work. At Factor, we know government agencies across Australia are racing to implement AI.

But here’s an uncomfortable truth… Most organisations aren’t nearly as ready for AI as they think.

From our experience, this is what we are seeing:

  1. Agencies succeeding with AI aren’t necessarily the ones moving fastest – they’re the ones building the right foundations first

  2. The result of those rushing toward AI adoption without the essential foundations in place? Wasted investments, security vulnerabilities, and AI systems that deliver far less value than promised.

Let’s break down the risky assumptions that could derail your AI journey before it begins.

Pitfall #1

I know our data is ready for AI – but is it?

The common assumption is that your data is as good as it needs to be. It’s stored safely in a system, so ‘technically’ AI will figure out which data matters more than other data.

Here’s the scoop:

  • Siloed data across different systems makes it impossible for AI to see the full picture
  • Poor data quality leads to AI making recommendations based on incorrect or outdated information
  • Unstructured, inconsistent data formats lead AI tools to produce unreliable or contradictory recommendations
  • Many agencies discover their “AI-ready” data is fragmented across hundreds of spreadsheets and legacy databases

Truth time: error-prone chatbots have been giving misguided advice to people looking to migrate to Australia – attributed in part to outdated data.1

Start by modernising legacy and core business applications. Moving key systems to platforms like Dynamics 365 and Power Platform naturally consolidates your data, bringing it into the cloud while preparing for AI capabilities like Copilot and Azure OpenAI. It’s about transforming your everyday tools into AI-ready assets.

Pitfall #2

A ‘she’ll be right’ attitude towards security

The common assumption is that you can bolt on security controls after AI has started, or that your existing framework and measures are good enough for now.

Here’s the scoop:

  • Prompt injection attacks can “poison” AI responses, potentially exposing sensitive government data or personal information
  • Unsecured AI systems can inadvertently learn from and expose private citizen information
  • Without proper security controls built with AI in mind, sensitive data could be exposed through AI systems that have broader access than intended
  • Shadow AI tools may already be accessing your data without proper security oversight

Truth time: Samsung discovered its developers were inadvertently exposing proprietary code through AI coding assistants, leading to a company-wide ban on certain AI tools and an urgent security review of all AI implementations.2

Security needs to be proactive, not reactive. This means implementing robust access controls, establishing AI-specific security policies, conducting regular security assessments, and training staff on secure AI practices. Yes, platforms like Power Platform provide strong security foundations – but your security strategy needs to encompass people, processes, and technology working together.

Pitfall #3

Let’s be agile in our approach to policies

Often the assumption is that you can create governance frameworks once you see how you’re using AI, and that current policies will cover most scenarios.

Here’s the scoop:

  • Uncontrolled AI adoption leads to inconsistent use across departments
  • Lack of ethical frameworks means there’s no governance over how AI is used in decision-making, potentially leading to unfair or discriminatory outcomes that go undetected
  • Absence of clear policies leaves agencies vulnerable to AI misuse
  • Without governance, different departments implement conflicting AI approaches

Truth time: Italy’s data protection authority forced ChatGPT to temporarily suspend operations due to a lack of proper data governance and privacy policies, costing millions in lost revenue and requiring a complete policy overhaul before services could resume.3

Take advantage of existing frameworks and guidance. Microsoft’s responsible AI standards4 provide a tested starting point, which can be customised to your agency’s needs. Begin with clear policies around data handling, AI use cases, and ethical guidelines – then evolve these as your AI capabilities grow.

Pitfall #4

People will just adapt to using AI

Often the assumption is that people will just adapt to using AI – they’re probably using it already, and the team may well be technically savvy.

Here’s the scoop:

  • Untrained staff often misuse AI tools, leading to security risks
  • Lack of proper training results in inefficient use and wasted resources
  • Without change management, staff resist adoption or create workarounds
  • Important nuances in AI use are missed without proper guidance

Truth time: lawyers from a New York firm faced sanctions after using ChatGPT for legal research without proper training, submitting fictitious case citations to a court because they didn’t understand how to verify AI-generated information.5

Foster AI literacy through structured learning paths. Start with basic training on platforms your team already knows, like Power Platform, then gradually introduce more advanced AI concepts. Change management isn’t just about training – it’s about building confidence through hands-on experience with trusted tools.

Pitfall #5

But, we must get ahead of the AI curve and just start now!

The common assumption is that you’re already too late to the AI race. But firstly… what race? Agencies aren’t progressing faster than the technology.

Here’s the scoop: 

  • Starting early is important, but it doesn’t justify rushing
  • Effective AI implementation takes time
  • Quick wins without strategy result in disconnected solutions
  • Hasty adoption often means paying twice: once for the quick solution, once for the proper implementation

Truth time: BCG’s 2023 study of 1,400 organisations found that achieving significant value from AI can take multiple years – three years on average – of sustained investment and preparation.6

Start soon, but prepare for the long game. Focus on foundation-building quick wins: begin with app modernisation, have a strategy, then gradually layer in AI capabilities as your foundation strengthens. This approach delivers immediate value while preparing for more advanced AI implementations.

Pitfall #6

We’re good, we have AI covered internally

The biggest assumption of all is that an AI project is just another IT initiative, and that you have the right technical expertise internally to handle it.

Here’s the scoop:

  • AI projects require specialised skills beyond traditional IT
  • Hidden costs emerge without proper planning
  • Infrastructure needs are often underestimated
  • Ongoing maintenance and training needs exceed expectations

Truth time: IBM’s Global AI Adoption Index (2023) reports a lack of AI skills and expertise as one of the top barriers to AI adoption.7

Partner strategically to complement your internal capabilities. Look for partners who understand both the technical landscape and government’s unique challenges. The right partner helps you leverage existing investments while building internal expertise at a sustainable pace.

So tell us, are you making any of these assumptions too?

The path to AI success starts with understanding where you really stand. That’s why we’ve developed a comprehensive AI Readiness & Digital Maturity Assessment specifically for government organisations.

This free assessment will help you:

  • Identify gaps in your AI readiness
  • Understand your true starting point
  • Avoid costly mistakes before they happen

Ready to see where you stand? Take our AI Readiness Assessment now. 

Factor helps government agencies build the right foundations for AI success. Let’s make sure your AI journey starts on solid ground.
