7 Questions You Need to Be Asking About AI in Your Organization Today
AI is here to stay and will soon be as normal as Google. If you don't manage it, it will absorb your information, eroding your IP and competitive advantage.
We are now fully in the world of GenAI: every product we touch has an AI component or a plan for one. GenAI has shifted the landscape for organizations in 2025, and whether or not you plan to roll it out, you still need a strategy around it. While it was touted as a way to increase productivity and efficiency, and for some organizations even to replace hiring, it has also caused significant unintended damage to companies, their intellectual property, and even their software code.
As GenAI has become multimodal, we can no longer simply trust what we see, hear, or read anywhere, including on social media, because convincing content can be produced by anyone with malicious intent. From a hacker's perspective, it has become infinitely easier to generate phishing emails and social engineering attacks. For software development, since models are trained on open-source libraries, the vulnerabilities in those libraries will be copied into many private code repositories!
If you decide not to adopt GenAI tooling because of these fears, you still need a strategy, because many of the vendors in your software supply chain will be using such tools. Yes, they are supposed to obtain consent before collecting and using data, but it is always good to have a plan in case they don't.
Concerns About The Current Landscape
Whether you're prepared or not, someone in your business will adopt these tools, undoing all the work you put in to eliminate shadow IT. In my experience, I have seen countless times that departments like marketing go out and procure a tool without the IT team's knowledge, especially when they are denied the use of a particular type of application. The other problem is that if you don't address it actively, you don't know how your team or organization is adopting these tools. Low-hanging fruit such as code development and email is easy to guess, but without responsible AI tooling (which usually costs money), it could also mean proprietary knowledge being shared with a third-party entity without the organization's knowledge.
The truth is, though, that while the knowledge leakage is real, it is mostly unintentional. Case in point: I was chatting with an acquaintance about their ChatGPT usage, and they mentioned:
Them: "I just throw in all the documents and have them reviewed."
Me: "What about proprietary knowledge and business information? You could be sharing sensitive materials without realizing it; in fact, you might be breaking half a dozen compliance standards without knowing."
Them: "I never thought about it."
The kicker: this person works in IT!
The landscape is becoming easily accessible to end users in both their home and work lives, with technologies ranging from Apple Intelligence on phones to OpenAI's agent models and Claude on the desktop, all of which can control computers and mine personal data. Now imagine someone granting those tools access on their own systems under a Bring-Your-Own-Device policy, with access to your corporate data. Circumventing IT guardrails has never been easier.
Even if you decide to block these tools at the firewall, that is not entirely possible, as Microsoft Copilot, Google Gemini, and similar products are integrated into the corporate products we already use.
You Can't Beat It or Bury Your Head in the Sand; You Need a Strategy
While it's daunting to take on AI, especially given the many other responsibilities we all carry, it has to be done collectively across the entire organization, because the organization's survival may depend on it. The following seven questions will help you uncover the guardrails needed to responsibly roll out AI in your organization:
What are your organization's mission-critical workflows and proprietary data? This data must be protected and must not fall into the wrong hands. Step 1 is to ask the business (or, if your scope is a department or team, ask them) to review their mission-critical workflows and determine whether they should be accessible via GenAI. Review the implications if the data is leaked.
Look at your software supply chain: how are your vendors delivering using GenAI, and what is the impact on the tools you use? Do you want those capabilities, and is there a way to turn them off if not? I can't count the number of times I've seen an AI product that was just a wrapper around OpenAI, Claude, etc. There are very real implications for your business should your vendors share your proprietary information.
Who has access to what information? Oversharing is a huge issue in most organizations. There was a scenario where someone using Copilot was able to see the CEO's emails; that should never be the case. Access control is imperative, and determining how to clean up existing permissions and put proper controls in place is key.
Which data sources should be differentiated, and which should each AI tool have access to? There is a common false notion that access to all data produces the best results; LLMs have proven that's not true. The quality of the data is imperative, and too much data leads to noise. Even with a vector database (the common store behind LLM retrieval), formulating a response requires selecting a subset of the indexed information, and if a lot of noise has been indexed, that doesn't bode well. Additionally, not all sources should be treated equally in the organization: a document on someone's OneDrive should not be treated as a trusted source on vacation policy; that should come from the HR SharePoint site. Training AI agents on specific areas of the organization will likely produce better quality, and training users on responsible usage helps as well.
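The "too much data leads to noise" point can be sketched in a few lines. This is a toy illustration, not a real vector database: the documents are hypothetical, and a simple bag-of-words count with cosine similarity stands in for learned embeddings. The mechanism it shows is the same one retrieval systems face, though: once stale or irrelevant copies are indexed, they can crowd authoritative documents out of the top-k results the LLM sees.

```python
# Toy sketch: noisy indexed documents crowding out a relevant one in top-k
# retrieval. Bag-of-words counts + cosine similarity stand in for embeddings.
from collections import Counter
import math

def embed(text):
    # Crude "embedding": word-count vector (illustration only).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, docs, k=2):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Authoritative HR sources (hypothetical content).
clean_index = [
    "HR policy vacation accrual is 1.5 days per month",
    "HR policy remote work requires manager approval",
]
# Same index after sweeping in personal-drive clutter.
noisy_index = clean_index + [
    "draft vacation notes vacation vacation old plan",   # stale personal copy
    "random vacation photos itinerary vacation beach",   # irrelevant noise
]

query = "what is the vacation policy"
print(top_k(query, clean_index))
# With noise indexed, the stale draft outscores the real remote-work policy
# and takes its slot in the top-k context handed to the model.
print(top_k(query, noisy_index))
```

The noisy copies win on raw keyword overlap ("vacation" repeated), which is exactly why curating which sources get indexed matters more than indexing everything.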
Do you have policies and procedures for dealing with shadow IT? The core problem is applications operating outside of IT's knowledge, so no governance, monitoring, compliance, or risk best practices can be applied to them. The good news is that this problem is not unique to AI; AI just makes it louder. Applying best practices to target shadow IT applications will help immensely.
Have you mapped the possible use cases? Talk with the various teams about how they think they might benefit from such a tool, pairing human oversight with AI to produce better results. You can always start with a small footprint and grow as the business develops capabilities internally. Some examples include integrating note-taking into meetings or condensing heavy real-time data processing tasks such as advanced threat intelligence.
Are you adopting SaaS, PaaS, or IaaS? It might surprise you, but AI rollouts follow the same three models used for all cloud services. Satya Nadella has suggested that SaaS will eventually be replaced by agents, but in many cases you are still consuming those agents in a SaaS model. SaaS would be ChatGPT, Microsoft Copilot, Claude, and similar applications available via a web interface or integrated into end-user applications such as Office; you consume the service but have minimal ability to set up your own policies or procedures. PaaS would be AWS Bedrock or Azure OpenAI, where cloud providers give you access to LLMs and you decide which knowledge bases to refine the models against and how they are rolled out; these range from customer-facing chatbots to internal line-of-business application integrations. You get access to the well-known LLMs plus the ability to apply governance guardrails, security, and monitoring best practices without worrying about the underlying infrastructure. Finally, the IaaS approach means deploying the infrastructure yourself, selecting an open model such as Llama, and training it yourself. This option gives the most flexibility but also requires the most investment to build. Let's be clear: you own everything.
Is your organization rolling out AI? Do you have any fears around it? Comment below.


