The Pulse: AI's Shifting Landscape
The rapid evolution of artificial intelligence is reshaping the technology landscape daily. This week we've been analyzing recent AI developments, particularly DeepSeek's entry into the field and its impact on industry dynamics. With new developments emerging constantly, we aim to give a clear picture of the implications and how they may affect stakeholders.
In this week's Pulse we've gathered observations and key takeaways that, while they may not fit the main article's scope, provide valuable context for understanding the broader implications of these developments.
DeepSeek's performance against o1
The open-source model from China is reportedly beating o1: it processes faster, uses fewer tokens, costs less, and runs on more energy-efficient hardware.
It's great to see more competition in this space. OpenAI has been hoarding data without contributing back to the open-source community, which is a fundamental pillar of the tech ecosystem.
An estimated 97% of applications use some form of open-source software, and we need to ensure it remains available in the future. The other great outcome here is that LLMs do not need such a significant carbon and energy footprint; it's possible to be efficient.
MS Copilot oversharing information
Imagine a GenAI assistant sharing confidential information with the wrong employees or contractors; that would be a nightmare. It's not as simple as turning on the tool, paying for a license, and forgetting about it. The article below covers your options from Microsoft.
Data security & sensitivity can easily be taken for granted. GenAI tools typically have far broader access than any individual user, and we need to ensure agents follow security best practices, such as the principle of least privilege, when performing their job functions.
It's also worth reviewing sensitivity periodically, as organizations shift frequently; this review needs to be part of your data governance process.
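To make the least-privilege idea concrete, here is a minimal Python sketch of gating a GenAI assistant's retrieval step by the caller's group membership, so restricted material never reaches the model's context for an unauthorized user. All names, labels, and groups here are hypothetical illustrations, not any vendor's actual API:

```python
# Minimal sketch of least-privilege filtering for a GenAI assistant's
# retrieval step. Document names, sensitivity labels, and groups are
# hypothetical; real deployments would read these from the source
# system's access-control metadata.
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    title: str
    sensitivity: str          # e.g. "public", "internal", "confidential"
    allowed_groups: frozenset  # groups permitted to read this document


def retrieve_for_user(user_groups: frozenset, documents: list) -> list:
    """Return only the documents this caller is entitled to see.

    The assistant's context is built from the filtered list, so
    enforcement happens before generation, not after.
    """
    return [
        doc for doc in documents
        if doc.sensitivity == "public"
        or user_groups & doc.allowed_groups
    ]


docs = [
    Document("Holiday calendar", "public", frozenset()),
    Document("M&A pipeline", "confidential", frozenset({"finance-leads"})),
]

# An engineer only sees the public document; a finance lead sees both.
visible = retrieve_for_user(frozenset({"engineering"}), docs)
```

The key design choice is that the filter runs on the retrieval side rather than trusting the model to withhold information it has already been shown.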
LLM on Kubernetes
If you've decided you want to set up your own LLM, the infrastructure management overhead can be significant. Microsoft introduced the Kubernetes AI Toolchain Operator (KAITO), which automates tasks like provisioning GPU nodes, configuring resources, and setting up inference endpoints.
I've seen a lot of organizations use APIs from OpenAI, Claude, etc. Several issues come with this: at scale they can be expensive, and more importantly, data security becomes a concern since your data is handled by a third party.
You can choose a managed service such as AWS Bedrock; however, you're then forced to deal with vendor lock-in, limited customizability, and data portability issues. KAITO is a great project to familiarize yourself with: as AI use cases beyond LLMs increase, these tools will be necessary for scaling.
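As a rough sketch of what KAITO automates, a Workspace custom resource declares the GPU instance type and a model preset, and the operator handles node provisioning and the inference endpoint. The field names and preset below follow KAITO's published examples, but exact values vary by version, so treat this as illustrative rather than a copy-paste manifest:

```yaml
# Illustrative KAITO Workspace: requests a GPU node pool and deploys
# a preset model behind an inference endpoint. Instance type and
# preset name are assumptions and version-dependent.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: "Standard_NC12s_v3"
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"
```

Once applied to the cluster, the operator takes over the node provisioning and model deployment, which is the heavy lifting described above.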


