Powering Scalable, Smarter AI
Listen or watch now
Why this episode matters
While large AI models grab headlines, many companies are discovering that smaller, specialized AI models offer better control, efficiency, and business relevance. This episode demystifies agentic AI and explores how organizations can move beyond proof-of-concept AI projects to large-scale implementations.
Laying the Data Foundation for Scalable AI
Without high-quality, enterprise-specific data, AI cannot deliver meaningful results. Learn why a strong data foundation is essential for implementing AI at scale and how organizations can prepare their data for AI-driven transformation.
Moving Beyond PoC Purgatory
Many AI initiatives stall after the proof-of-concept stage due to a lack of infrastructure for large-scale deployment. Learn how to avoid this common pitfall and ensure your AI projects drive long-term value.
The Role of AI Agents in Business Transformation
AI agents can automate decision-making, optimize workflows, and enhance user experiences. We explore how AI agent platforms can be structured for maximum business impact.
Building an AI-Ready Organization
A successful AI initiative depends on more than just technology—it requires an AI-driven culture, the right governance frameworks, and cross-functional collaboration. Discover strategies for aligning AI adoption with business objectives.
Meet our Experts
Maciej is a senior manager, UX/UI expert, researcher, and strong advocate for professionalizing companies’ approaches to enterprise software UX. He believes that when data meets the user, UX is crucial for the quality of insights and reports, serving as the foundation of data-driven decision-making. At C&F, Maciej is dedicated to delivering exceptional UX in every process and solution developed for clients.
Go to expert’s page
As a management and technology consulting professional, Kenny is passionate about helping businesses adapt to the increasingly digital world. With extensive experience leading cross-functional teams, he has successfully transformed agile delivery processes and implemented new technology stacks, including Internet of Things (IoT) platforms and web-based applications. His work has helped clients achieve delivery under various agile frameworks, including the Scaled Agile Framework (SAFe) and Continuous Integration/Continuous Delivery (CI/CD) models. With more than a decade of experience in technology, Kenny has delivered IT and management consulting services to the pharmaceutical manufacturing and supply chain sectors. As a product owner, he led a team of designers and developers in creating custom applications optimized for various device form factors, employing agile methodologies to ensure a user-centered design.
Go to expert’s page
Unlocking the Power of Agentic AI: Episode Transcription
Introduction to Value-Driven AI Implementation
Maciej Kłodaś (MK): Hello everyone, my name is Maciej, I’m the leader of the Analytic Experience competency group at C&F, and this is C&F Talks, a place where experts discuss their challenges and ideas from the perspective of an IT partner. My guest today is Kenny Gibbons. Hello, Kenny.
Kenneth Gibbons (KG): Hey Maciej, nice to see you on this side of the world.
MK: So Kenny, can you tell us a bit about yourself, your responsibilities and your focus at C&F?
KG: Yeah, so Kenny Gibbons here, I’m a senior director, I lead our client engagement practice at C&F. My focus is to partner with our clients to help bridge the gap between business value, business cases, strategy, and roadmaps on the one hand, and technology and innovation on the other. So on a day-to-day basis, a lot of times I’m helping clients understand the landscape of new technology and how to apply it to unlock business value.
And then after we create that strategy, I help our engineering teams deliver that. So I support the end-to-end software delivery cycle for our key clients. Nice to talk to you Maciej, I’m happy to be a part of this.
Gen AI, AI, ML, LLM, all the acronyms, it’s a super interesting topic in this space, constantly evolving and constantly being talked about. So what better time to have this conversation?
MK: So Kenny, you joined C&F around two years ago, and I have to say you are covering one of the most interesting, but also one of the most complex areas, which is supply chain and manufacturing. And I was very eager to talk to you about this.
We had one recording with Max about the value of building a business roadmap. Right now, we would like to talk about AI implementations in your area. So can you tell us about the background of AI? How did it start, and how is it going right now?
KG: Yeah, and I think one of the really fun parts about this is I work a lot in the manufacturing supply chain space. I also help other clients in other areas. So I will probably pull in some manufacturing and supply chain examples, but my goal here is to keep this broad and open this up as a conversation that can be applicable for any industry. And as you mentioned, AI is a super hot topic and at times even a buzzword in the industry.
I think every conversation that I’m having in the industry comes back to AI in one way or another. So I’ll take you through kind of that journey, our thinking of where AI is now and where it’s going. I’ll quickly go into some background, introduce a couple topics.
I’m going to try not to make this too technical, keep this more business-focused, and try not to dive too deep. So I’m not going to go back 70 years, to where people like to say machine learning and statistics have been around for the last 60 years; it’s just become hot over the last 10.
I want to maybe not go the whole way back to where, let’s say, neural networks were starting, but really to start with one of the biggest pushes that we’ve seen in the past couple of years: when ChatGPT broke through, really shocked everyone at how far LLMs had come, and sent shockwaves throughout the industry. Verifying my sources, ChatGPT actually launched on November 30th, 2022, so close to 2023. But I think that was a wake-up call for a lot of the industry about just how far large language models had gone.
And at the time, no one thought that was even possible. And that opened up all this conversation. If we fast forward from 2022 to now, a lot of the big tech players all have their own LLM models, all have their own approaches.
They’re trying to carve out their unique selling points within the space. I think among a couple of other unique events recently, the whole DeepSeek moment, showing how quickly and how much less cost-intensive LLM inferencing can be, also sent shockwaves.
So I think what you’re going to see is these continued breakthroughs in the industry as more and more investment and more and more tech players come into the fray here. And at the end of the day, what we’re hearing from our clients is: where do we start, and how do we take advantage of all the technology that is out there? And like you mentioned, Maciej, there’s so much out there. Where do you begin? How do you begin?
The Current State and Most Important Trends in AI Adoption
MK: How do you see the industry right now? Because with most of the companies, most of the clients we are working with, we can see that they have budgets secured for AI implementations, but they don’t really know how to utilize them. So we are helping them build a roadmap and build use cases around that.
KG: Yeah, and what we’re primarily seeing right now is I would say a lot of experimentation. This tech is moving so fast that a lot of companies, especially non-digitally native companies, ones that we work with and help them become more digitally native, are struggling to keep up with all of the innovation and all the changes that are coming as part of it.
And I think as this tech evolves, it’s going to continue to become more and more challenging to stay on top of it. I’m going to talk through some of the trends that at least I’m seeing, or that I think we will see in the coming months to even years in the industry. Before I jump into that, like I said at the beginning, we’re going to focus on AI as a platform, how to think about more than just one AI use case.
There’s a lot of different areas to focus on in AI that could be like the ethics side, how do you train, how do you do all of that in the AI space. We’re not going to cover that here. Another piece is truly building out the LLM models.
So the large language models that you interface with in natural language, as well as the ML models, which are doing more advanced statistical analysis, prediction, and those kinds of insights. I’m not going to talk about the best use of those models. What we’re going to focus on when we talk about trends is knowing that these models are going to continue to evolve and continue to become more and more advanced.
How do you position your organization to really take advantage of this growing tech trend? So as I start talking through these future trends, all of that is in the back of my mind, and I want to make sure it’s part of the talking points as we move forward.
So I think one of the big trends that we’ll see is that models will become smaller and fine-tuned to specific tasks. What does that mean? So take something like a ChatGPT. It’s a large language model. It’s technically closed source in most of the implementations. It’s huge. It needs a lot of data. It has a lot of inferencing that goes on behind the scenes, a lot of processing.
What we’re starting to see is the evolution of these models away from being so generic in nature toward being fine-tuned to be really good at smaller, more specific tasks. And I think this trend will continue over the next year to two years as cost pressures start to come in, and as the big LLMs prove good at general tasks but not great at specific ones. And I think that’s one of the big trends that we’ll start to see.
And that sets the stage for one of the next evolutions of this: when we talk about AI platforms, they’ll start to become more and more autonomous. So right now, in talking to LLMs like ChatGPT, you’re chatting back and forth and they’re giving you a response back.
What we’ll see in the coming, I think, even months, is that a lot of these AI platforms won’t need to take user input to take action. They’ll be able to start recommending or even taking those actions by themselves based off of the knowledge and context they’re given. And this will continue to evolve, especially as these models get, as I said before, more and more fine-tuned for specific tasks.
I think with all this, one of the other areas that you’re already starting to see is a proliferation of these AI models. So as these get smaller, you start to have specific generative AI models or ML models for specific tasks. You’re going to see a lot of different styles of models come into play.
It’ll come down to platforms that really allow for interoperability, plugging and playing different AI models for different use cases. And I think one of the last things, outside of cost and privacy and the ethical area, is that you’ll start to see more and more citizen development in the space. We’ve seen it in low-code, no-code style platforms, where it’s a lot easier today: if I wanted to go into Microsoft and build a simple Power App, I could do that.
There are a lot of intuitive, very quick, easy-to-use tools, and I can build an application without having previous technical skills. You’re going to start to see that in the AI space. A lot of citizen developers will be able to spin up their own AI agents to help them in their day-to-day tasks, fine-tune them, and start to deploy them into the organizational platform.
And I think that’s going to be a very common trend. And actually, we’ve already seen some players, some platform players in this space, start to provide tools to do that. Some of the data platforms like Snowflake or Databricks; Snowflake, for example, has Cortex, an LLM service that runs on the data you already have there and that you can take advantage of. And you’re actually starting to see them offer additional services to deploy and run AI agents and these models in their own ecosystem. And this will just continue to be a trend for most of the software players that are in the market today.
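To make that concrete, here is a minimal, hypothetical sketch of what calling an LLM inside a data platform can look like, using Snowflake Cortex as the example mentioned above. The connection details, model name, and prompt are placeholders, and the exact functions and models available depend on your Snowflake account and region.

```python
# Hypothetical sketch: calling an LLM that runs next to your data via Snowflake Cortex.
# Assumes the snowflake-connector-python package, a configured account, and that the
# SNOWFLAKE.CORTEX.COMPLETE function and the chosen model are enabled for that account.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder credentials
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="sales_db",
    schema="public",
)

prompt = "Summarize the top three revenue trends in the 2024 orders table."
cur = conn.cursor()
# The model name ('mistral-large') is illustrative; available models vary by account/region.
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE(%s, %s)",
    ("mistral-large", prompt),
)
print(cur.fetchone()[0])  # the model's natural-language answer
conn.close()
```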
MK: What I’m hearing is Skynet, this is the beginning of the Skynet right now.
KG: That’s what everyone thinks, right? I think that will come when we get closer to AGI.
MK: Well, one of the hottest news items right now is that AI can clone itself. I heard it yesterday or even today. So this is scary. Or even that an AI prevented itself from being deleted. So this is also something we need to keep in mind: that autonomous models, as you said, will be proactively recommending actions or taking actions without any user intervention, right?
KG: Yes, and that’s a good setup for some other talking points that we’ll make later. Maciej is doing a great job as podcast host there. Part of the platform approach is to make sure that you put adequate guardrails in place so that those kinds of things don’t go off the rails.
And I’ll talk through these challenges a little later, but I think that is one of them. There is concern around that. There are ethical concerns, to make sure the data and the way the AI operates are handled in an ethical manner, and to know what data it’s trained on. And then also there is always the quality concern.
One of the biggest challenges you see in a lot of AI models, and even broader implementations, is hallucinations. Hallucinations are when you ask your AI model something and it starts answering with gibberish, or the answer doesn’t make sense. And then you ask it to repeat. And I’ve even seen cases, and I’m sure we all have, where it doubles down and actually starts gaslighting you into thinking you’re wrong.
So those are all the things as AI advances that really need to be part of the organizational and the platform approach. There’s some architectures that are out there that help to solve that. We’ll definitely see those continue to evolve, especially in 2025.
Main Challenges for Successful AI Initiatives
MK: All right. So the challenges: what do you see in the industry right now? What are we seeing?
KG: The challenges we see are very common across all the clients that I’m hearing from. I think the first one starts with value translation. A lot of times, where we are in the AI journey, the value measures are generally about productivity.
So, I’m communicating with a copilot that is helping me with PowerPoint and has saved me three hours a week, right? That’s what a productivity saving is. I think this is where we are now, but there’s been a lot of advancement. I’m seeing a lot of PoCs done in the AI space, but now the question is, what’s next? How do I get more value? How do I translate value into this? And productivity will only get you so far when you’re talking about business cases, especially to really move the needle for large-scale organizations.
So one of the challenges that I’m observing in the industry is: this is a very novel and new space. Business processes are going to change because of these technologies. And a lot of times it is hard to make business cases on the savings, the direct revenue impact, that would be made because of AI.
I’m seeing a lot of our clients struggle in really putting together strong value cases as part of this. And with that, I mentioned the business process changes. A lot of times when I’m seeing an implementation, it’s focusing on the current business process and not thinking about how that process changes because of AI.
So we’re used to developing solutions enabling business processes that we already know about. AI is truly changing, not only our daily lives, but how we conduct our work and our business. And I think a lot of times clients are having a hard time stepping out of the current comfort zone to really reimagine how AI could affect and impact business processes outside of just productivity improvements.
MK: This is just the mental model of the user, right? We’ve done that in some fashion using certain tools. So we are trying to shift the paradigm: using AI to reshape the process from scratch, to do it differently.
KG: Yeah, that’s 100% right. I think the next leap, and Maciej, you’re a UI/UX lead, so you know design thinking, the next leap is that we’re shifting the paradigm. So what’s next? When we start talking about autonomy, we start talking about how, instead of me being involved in that process, it can become more predictive, more autonomous in that use case.
And I think that’s where the challenge is. We don’t live in a very predictive world. We live in more of a reactive world. And that’s where that big challenge is starting to show.
AI Agents—The Next Step in Generating Value with AI
MK: Okay, where does it take us now? I know that there is a shift to small language models because they are focusing on a narrow area of expertise. The next level is building certain agents to help us in everyday life, focusing on a very narrow set of data, right?
KG: Yes, and if in 2024 AI was one of the biggest buzzwords, I think AI and digital twins probably were the two biggest. If I was a betting man, I think agentic AI, or AI agent platforms, is going to be the biggest buzzword of 2025, if it’s not already the case.
MK: Well, actually in 2024, the biggest buzzword was Gen AI.
KG: That is true. That is true. In my client conversations, obviously the AI piece, but also digital twins came into play a lot. I don’t know if you’ve heard it a lot, Maciej, in your conversations, but we’ll cover that another time because digital twins can mean a lot of things. And it really became a buzzword, but that’s a topic for another day.
MK: So what about those agentic AIs?
KG: Yeah, so when we talk about agentic AI, I think fundamentally it’s really important to understand the core word in there, which is ‘agent’. It’s effectively a standalone AI process. Usually that will be an LLM or an ML model that is gathering context from data, whether that be simple or more robust data sources, and then taking an action. It can be seen as either actions or functions, but it’s a building block for taking context, reading it into that LLM or ML model, and then taking action based off of that.
And where this starts to get expanded is if you start stringing these things along together. So you have one agent that is really good at doing one task and it gets the data, it takes the context of the situation, takes action, it passes that action to another agent that is really good at its specific task, and then passes that through. So this is where you’ll start to get more robust autonomy and more of that action that we were talking about.
And that’s really, I think, where the next level of organizational and enterprise AI is going to go, building those reusable agent building blocks.
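To illustrate the ‘agent as a building block’ idea in the simplest possible terms, here is a short, hypothetical Python sketch. The Agent class, the gather-context/act split, and the pipeline are a simplification for illustration, not any specific framework’s API.

```python
# Minimal sketch of the "agent as a building block" idea: each agent gathers context,
# calls some model (stubbed here), and returns an action whose output feeds the next agent.
# All names and the stubbed lambdas are illustrative, not a specific framework's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    gather_context: Callable[[str], str]   # pulls data relevant to the task
    act: Callable[[str, str], str]         # model call + action, returns its result

    def run(self, task: str) -> str:
        context = self.gather_context(task)
        return self.act(task, context)

def run_pipeline(agents: list[Agent], task: str) -> str:
    """String agents together: each agent's output becomes the next agent's task."""
    result = task
    for agent in agents:
        result = agent.run(result)
    return result

# Two toy agents standing in for fine-tuned, task-specific models.
forecaster = Agent(
    name="demand_forecaster",
    gather_context=lambda task: "last 12 months of order volumes",
    act=lambda task, ctx: f"forecast based on {ctx}",
)
planner = Agent(
    name="replenishment_planner",
    gather_context=lambda task: "current warehouse stock levels",
    act=lambda task, ctx: f"replenishment plan using '{task}' and {ctx}",
)

print(run_pipeline([forecaster, planner], "plan next month's inventory"))
```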
MK: Can you give us some example so we can understand how these elements or agents can be linked together?
KG: Yeah, I’ll give you a super simple one, and then you can take that and extend it in all kinds of ways. And I think this is where the design thinking of what is truly the art of possible comes into play in a lot of these conversations.
Let’s take two different types of models and walk through it. Large language models, they’re really good at taking in natural language and then communicating back in that natural language. So I’m going to ask you to maybe do some imagination here as I explain this and walk through it.
So imagine you’re looking at a KPI dashboard and you want to ask some questions about those data sources. You’re going to use an LLM to do that behind the scenes, the user may never know. So I’m typing in my ChatGPT, I’m asking it, hey, can you tell me about this data set in 2025? It’s going to go off, process it, it’s going to come back and give you that information.
Now, what if you could send that message and say, hey, can you tell me about this? And also can you create and provide me additional insights based off of social media trends as part of this? So that agent will understand what you just asked it for, but it’s not good at understanding and looking through social media data. It’s just not trained for that. It will understand that ask, it’ll take that context, it’ll take an action to send that information to another agent that is now tasked to really scrape through and understand social media trends.
That agent will go into that social interaction data, bring it back out, send its information to the LLM, and that LLM will now bring it back to you in the format and language that you would expect it to. And in this case, because I’m a native English speaker, that would be in English. And that’s just one simple use case.
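A stripped-down sketch of that example might look like the following, where simple keyword routing stands in for what a production LLM would do through tool or function calling; every function here is a stub invented for illustration.

```python
# Simplified sketch of the example above: a "front door" agent answers KPI questions
# itself, but hands the social-media part of a request to a specialist agent and then
# merges the results into one reply. Keyword-based routing stands in for what a real
# LLM would do via tool/function calling; all functions here are stubs.

def kpi_agent(question: str) -> str:
    # Stub: in reality this would query the dashboard's dataset via an LLM + SQL layer.
    return "Revenue in 2025 is up 8% versus plan."

def social_trends_agent(question: str) -> str:
    # Stub: a separate model/agent trained to scrape and summarize social media data.
    return "Social mentions of the product line grew 30% quarter over quarter."

def assistant(question: str) -> str:
    parts = [kpi_agent(question)]
    if "social media" in question.lower():
        # Delegate the part this agent is not good at to the specialist agent.
        parts.append(social_trends_agent(question))
    # A real implementation would ask the LLM to blend these into one natural-language answer.
    return " ".join(parts)

print(assistant(
    "Tell me about this data set in 2025, and add insights based on social media trends."
))
```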
MK: It’s about building a mesh of agents. So you could have this seamless experience of a very complex solution.
KG: Yes, and from my experience, that’s where people are just starting to experiment: how do you start building this agent mesh effectively? And from an enterprise standpoint, how do you build the reusable components to do that? So we spoke a little bit about hallucinations. Imagine now you’re just chatting with that LLM, but now it’s also taking actions and telling other agents to do things.
If you don’t have that LLM tasked and trained very well, or don’t have hallucination guardrails in your architecture, you might be sending an army of agents to go do all kinds of other things that you probably don’t want to do. So instead of arguing with you, which is today’s implementation, that LLM is going rogue and also creating all kinds of other actions that are creating downstream impact. That’s one of the areas when we talk about this platform is having the guardrails in place to be able to stop those things from happening as they occur.
The LLM still might hallucinate. You can’t always stop that in the actual model itself, but you can basically know that no more actions should be needed because this isn’t correct. And there’s a lot of different ways to do that in the agent mesh, but that’s one example as we think about more of a platform approach.
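One way to picture such a guardrail is the rough sketch below, with an illustrative allowlist and validation check standing in for any particular product’s guardrail feature; everything here is a placeholder.

```python
# Minimal sketch of a guardrail in an agent mesh: before any downstream agent is called,
# a proposed action must pass a validation gate; if it fails, the chain stops instead of
# "sending an army of agents" off to do unintended work. The allowlist and checks are
# illustrative placeholders only.

ALLOWED_ACTIONS = {"summarize_kpis", "fetch_social_trends"}

def validate(action: str, payload: str) -> bool:
    # Block unknown actions and obviously malformed payloads (a stand-in for richer
    # checks such as schema validation, grounding checks, or confidence thresholds).
    return action in ALLOWED_ACTIONS and bool(payload.strip())

def dispatch(action: str, payload: str) -> str:
    if not validate(action, payload):
        # Halt the mesh: no further agents are triggered off a suspect output.
        return f"BLOCKED: '{action}' did not pass the guardrail; no downstream calls made."
    return f"Executed {action} with payload: {payload}"

# A well-formed request passes; a hallucinated/unknown action is stopped at the gate.
print(dispatch("fetch_social_trends", "product line mentions, last quarter"))
print(dispatch("delete_all_records", "everything"))
```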
MK: How do you check whether the answer you got is correct?
KG: Well, what I’m starting to see, and what we’ve already done actually, is using other agents to check that. So you use another model that can check the answer, and then you build a feedback loop into that. It’s actually part of the training side, but I’ve seen that in the enterprise approach too. And that’s how we’ve started to do quality checks in some of our, I would say for now, simple implementations, but we’re thinking of how to take advantage of this agent mesh framework.
I think that’s a good term. I actually hadn’t used ‘agent mesh framework’ before, but I like it, Maciej.
MK: Thank you.
KG: It’s a good concept.
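The ‘agent checking another agent’ pattern can be sketched roughly like this; both model calls are stubs standing in for two separate LLM/ML endpoints, and the retry logic is invented for illustration only.

```python
# Sketch of the "use another agent to check the answer" pattern: a generator model drafts
# an answer, a separate verifier model scores it against the source context, and the loop
# retries (feeding back the verifier's critique) until the check passes or attempts run out.

def generator(question: str, feedback: str = "") -> str:
    # Stub for the answering model; verifier feedback can be folded into the prompt.
    return f"Draft answer to '{question}'" + (f" (revised after: {feedback})" if feedback else "")

def verifier(question: str, answer: str, context: str) -> tuple[bool, str]:
    # Stub for the checking model: does the answer actually use the supplied context?
    grounded = context.split()[0].lower() in answer.lower() or "revised" in answer
    return grounded, "" if grounded else "Answer does not reference the source data."

def answer_with_check(question: str, context: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        draft = generator(question, feedback)
        ok, feedback = verifier(question, draft, context)
        if ok:
            return draft
    return "Could not produce a verified answer; escalate to a human reviewer."

print(answer_with_check("What drove the Q3 dip?", "Q3 shipments were delayed by a supplier outage."))
```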
Where to Start in AI Projects
MK: Okay, so tell me, usually when a client approaches you and tells you that they are looking into an AI implementation, what are your recommendations? And another question: how do you start off a project implementing AI or agentic AI?
KG: Yeah, so in typical consulting answer, I’ll say it depends.
MK: It depends? Okay, I hear that a lot.
KG: More seriously, I think it really depends on where they are on the journey, right? So one of the first questions, and I know we didn’t cover it here, is: do you have any AI governance processes or guidelines in place? Do you have an enterprise-endorsed SOP so that we can start?
When we were talking about all those other platforms that provide AI capabilities, usually there need to be controls over those platforms to make sure that, one, the data is safe and processed correctly, and two, that the data is not getting sent anywhere it shouldn’t. That allows us to start those conversations. If the clients aren’t there, generally speaking, that’s the starting point. Let’s start with getting the strategy in place so that you can start the implementation.
If there are already AI products currently available, then the next step, and even sometimes the challenge that I’ve seen, is understanding which use cases you want to go after. One of the biggest challenges that I’ve seen is creating use cases beyond just typical chatbots. I’d say 90% of the use cases we’re currently at are chatbot-style: let me talk to a chatbot and get some answers back.
That’s where, in general, we recommend starting, but we also recommend thinking bigger. So we help our clients think about how to not only enable a simple use case that could probably bring minor business value, but think bigger, so that you can build a platform and start reusing these components as the platform evolves, and making sure that the architecture is in place.
Usually we will start in the PoC space that I was just talking about; it’s probably going to be a chatbot. That’s where everyone starts, and honestly, that’s where a lot of organizations start. And even inside organizations, I’ve seen a proliferation of disconnected gen AI use cases, because everyone is off doing their own thing.
But having that platform is a good starting point because as different areas and organizations go through the experimentation process, if you put up the right architecture strategy to support that, as some of those bring more and more value, you start to create those reusable AI components. So that’s where we start. Like I said, usually, from what I’ve seen, a lot of this is in PoC.
From POC to Production: Why Scaling AI Often Fails
A lot of our clients are struggling, which we help with, to get these use cases into production. There are some areas in production, but as is typical with new tech, things get stuck in, let’s call it, ‘PoC purgatory’. When you try to scale up these solutions, when you try to do more with them, if the architecture isn’t in place to support that scale, which a lot of times it’s not, you see cracks in the foundation, and the cracks get bigger the more you try to scale.
That’s where I would say a lot of our clients are, unless they’re using an out-of-the-box tool, like a copilot. I’m talking a little bit more on the more advanced end, not just OOTB.
MK: You’ve mentioned democratization: people who are building their own pieces of AI using low code or whatever, and then there’s governance. We are working mainly with clients in highly regulated industries, so how do we deal with that?
KG: When I was talking about getting stuck on the way to production, compliance is one of the biggest pieces. I think there are a couple of answers to that, as always the ‘it depends’ answer, but I can give a couple of examples.
So I was actually at a conference late last year. It was a biotech manufacturing conference, but one of the topics was digital manufacturing. And one of the interesting points was one of the manufacturers was actually using Gen-AI to speed up their quality documents. So when they’re doing a batch release, for example, instead of needing to create those documents from scratch, they were using Gen-AI to create 80% of those documents.
And then a human went through the last 20% to make sure everything is accurate. One of the interesting things that came from that was they mentioned that they brought the FDA along as part of this journey. So I think that the regulatory bodies are starting to understand that they need to look at things differently. I think they’re probably looking for those use cases and seeing how they can start to apply that.
Right now, you’re seeing it mainly used outside the compliance areas; in manufacturing that’s GMP, in finance it’s SOX, and a lot of times you’re seeing it used in these non-compliance systems. So you’re not making super critical decisions or having access to those specific datasets, or you’re seeing shifts toward partnering with regulatory bodies to help support and get approval for the way that it was done.
In all of this, there’s still a human interaction that takes place, and that human interaction is still usually performing some level of quality check, whether it be 10% or 50%, I think varies depending on the implementation. But no matter what, you’re not going to prove safety and efficacy without having human oversight in that process.
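As a rough illustration of that human-in-the-loop pattern, here is a hypothetical sketch in which a generative model pre-fills a batch-release draft and nothing is released until a human reviewer approves every section; the section names, the batch ID, and the sign-off flow are invented for the example.

```python
# Simplified sketch of the human-in-the-loop pattern described above: a generative model
# drafts the bulk of a quality document, every section is marked for human review, and
# nothing is released until a reviewer signs off. The draft_section stub stands in for a
# real Gen-AI call; section names and the sign-off flow are illustrative only.
from dataclasses import dataclass

@dataclass
class DocumentSection:
    title: str
    draft: str
    reviewed: bool = False

def draft_section(title: str, batch_id: str) -> str:
    # Stub for the Gen-AI call that pre-fills most of the content from batch records.
    return f"Auto-generated content for '{title}' of batch {batch_id}."

def build_batch_release_draft(batch_id: str, section_titles: list[str]) -> list[DocumentSection]:
    return [DocumentSection(t, draft_section(t, batch_id)) for t in section_titles]

def release(document: list[DocumentSection]) -> str:
    if all(s.reviewed for s in document):
        return "Released: all sections reviewed and approved by a human."
    pending = [s.title for s in document if not s.reviewed]
    return f"Blocked: human review still pending for {pending}"

doc = build_batch_release_draft("B-2042", ["Deviations", "Test results", "Final disposition"])
print(release(doc))            # blocked until the reviewer signs off
for section in doc:
    section.reviewed = True    # the human checks and approves each section
print(release(doc))
```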
Data—A Prerequisite for AI Implementation at Scale
MK: Okay. So Kenny, imagine that I’m the client now. I have budget, let’s say limitless. I don’t have any foundations, but I know that I need to implement AI to optimize the processes in my company. So how can you help me?
KG: Well, I would tell you, and what you said about foundations brings up a really important point that I actually haven’t made in this podcast yet. Before you do anything in AI, and let’s say you want to, you need to make sure that you invest in creating that data foundation as part of your core enterprise strategy.
At the end of the day, AI needs data. As it matures in a couple of years, maybe it won’t need as much, as models are already trained, but you still need enterprise- and organization-specific data. A lot of times clients try to skip past that and then ask: why can’t AI start? Why can’t AI do the things I want it to? Well, the biggest problem is you don’t have a robust data lake or data products to support that AI.
So you’re not going to get much out of it, because it doesn’t have a lot of data to make context out of. If you have unlimited money, focus on getting the foundational solutions in place that provide the data for your processes. In manufacturing, those are things like ERPs or MESs, or starting to get data out of the OT layer, and then providing that data into your data cloud. In the commercial space, those are your CRMs and some of your marketing tools.
You can’t do any of the AI things that you want to do without having that data foundation. The next piece, let’s say that your data foundation is in place: I would recommend creating an AI strategy. It’s not only governance and all that, but also: what is your vision of where AI will be in the next two to three years? And how do you best position your organization to be effective in that environment? We are talking about a rapidly evolving environment. The AI use cases that we’re building now, honestly, will probably look completely different in another year.
Some of it, even if it’s in production, may be completely obsolete in a year based off of where those models are going. So how do you create a platform that is interoperable and allows you to be agile as new technology advancements come into play here?
One of the things that we haven’t talked about is making sure that you can interoperate between different Gen AI models or ML models for those specific tasks, but also as more models come into the market and are better suited for those specific tasks. So a lot of times what I’m seeing is, we’re experimenting with certain out-of-the-box Gen AI models that are out there, but again, ChatGPT, I think it’s on 4.0 now, so that’s already released, and that’s just Microsoft and OpenAI.
This market is going to continue to expand and continue to see advancements. That interoperability in the agent platform, sending information between agents and models and picking the ones that are best tuned for those tasks, is going to be critical as organizations build out these platforms. So that’s the second piece.
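That interoperability point can be sketched as a thin task-to-model registry, so that swapping in a newer or cheaper model is a configuration change rather than a rewrite; the backend classes below are stubs standing in for real vendor SDK calls, and all names are invented for illustration.

```python
# Sketch of the interoperability point: a registry maps each task to whichever model
# backend is currently best suited for it. Swapping in a newer model means changing one
# registry entry, not rewriting the agents. The backends here are stubs, not vendor SDKs.
from typing import Protocol

class ModelBackend(Protocol):
    def run(self, prompt: str) -> str: ...

class GeneralLLM:
    def run(self, prompt: str) -> str:
        return f"[general LLM] {prompt}"

class FineTunedForecaster:
    def run(self, prompt: str) -> str:
        return f"[fine-tuned forecasting model] {prompt}"

# Task-to-model routing lives in configuration, not in the agents themselves.
MODEL_REGISTRY: dict[str, ModelBackend] = {
    "chat": GeneralLLM(),
    "demand_forecast": FineTunedForecaster(),
}

def run_task(task: str, prompt: str) -> str:
    return MODEL_REGISTRY[task].run(prompt)

print(run_task("chat", "Summarize yesterday's production issues."))
print(run_task("demand_forecast", "Forecast SKU 123 demand for March."))

# If a better model ships tomorrow, only the registry entry changes:
# MODEL_REGISTRY["demand_forecast"] = NewVendorForecaster()
```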
And then the third piece is to just start. I think regardless of all the data not being there, it’s about picking the right use cases and going after the ones that bring value. There’s value in starting and learning lessons along the way, so that as an organization, you’re better equipped to have conversations and utilize these advancements as they come. And that would be my third point, Maciej: pick the right use case.
We can work with clients to really brainstorm, go into design thinking and the art of the possible, and then just start to go down that path. I would say the wall that’s approaching is how to get out of this PoC area that I’m now seeing organizations get stuck in, expanding the use cases, expanding the value. And that’s where step two and then step one really come back to catch up with you if you don’t have the data and you don’t have the strategy in place to allow for broader expansion and, ultimately, business adoption of the different use cases that you’re deploying.
How AI Is Going to Impact the Future of Business
MK: You said that clients need to be ready and flexible in terms of their foundations and use cases, to be prepared to either remove an obsolete model or jump into some new technology or new AI. Just today, I tested the new Claude, and I have to say this is the best solution for prototyping and dynamically building UIs. I haven’t seen anything like that before.
I’ve used and tested many different tools, and I have to say Claude right now is the best one. I’m pretty scared, frankly speaking, because I guess our competency will be obsolete in the near future. Hopefully not, but it seems like it. From your perspective, what is the future? What will the future give us in terms of new AI tools?
KG: Honestly, I think sometimes it’s hard to imagine what the landscape is going to be in five years. I think about it a lot, Maciej.
So I go back and forth between two areas. One is this thought of really embracing autonomy, maybe five, ten years out; where does AGI come into place as part of this? And that’s the typical ‘AI is going to rule the world eventually’ take. I don’t know if it’s going to be within five years, but probably at some point.
But then I also come back to reality, and being in the industry, I know all the challenges that are out there for data and for reaching that level of autonomy. So I’m not sure. I think you’ll see some really, really advanced use cases and advanced enterprises coming to the fore in the next five to 10 years.
You’re honestly starting to see it in some of the industries that are easier to automate. I think you’re starting to see those trends. In the more complex business processes, like biotech manufacturing, I think those are going to take longer.
A lot of times the industry right now is just trying to get the data out of 30, 40 year old systems to then use it. So I don’t know. I think it’s going to be a broad spectrum.
As for the leaders in the space, like I said, you’ll see the industries that benefit more easily from AI taking the lead over the next five to 10 years. I think it’s going to be a long journey to get there globally. Processing power is going to be a huge thing.
It’s what I always watch. I’m a big watcher of quantum computing, which is just another topic for another time. I think we may hit a processing cliff at some point, and we’ll need another breakthrough.
And I don’t know, if you asked someone 10 years ago if AI was going to be as advanced as it is, they probably wouldn’t have guessed it. I think it’s a spectrum as is most things in life. Whether I’m right or wrong, I think only time will be able to really tell.
MK: Exciting times ahead of us. All right, cool, Kenny. Thank you very much. Thanks for the talk. Thanks for presenting your point of view.
And thanks for being with us and taking the time to discuss the topic. Hopefully the remote experience was okay. And I hope to see you soon in one of the next episodes to discuss quantum computing or different use cases you are focusing on right now. So thanks again, and see you soon in the next episode of C&F Talks.
KG: And thank you, Maciej. This is our first truly global, I would say, C&F Talk. And it’s almost 4:30 here where I am. So it’s probably almost 10:30 where you are.
MK: Exactly. It’s the modern day world we’re living in. But we made it. Thank you so much and see you soon.
KG: Thanks, Maciej.