Episode 12

IoT Solutions and Product Tracking in the Life Sciences Industry

Unlocking the power of IoT in pharma: from traceability to transformation
In this episode of C&F Talks, we explore how IoT and product tracking solutions are driving operational efficiency in the life sciences sector. From building scalable solutions to navigating implementation challenges, we discuss real-world insights and lessons from our successful IoT projects in life sciences manufacturing.


Introduction to IoT Solutions

Maciej Kłodaś (MK): Hello everyone, my name is Maciej. I’m the leader of Analytics Experience at C&F and this is C&F Talks, a place where experts discuss their challenges and ideas from the perspective of an IT partner. My guest today is Piotr Guzik. Hello, Piotr. 

Piotr Guzik (PG): Hello, thanks for the invitation. Thanks for having me. It’s a pleasure. 

MK: Piotr is our enterprise big data architect. Piotr, can you tell us a bit about yourself? 

PG: Sure. I've had the opportunity to work with multiple clients in the IoT area, integrating manufacturing sites all over the world with a centralized cloud. I think this is a fantastic topic to discuss, because it's a niche and not many companies are advanced enough to deliver such projects. So I'm happy to help, I'm happy to be here, and I think it will be a wonderful morning.

MK: Great. Thanks for finding time to talk to us. The funny thing is that when I joined the company and first came to the office, I found out Piotr was there, and we had met, I don't know, 10 or 12 years earlier.

PG: 10 years ago. Yeah.

 

Product Tracking and IoT in Manufacturing 

MK: While delivering some big data projects for one of the largest manufacturers in Poland. So that was a funny reunion. Piotr, in one of our previous episodes we discussed the topic of digital transformation with another guest, and one of the elements of digital transformation is in fact product tracking and the key indicators related to manufacturing. That's the topic we'd like to discuss today. So what, in fact, is product tracking in IoT?

PG: That's a great question. I think it all depends on the maturity of the manufacturer and of the sites they're running. It's especially important in the healthcare and life sciences business, where we have quite broad experience delivering projects together. There, in particular, you have the concept of a batch.

A batch is a set of drugs or medical devices produced at a specific site and released for further use. It's very important to aggregate individual units of production into batches, because it makes them easier to track, and if something goes wrong, it's super important to react quickly and, for example, remove that batch from the market.
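
To make this concrete, here is a minimal sketch of the unit-to-batch bookkeeping described above; the identifiers and data structures are purely illustrative, not taken from any specific system:

```python
from collections import defaultdict

# Illustrative only: map each produced unit to its batch so that one
# faulty unit lets you locate every unit from the same batch for recall.
batch_units: dict[str, list[str]] = defaultdict(list)  # batch_id -> serials
unit_batch: dict[str, str] = {}                        # serial -> batch_id

def register_unit(batch_id: str, serial: str) -> None:
    batch_units[batch_id].append(serial)
    unit_batch[serial] = batch_id

def units_to_recall(faulty_serial: str) -> list[str]:
    """Given one faulty unit, return all units of its batch."""
    return batch_units[unit_batch[faulty_serial]]

register_unit("LOT-2024-001", "SN-0001")
register_unit("LOT-2024-001", "SN-0002")
print(units_to_recall("SN-0002"))  # ['SN-0001', 'SN-0002']
```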

Not only is that crucial for the health of the people who might use a malformed or contaminated batch, it also protects the company from lawsuits. So the ROI of such a project is extremely high, even though it's very hard to measure. And the important part is that tracking starts at the production line, with all the parameters used while producing the items.

I'll make it as simple as possible: there are PLC controllers, each exposing a set of parameters, and there are multiple combinations of such parameters, which change across the different elements of the production line.

You need to be able to track not only the parameters used during production, but also additional ones, like the metrics and measurements from the manufacturer of the device, to enable robust predictive maintenance.

MK: So that's like what? Temperature, the speed of filling, something like this?

PG: Yes. For example, it can be temperature or humidity readings, which are super important not only for the production process, but also from the perspective of the companies responsible for maintaining the production line.

So the lines don't stop, right? Because a stop of the production line is a natural point of waste: you waste resources, you waste money. That's why predictive maintenance algorithms are also in place.

So it's a combination of two worlds: keeping the production line smooth, high quality, and running 24/7, while also tracking all the parameters to make it more efficient, produce less waste, and ultimately produce better units, better batches.

MK: And we need to keep in mind that the quality restrictions in such plants are very, very strict. So you need to monitor the process constantly and react promptly, because when it goes out of range, you have waste.

So you need to optimize it on the go, reporting those parameters on the fly in order to tune the process and reduce the total waste of product.

PG: Yes, and I would even go a bit further. There are two aspects here, from the optimization and digital transformation perspective. One is something a very smart person at one of the sites told me about drug production: if waste is found on the line and it's automatically detected during production, it costs almost nothing to get rid of such a pill or device.

It's a regular part of the process. This happens, right? There's a small number of wasted items on the line, but it happens. If you only catch a faulty item during distribution, you can think of it as ten times more expensive.

And if you don't find it until it's already distributed, it's like a thousand times more expensive than finding it on the line. So you have to be as close to the process as possible and understand it very deeply, so that you catch problems as early as you can.

 

From Single Sites to Scalable IoT Platforms 

MK: Okay. But I assume, and this is of course my assumption, that this is not magic. People do that, our clients do that, but most often they focus on single plants, right? What we're talking about is scaling up: standardizing and optimizing the process so that multiple plants are covered.

PG: Yes, that's the ultimate goal: to think of it from a platform perspective. This was actually my biggest area of responsibility, to build such scalable, platform-based solutions. And here, I think there are two important aspects.

One is to build a platform that's feasible and suitable for different types of sites. We have to categorize the sites, the plants, into at least three categories, because they have different needs and different budgets for delivering such IoT projects. There are different pillars for figuring out which solution is best for them, but ultimately it comes down to standardization, which I think we'll discuss later.

Nowadays the way to standardize the process is called the unified namespace. It's funny, though: even though the name is there, if you ask different people, they'll give you very different definitions of what it is. So I'll give you mine today. But first, on why there's a demand for scale in a platform-based approach:

Usually there are at least 20 or 30 sites across the globe producing different drugs or devices. And this is where life sciences gets really interesting, because there are at least three different types of products being manufactured. The first are regular drugs, which you can buy in a pharmacy.

Let's say this is the easiest production, because it's very repeatable, with fewer restrictions; those are usually non-prescription drugs, right? So let's categorize it as the easiest manufacturing pipeline.

The second are medical devices. This is where it gets a bit tricky. Think of inhalers for asthmatic patients. It gets more complex here because, first of all, the device needs to work very well. It needs to deliver the right dose, and the dosage needs to be very strictly controlled.

With such specific dosing, it's very important to control the production process and monitor all the parameters. And it matters even more because malforming a whole batch of units is way more expensive, right?

And last but not least, the biggest challenge, with the highest performance and reliability constraints, comes with very high-margin factories. By ultra-high-margin factories I mean very specialized plants that produce, for example, the active substances for chemotherapy drugs.

This is where yield is super important, because here we're talking about producing, for example, 10 kilos per quarter, maybe 100 kilos of the substance per year. That's very little, right? Think about it in terms of a site: you have a huge plant producing such an active substance, so a higher yield is critical.

And here, if your batch goes to waste, it has a huge impact on the business: even a small failure in the production pipeline leads to a very high loss, so everything has to be super reliable. These three types of sites have very different demands for the technology, for streaming the data, and for doing it within the budget of the specific use case. That's not trivial.

 

Building a Standardized, Cloud-Connected Architecture 

MK: And to make it even more complicated, pharma manufacturers very often buy plants from other companies. Those plants used to produce something totally different, from different ingredients and substances, so they also need to be re-optimized to produce something new.

And these plants have different setups, so we need to take that into account too. That's where what we call a blueprint comes in, right? A standard approach that takes all of these plants under one umbrella and tries to find an optimized solution that fits them all.

PG: Yes. When it comes to standardization, I think there are two major aspects. One is, on the site level, to understand the data and create metadata around it. And this is actually what I believe UNS is all about.

This is the most complex part, and it requires close collaboration between automation engineers, chemical and biochemical engineers, and IT. I think it's crucial for the success of the project. Because as IT people, we don't have a deep, in-depth understanding of biochemistry.

And that's normal. However, we should be able to understand what kind of data is being transmitted to the cloud. The standardization of the technology and of the blueprints that will run in the cloud, that's pure technology; I think this is where our excellence comes from, but it's only one part of a successful project.

For me, at least as an architect, that part is easier, because that's where we can fit good technologies, glue them together, and achieve the goals under certain pillars; I'll describe later how we evaluate different technologies and what we've been focusing on. However, the standardization part actually starts at the site level. To give the easiest explanation:

You can think about it in terms of the different SI units. For example, you need to know whether a value is in milligrams or kilograms, whether it's a percentage, a temperature, a density. Those things are usually not given out of the box by the devices producing the data. So you need an additional description of what kind of data is being produced, what units the metrics use, and how the metrics can even be described.

And this is where UNS actually comes into place. Because if you don't gather this information and don't work very closely with the people on the line, you probably won't get it, which is very bad, or, even worse, you will assume something else that might not be true.
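
As one illustration of the point about units, here is a hypothetical signal payload that carries its own metadata; the field names and structure are an assumption for the sketch, not a formal UNS schema:

```python
# Hypothetical payload: the raw value alone is ambiguous, so each signal
# carries explicit metadata. Field names are illustrative, not a standard.
reading = {
    "value": 37.4,
    "metadata": {
        "signal": "reactor_jacket_temperature",
        "unit": "degC",   # without this, 37.4 could be anything
        "source": "PLC-12/AI-03",
        "description": "Jacket temperature of reactor R-101",
    },
}

def to_kelvin(r: dict) -> float:
    # Fail loudly instead of silently assuming the wrong unit.
    assert r["metadata"]["unit"] == "degC", "unexpected unit"
    return r["value"] + 273.15

print(to_kelvin(reading))  # ~310.55
```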

MK: And how would you then build reporting on top of the ingested data, for instance, if you don't know what kind of unit it is?

PG: Exactly. So you have to start from the core of the solution, from the source of the data.

This is where it's truly important to capture the unique knowledge that the people on the line have, because they know the process exactly and can describe it for you. And once they do, it stays, right? Obviously change is inevitable; the line and the process evolve, but not as often as we tend to think, and that can also be managed.

 

Project Kickoff, Security, and Integration Challenges 

MK: Okay. So this is our unique value: we have this domain knowledge, we know how to work with such data. Let's start from the very beginning.

Imagine that a client approaches us and says: well, we need to optimize the process, we need to build such blueprints. How do such projects usually start?

PG: It's a very good question. Obviously with an assessment and a workshop, I think that's the easiest answer, because we need to know what we're dealing with, right? By assessment I mean understanding the volume of the data and the level of its aggregation.

Also whether the data is structured in any way, whether there's already descriptive metadata around it. And one super important thing I would check, and I know it's only one of the standards, is ISA-95. It's very specific.

The name alone won't tell you much, but at a high level it's a very nice hierarchy tree that describes, through different parameters, what your production process looks like. And it makes total sense: you specify, for example, the manufacturer name, the plant name, the region of the plant, the device type. It makes sense even for people who aren't close to it, and it makes life easier because you can use it for modeling the data in the future, and it stays.
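
A minimal sketch of such an ISA-95-style hierarchy, which is often flattened into a message-broker topic path; the levels and names below are simplified illustrations, not the full standard:

```python
from dataclasses import dataclass

# Simplified ISA-95-style equipment hierarchy; the exact levels vary,
# and these names are illustrative.
@dataclass(frozen=True)
class EquipmentPath:
    enterprise: str
    site: str
    area: str
    line: str
    device: str

    def topic(self) -> str:
        # The same hierarchy is commonly flattened into a broker topic,
        # keeping signals from different plants addressable and comparable.
        return "/".join((self.enterprise, self.site, self.area, self.line, self.device))

path = EquipmentPath("acme-pharma", "plant-warsaw", "packaging", "line-3", "plc-12")
print(path.topic())  # acme-pharma/plant-warsaw/packaging/line-3/plc-12
```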

So I would start with that. Then the second part is working with the stakeholders: it's super important to understand whether they use any kind of visualizations, and this is obviously your topic, where you're the champion. Maybe I'll explain it from my experience: chemical processes are hard, but when you draw diagrams for the chemists and the people working with them, they immediately see whether a solution is reasonable or not.

It's very hard to do that by looking at tables. So even though data engineers are used to working with tables, domain experts usually work with graphs and visualizations. It's very important to understand whether they already use them, because without them it takes a lot of time to spot what a graph shows instantly: deviations.

Yes, exactly. Deviations, anomalies that might happen. Because this is the most important part, right? Detecting anomalies, detecting malfunctions. Those are the basics I would check.

Then I think we come to the next steps. For example, we need to categorize how crucial the production on the line is, which category it fits. Is it a very regular production? If we lose 15 minutes of production data, what's the impact on the business? How resilient does the solution need to be? That drives different architectures, different costs of the solution, and a different learning curve for the people operating it.

And last but not least, something super important: what are the regulations when it comes to security on the site? I think this is very underestimated in most IoT projects. Sites, for very good reasons, have high security standards, and some things are usually not allowed, for example, sending commands back from the cloud to the site.

MK: Usually they're offline for a very good reason: so that no one can hack into the system, because these are very, very dangerous substances.

PG: Exactly. So as you can imagine, strict security rules need to be in place. It's part of their business and it's understandable, but it makes it a bit more difficult for engineers and architects to create a good solution, because you don't encounter this in many other projects. You need to take it into consideration, because you'll need to open some firewall ports.

There are different network layers, which are standardized in factories. To simplify, factories usually have at least three layers. The basic one, where there's no internet connection and no external connections are allowed.

Then there's what they call the demilitarized zone, where you can actually reach out to the internet: you won't receive anything from the internet there, but you can push from the site to the internet. And there are firewalls between the layers. So it makes things a bit more complicated to build, especially when we're thinking about high-availability software, because…
MK: Two-way communication, instant two-way communication. 

PG: Exactly. So it's super important to understand those limitations, because they mostly impact the timeline, right? Even if we think it won't be a big problem, once we start speaking with the security engineers on the site, it takes time to explain why we need a given port opened on the firewall, and they will ask you all these questions. There's a very good reason for it: if we're allowed to send data to the cloud, it has to be done in a very strict fashion.

 

Smart Data Handling and Real-World Use Case 

MK: Okay. What I understand is that we need to work with two different setups. We have the on-prem setup, which is loved by the plants and the engineers working there, and we need to marry it with the online, or cloud, setup that we need for this data exchange and analysis.

PG: Yes. It's a mix of both worlds, because a plant is obviously physical. It's physical production, so it's on-prem, and you have to have good knowledge of on-prem solutions.

That world is a bit more limited. Still, if you build the software with good architectural and design patterns, it's quite similar to the beginnings of the cloud, to how people understood cloud 10 years ago, because you can run containers inside the site, which simulates at least the basics of the cloud we know today. The cloud part, I would say, is way easier. So it's a mix of both worlds, and it not only needs to work, it needs to adjust to the changing demands of the site.

What companies are looking for is to realize the digital twin idea: to have site-level processes fully replicated in the cloud, which is great. However, one thing also needs to be said here: we need to understand what we really need to gather. When we've worked with different sites, at the beginning, and it's natural, you start by gathering all the data that's available.

And in the end, trust me, you need…  

MK: It’s too much data then.  

PG: Yeah, it’s too much data, and it’s not good for you. It’s not good for the project. 

So it's very important to understand what data is really needed and what can be sampled, gathered every minute, every second, every 10 seconds. And what's also super important, one of the best patterns to have in place, is called report by exception. It's the simplest and, I would say, most meaningful way to avoid the heavy burden of data that isn't needed in the cloud.

I'll explain it this way: you've got some parameter in the production process with a specific value, and you only send the data when that value changes. It's the most important pattern to follow, because thanks to it, you only transmit the changes.

That's way less data: if the value is a constant line and doesn't change for an hour, you only send it twice an hour, right? Whereas if you send every second without the report-by-exception paradigm, you'll have tons of data you may not actually need.
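
A minimal sketch of report by exception for a numeric signal; the deadband and the periodic heartbeat (so a flat line can still be told apart from a dead sensor) are common additions assumed here, not requirements of the pattern:

```python
import time

class ReportByException:
    """Forward a reading only when it changes beyond a deadband,
    plus a periodic heartbeat for otherwise-silent signals."""
    def __init__(self, deadband: float = 0.0, heartbeat_s: float = 1800.0):
        self.deadband = deadband
        self.heartbeat_s = heartbeat_s
        self._last_value: float | None = None
        self._last_sent: float = 0.0

    def should_send(self, value: float, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        changed = (
            self._last_value is None
            or abs(value - self._last_value) > self.deadband
        )
        stale = (now - self._last_sent) >= self.heartbeat_s
        if changed or stale:
            self._last_value, self._last_sent = value, now
            return True
        return False

rbe = ReportByException(deadband=0.1)
for v in [21.0, 21.0, 21.05, 22.3]:
    if rbe.should_send(v):
        print("send", v)   # sends 21.0 and 22.3 only
```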

MK: Okay. Piotr, do you have an example of a project we've delivered that was interesting in terms of monitoring such processes?

PG: Sure. I can think of one that's easy to explain and quite spectacular, because it used machine learning, which I would say also brings a lot of value to the table when it comes to automating the process. So I'll explain it. There was a production line manufacturing a medical device.

I actually mentioned it before: it was the inhaler for asthmatic patients, and it was very important for such devices to be of the highest quality, so that they function for a long time and serve their purpose for the end client. The idea was to apply classical computer vision algorithms, because there were images of how the device looks at different stages of the production line. The goal was to help the automation engineers supporting this line detect malformed devices and set them aside from the production line.

Why was it important? If everything has to be done manually, which was the baseline, it's not going to fly on the line in the long term, because an automation engineer would probably miss some malformed devices. So the goal was to create an algorithm, a pipeline, that helps make the decision whether a device is valid or malformed, and gives the automation engineer a simple go / no-go option for each device.

MK: Okay, so he was only confirming whether it’s okay or not.

PG: Yes. But the point of the business use case is that he only needs to confirm the devices that look like they're malformed.

MK: More or less, this was a kind of automated recommendation model. 

PG: Yes, you can think of it that way. In the end, his tool was Power BI, where we showed two images with a recommendation: we believe this one is malformed, please take a look at it, yes or no. And it had a big impact on the efficiency of the automation engineers, because thanks to this they only focus on, I would say, 5% of the devices.

That leaves them a lot of time to focus on the right parameters, on tuning, on the smoothness of the production line, because in the end they only look at the things that matter. This was also a very good case of joining the on-premise world with the cloud world, and maybe I'll explain why.

Obviously, in the cloud you can train these computer vision models on NVIDIA chips and it works like a charm. But in the end, you need to ship the containers, the Docker images carrying the model, onto the on-premise environment, where the hardware is very limited, right? It's not easy to deploy new hardware on a production site, so the model has to be highly optimized to run on, let's call it, an older PC. It was a very good setup for us, because we had everything we needed in the cloud to retrain and validate the model.

But when we focused on the use case on the production line, we had to tune the model so that it runs smoothly on, I would say, very limited hardware.
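
As one hedged illustration of this kind of optimization (the actual project stack isn't documented here), a model trained in the cloud can be exported to ONNX and served CPU-only on modest hardware; the tiny stand-in network, file name, and input shape below are placeholders:

```python
import torch
import torch.nn as nn

# Stand-in for the trained inspection model; purely illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "inspect.onnx", input_names=["image"])

import onnxruntime as ort

# CPU-only inference: no GPU required on the plant-floor machine.
sess = ort.InferenceSession("inspect.onnx", providers=["CPUExecutionProvider"])
logits = sess.run(None, {"image": dummy.numpy()})[0]
print(logits.shape)  # (1, 2): e.g. ok vs. malformed
```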

MK: So this was the performance optimization of the model itself.  

PG: Yes. It's a classic machine learning operations problem. And I think we delivered good value here, because this solution was scaled to a second factory. So that's one of the use cases I'm proud of.

And maybe just as food for thought, what I really liked about this solution: the major idea was that for such use cases it's better to report false positives than to miss something, because it simply shouldn't happen that a malfunction goes unreported. So we actually deployed two models and let them vote. If either model voted that the device was malformed, it was presented to the automation engineer to either discard it or allow it further down the production line.

And this was really good, because we could use two different solutions, and so far it works very well.
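
A minimal sketch of that OR-style vote; the stub models and the 0.5 threshold are illustrative assumptions:

```python
class StubModel:
    """Placeholder for a trained classifier returning P(malformed)."""
    def __init__(self, p: float):
        self.p = p
    def predict_defect_probability(self, image) -> float:
        return self.p

def needs_review(image, models, threshold: float = 0.5) -> bool:
    # One suspicious vote is enough: prefer false positives over a
    # missed malformation, which must never slip through.
    return any(m.predict_defect_probability(image) >= threshold for m in models)

models = [StubModel(0.2), StubModel(0.7)]
print(needs_review(None, models))  # True -> route the device to the engineer
```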

 

Risks, Best Practices, and the Future of Industrial IoT 

MK: Perfect. So tell me, because we can see this process is very complex, with different factors: security, compliance, hardware restrictions, environmental restrictions. From your perspective, what are the biggest challenges and risks in delivering such solutions?

PG: Great question. One of them is understanding the category of the problem, the category of the site's use, and picking the right solution, optimized along a few pillars: total cost of ownership and resiliency. Resiliency is super important to understand, because the data will only be produced once. So the solution we're building needs to be resilient; it needs to be able to buffer the data even when the internet is gone.

And that can happen for very different reasons. The next challenge is to pick and understand the use cases we'll be working on, because in order to succeed, every project needs to prove itself with good use cases. It's important to pick the ones that deliver value to the site, like the one I've mentioned.

Then there's having the metadata I mentioned in the UNS part, to understand what's really going on at the production site. And last but not least, I would say, is thinking about extensions, about the additional value that can be brought to the table over time, because once we deploy such solutions, they will be used for a long period.

One very good thing we can recommend is also thinking about how to work with third parties. By third parties I mean, for example, the manufacturers of the devices used on the production line, who can then integrate with such solutions. It makes it easier for them to do predictive maintenance, to build solutions that can predict that a line requires maintenance. Then production runs smoothly and the overall production KPIs get even better, right?
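
Coming back to the resiliency pillar above, here is a minimal store-and-forward sketch: readings go into a local buffer first and are drained only when the uplink works, so data produced during an outage isn't lost. `publish` is a placeholder for the real uplink (e.g. an MQTT client), and a production version would persist the buffer to disk:

```python
from collections import deque

class StoreAndForward:
    """Buffer readings locally; drain to the cloud when the uplink is up."""
    def __init__(self, publish, max_buffered: int = 1_000_000):
        self.publish = publish  # callable; assumed to raise ConnectionError when down
        self.buffer: deque = deque(maxlen=max_buffered)  # oldest dropped if full

    def send(self, message: dict) -> None:
        self.buffer.append(message)          # always buffer first
        self.flush()

    def flush(self) -> None:
        while self.buffer:
            try:
                self.publish(self.buffer[0])  # raises if the uplink is down
            except ConnectionError:
                return                        # keep data; retry on next flush
            self.buffer.popleft()             # confirmed sent, safe to drop
```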

MK: Speaking of machine learning, I remember one of the first use cases where it was implemented was with the Rolls-Royce jet engines, where they predicted how many cycles had elapsed and when parts needed to be replaced in order to prevent any failures, right?

PG: Yes, because prevention and maintenance are way cheaper than dealing with a failure, especially with jet engines. For sure, that's a very good use case. But even at a regular production site, it's cheaper to predict and replace parts of a device than to stop the production line, right? Because that's the nightmare of every factory.

MK: Okay, so are there any best practices for working on such solutions or preparing for implementations?

PG: Sure. First of all, have a great understanding of the use cases and of what we're dealing with. Second, work very closely and early with site-level engineers. By site-level engineers I mean, one, automation engineers and domain experts like chemists, to build the metadata model around the data; and two, the security and network engineers on the site, to smoothly enable the deployment and further rollout of the solution.

Third, create pillars for the architectural choice that are easy to explain to a larger pool of stakeholders, so that you don't pick a solution just because you like it. The pillars are easy to measure and tangible, so different people can say: okay, this solution fits and plays a good role in my overall architecture.

Why? Because, for example, the TCO is acceptable, the resilience is great, the ease of use is high; you probably don't want to train all the personnel on the newest technologies just because you like them. That's not the way it works. This is also super important in order not to create silos inside the organization, because I believe the centralization of such platforms has very high value here.

In large organizations that run manufacturing all over the world, it's very easy to create silos per site, and then each site measures things differently. They operate on different KPIs, and it's hard to compare sites around the world. So in order to optimize…

MK: And to use one model to optimize each site, you'd need to go case by case…

PG: You'd have to go case by case, and that's expensive and not manageable at a large scale. It's tempting, because it's there and it's easy to work this way, but in the long term it doesn't bring as much value as a holistic approach to the platform.

So when we're building such a platform, it's very important for the business to be successful there, and I think it's super important to understand that from the client's perspective; it should be one of their goals. Avoiding such silos means using tools with a lower learning curve, standardized in terms of the metrics and how you gather them, maybe through the hierarchy I mentioned, like the ISA-95 model.

This way you've got a similarly applicable solution for different sites, and in the cloud part everything should also be standardized, so people from different sites can look at and compare how things are running and what's being measured, right? Because that brings synergy, it brings more value, and then it's a true platform, right?

And maybe last but not least, I'd say, is thinking a bit from the DevOps perspective: infrastructure as code. Create all the templates and keep the code templated, so that it's easy to roll out. Because that's usually what happens when you become successful: you'll need to roll out to further sites, and you can't reinvent the wheel from scratch. You've got standards in use, and keeping those standards in one place is super important, at least from my perspective.

Because when you go to a site-level steering committee, there's always a senior security and network engineer who's responsible, and they ask very similar questions. So if you have a templated solution, it's easy to prepare ahead and tell them: yes, we need this part, especially this part, this communication will look like that, it's secured…

MK: It has been implemented there and then.

PG: It has been implemented there and then, and tested. Exactly, you've got a proven solution. So keeping such blueprints and templates is, I think, what's convincing for others, right? And since you're introducing something new, something that's not yet super widely adopted, being able to show that it has worked makes the way forward much easier.

MK: Okay. The security officers always resist. Their job is to resist.  

PG: It’s their job, but with good arguments, it’s possible to convince them, right? 

MK: Okay. What do you think is the future of such implementations? This is a niche right now, it's not very widely adopted yet; there are new tools, there are new systems, and we have AI, which is developing very rapidly. So what happens next?

PG: A prediction-type question? Then I'll give you predictions. First of all, I think streaming technologies are getting better and better, so it will be easier to stream data to the cloud and use it in real-time scenarios. Real-time scenarios are hard, but they bring a lot of value, because if you can detect anomalies just in time, that's where you save money. So this is where I think the business value will come from.

The second part of the future? The future will bring easier ways to deploy solutions to the on-premise base. In particular, having high-availability tools at the site level will become a bit easier, and this will also help scale the solutions. And the third thing: as we get more AI and more chipsets, the hardware will get a lot cheaper.

So I believe the hardware at the site level will also be upgraded and modernized a bit, it will be more powerful, and it will be easier to experiment and run solutions that today have to be heavily optimized in the cloud to serve the less powerful PCs on site. Such solutions will get more and more popular. And maybe the last prediction, from the business perspective: business people will start figuring out that they need some level of standardization, that they need to understand the data, because the world is getting more and more global.

So I believe that with digital twins you'll be able, for example, to run operations 24/7: it's easier to control production across different time zones, as long as you have a very consistent, repeatable understanding of the measurements, right? With different regions involved, it's possible, and I would say probably cheaper than doing everything at the local site, to have real 24/7 support for all sites.

I think that's also part of the future: going to the cloud, standardizing things, and streaming more and more. Because one of the things that's really hard these days is getting sub-second latency for real-time production use cases. They have a lot of value, but they're very difficult today.

MK: All right. Thank you very much! Thank you for covering this super interesting topic with us. 

PG: Thanks for the invitation, thanks for having me.

MK: So thank you very much, and see you next time in the next episodes of C&F Talks! 

Key insights on IoT in life sciences

The strategic role of IoT in driving digital transformation in life sciences
How to design scalable, platform-based IoT architectures
Challenges and best practices in pharma IoT implementations
A real-world case of ML-based anomaly detection in manufacturing

Why life sciences leaders should take IoT seriously

IoT for life sciences leaders is no longer optional. It has become a strategic enabler that supports quality and production optimization. In this episode, our Big Data Architect, Piotr Guzik, breaks down how product tracking powered by IoT is becoming essential for meeting regulatory demands, ensuring product quality, and enabling real-time decision-making. He also highlights how combining on-premise reliability with cloud scalability can create robust, future-ready architectures.

A practical approach to implementing IoT in pharma

From evaluating your data landscape, through predictive maintenance, to ensuring security and performance, successful IoT projects start with asking the right questions. Piotr walks through key assessment areas: data volume, structure, metadata, aggregation levels, and visualization. You’ll also learn how to mitigate typical roadblocks in implementation and build a foundation for long-term success.


Meet the expert

Piotr Guzik

Big Data Architect, C&F

Piotr is a Big Data Architect with a strong track record of designing and implementing advanced, user-centric software solutions that drive real business value. With deep expertise in IoT and data-driven architectures, he focuses on transforming complex challenges into scalable, future-ready systems, particularly in highly regulated industries like life sciences. A firm believer in the power of data to fuel innovation, Piotr combines technical excellence with an agile mindset to help organizations unlock new possibilities through smart, connected technologies.

