Cloud Native Testing Podcast
The Cloud Native Testing Podcast, sponsored by Testkube, brings you insights from engineers navigating testing in cloud-native environments.
Hosted by Ole Lensmar, it explores test automation, CI/CD, Kubernetes, shifting left, scaling right, and reliability at scale through conversations with testing and cloud native experts.
Learn more about Testkube at http://testkube.io
API Mocking, Contract Testing, and the AI Shift with Yacine from Microcks
Welcome to the first edition of the Cloud Native Testing Podcast for 2026! In this episode, host Ole Lensmar is joined by Yacine Kheddache to dive deep into Microcks, a CNCF Sandbox project dedicated to API mocking and simulation.
As cloud-native architectures grow more complex, the need to decouple services during development is critical. Yacine explains how Microcks serves as a "Swiss Army Knife" for developers, offering a single solution to mock and test REST, gRPC, GraphQL, and Event-Driven protocols (like Kafka and NATS). They discuss the tool's evolution from a centralized Kubernetes operator to a developer-friendly utility that runs natively in IDEs and pipelines, enabling true "shift left" testing.
Later in the conversation, they explore the intersection of API testing and Artificial Intelligence. Yacine details how Microcks is embracing the AI era by using Copilots to generate mock data and leveraging the Model Context Protocol (MCP) to make existing APIs accessible to LLMs.
Key Topics Discussed:
- The CNCF Journey: Microcks’ status as a community-driven Sandbox project.
- Polyglot Support: Mocking REST, GraphQL, gRPC, and AsyncAPI with one tool.
- The Testing Lifecycle: How to reuse mock data artifacts for automated contract and conformance testing in CI/CD.
- Shift Left: Moving testing from QA environments to local developer laptops and IDEs.
- AI & MCP: Generating datasets with AI and exposing APIs as tools for AI Agents using the Model Context Protocol.
Ole: Great, hello everyone. Welcome to the first edition of the Cloud Native Testing Podcast for 2026. I'm your host, Ole Lensmar, CTO at Testkube. I'm super excited to be joined by Yacine Kheddache, I don't know if I got that right, from the Microcks project. Yacine, how are you?
Yacine: Pretty good. Thanks a lot, Ole. Thanks for the opportunity. I'm very happy to start the year with you guys. That's very, very cool.
Ole: Great, it's great to have you. I mean, I've been at KubeCons for a while now, and I've seen your name more and more, in talks and in the corridors. So I'm obviously really interested in learning more about Microcks, you know, what it is, how it fits into the cloud native world, what you guys are doing, and kind of where you're going. So please tell us.
Yacine: Yeah, so Microcks is a CNCF project. We are a sandbox project today, on the way to moving to incubation. We started the process a year ago, and it looks like the TOC is taking care of it, and we're crossing our fingers to make it happen in the coming months. Regarding the project itself, it's not a brand new project. Let's say it's quite rock solid, based on the fact that it was started 10 years ago by my colleague Laurent Broudoux.
And we donated the project to the CNCF close to three years ago. Regarding what we do and what we focus on: Microcks is fully focused and dedicated to API mocking, simulation, and testing. I'll dig further into how we do that, the use cases, and all the specificities. But it's fully driven by the open source community.
One important point: we are not doing any business. We don't have any enterprise version. It's fully, fully community driven.
Ole: Fantastic. I'm just trying to think of the CNCF landscape. I don't think there are a lot of other testing tools out there. You're probably the only one I can think of that's actually been donated to the CNCF. I mean, Testkube is on the landscape, but it's not a sandbox project, it's not something we've donated. And K6 is there, but it's also not donated. Do you know, are there any other testing tools?
Yacine: That's a very, very good point, because I think we are quite unique, let's say, within the CNCF landscape. And more than that, I mean, when we decided to donate the project, we were certainly one of the first, first of all, a Java tool, not a Go tool. And last but not least, more or less dedicated to the developer persona, which means we are a tool for developers.
Ole: Mm-hmm, yeah.
Yacine: And yeah, it was not that easy, to be honest, to explain to the TOC and so on why we think we are very legitimate within the cloud native ecosystem, and how we can help enterprises and organizations develop cloud native applications on top of the whole CNCF and Kubernetes landscape and ecosystem.
Ole: And so, I mean, I'm very familiar with the API testing space and also mocking in general. But please tell us a little bit: why is API mocking so well positioned to be part of the CNCF, and how do you think it helps developers who are building applications?
Yacine: Yeah, I mean, first of all, when you are dealing with a cloud-native application, most of the time, let's say a good application, a good microservices application and so on, you need to decouple all your main components. Take something that is easy to understand: if you are doing a front-end application, you have a back-end application, and in the middle, to interact, you use and expose back-end services through APIs. It could be any type of API.
Most of the time it's REST HTTP endpoints, and you declare and specify that using the OpenAPI specification. And in Microcks, for mocking and simulation, we decided from the beginning not to reinvent the wheel, but to reuse existing artifacts, most of them well known and used within the enterprise. So for REST, OpenAPI; for gRPC, Protobuf; for GraphQL, the GraphQL schema. For EDA, event-driven and async protocols, we use the AsyncAPI specification and CloudEvents. And we are also able to use Postman Collections for enhancing the testing side and adding examples to your mocks.
And let's say there are two main use cases for mocking. The first is when you are starting from a brand new project and a new idea. Using Microcks, you can describe, with an OpenAPI contract for example, what your service is going to do. Then you ingest that into Microcks and we generate for you, without any code, every operation endpoint and simulation, ready to use. Which means you can start from a contract, mock it with Microcks, and start to iterate with your partners, developers, and so on. So that's one of the first use cases: if you're talking about a cloud native application, you can mock all your dependencies by using the contract and start developing without any limitation, on your laptop or on top of Kubernetes.
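To make that concrete, here is a minimal sketch of the kind of contract fragment Microcks can turn into a live mock. The API name, path, and values are hypothetical; Microcks pairs request and response examples by name to build mock operations, as documented on microcks.io.

```yaml
# Hypothetical OpenAPI fragment: Microcks matches the named example ("eclair")
# on the request parameter with the response example of the same name,
# producing a ready-to-use mock endpoint without any code.
openapi: 3.0.3
info:
  title: Pastry API        # hypothetical service name
  version: 0.0.1
paths:
  /pastries/{name}:
    get:
      operationId: GetPastry
      parameters:
        - name: name
          in: path
          required: true
          schema:
            type: string
          examples:
            eclair:
              value: eclair
      responses:
        "200":
          description: A pastry
          content:
            application/json:
              examples:
                eclair:            # same example name pairs it with the request
                  value:
                    name: eclair
                    price: 2.5
```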
Ole: And that's super powerful. Just one thing, I'm curious: you mentioned the different protocols. Do you still see that REST is the most prevalent, or do you see any of the other protocols getting more or less traction? I mean, there was a big push for GraphQL many years ago that maybe didn't go as far as many thought. Do you have any insight there?
Yacine: Yeah, so as you said, REST is certainly the biggest one, let's say, from a usage point of view. But the solution has been built from scratch for big organizations and enterprises. And what we noticed, what was missing from our point of view, from a tooling point of view, was a tool that is able to mock and test in exactly the same way, based on artifacts, based on contracts, for multiple protocols. Which means: in any organization, yes, everybody is doing REST, but most of the time some of those organizations are also doing GraphQL, gRPC, Kafka, or stuff like that. And that's what we are good at, let's say. We are able to mock, simulate, and test, no matter the type of API you are using. So back to your point, GraphQL is still heavily used in big organizations. We see a lot of usage of federation, in order to have an additional GraphQL layer on top of databases and huge amounts of data, for example, and to expose that through GraphQL. And a lot of event-driven and especially AsyncAPI usage. Kafka is the biggest one, but we have bindings for eight different protocols today, including NATS, which is also a CNCF project, but also Pub/Sub from Google, AWS SNS and SQS, and plenty of others: WebSocket, RabbitMQ, MQTT, and so on.
Ole: Okay, amazing. Yeah, I definitely see the value of having one mocking framework for all the protocols, to your point. We also use REST and gRPC and some event-driven protocols, so it's always nice to have one thing to use and not have to use different frameworks for different things. I agree there. One thing I remember from the old days around mocking, and what I'm interested to hear about, is how you inject your mocks into your infrastructure. So let's say I have service A talking to service B, and I want to mock out service B using a mock I create with Microcks. What's the process for getting service A to use that mock instead of service B? Can I do that with some magic network routing, or is it something I would have to configure in service A, to target the mock instead? Or are there different approaches?
Yacine: Yeah, you just need to point, let's say, your application to the Microcks endpoints that we generate for you. It doesn't matter where you run it: it could be on your laptop, or it could be on top of Kubernetes for a highly centralized deployment, highly scalable and available to hundreds of developers within an organization. That means we are not changing anything in the way you interact with your API. Whether it's a mock
Ole: Hmm.
Hmm.
Yacine: generated by Microcks or the real implementation, we are not changing anything in the middle, which means we don't change any headers. We don't have any specific way to request and query the mock from Microcks. So you just need, let's say, to change the endpoint from the mock to the real implementation when it's done. Or, if it's a partner API or an API you pay for, you just use it for end-to-end testing, for example.
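A minimal sketch of that endpoint swap, assuming a hypothetical PASTRY_API_URL environment variable; the mock URL shown follows Microcks' documented /rest/{service}+{version} pattern, but verify it against your own deployment:

```java
// Service A reads its dependency's base URL from configuration, so the same
// code talks to the Microcks mock in dev/CI and to the real service B in prod.
// PASTRY_API_URL and the service/version names are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PastryClient {
    // e.g. dev/CI: http://microcks.local:8080/rest/Pastry+API/0.0.1
    //      prod:   https://pastry.internal.example.com
    private static final String BASE_URL =
        System.getenv().getOrDefault("PASTRY_API_URL",
            "http://microcks.local:8080/rest/Pastry+API/0.0.1");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(BASE_URL + "/pastries/eclair"))
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        // With the mock, this prints the example payload from the contract.
        System.out.println(response.body());
    }
}
```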
Ole: And if you're using a service mesh like Istio or something, is it possible to do this at the network level, so you don't even have to change the endpoint? Could you basically dynamically route the traffic to your mock without having to change anything?
Yacine: Absolutely. Yeah, absolutely. You can use a service mesh, or you can use sidecar containers. That's exactly what we demonstrated with Dapr, for example. We did a nice talk at the latest KubeCon North America, and we also have a CNCF blog post on that, where we explain how you connect and use Dapr, which heavily uses sidecar containers, with Microcks. But also, in Microcks we have a proxy setting,
which means that for the endpoints you query within Microcks, we can reply with a mock and fall back to the real implementation, or another endpoint, if we don't have the answer. Because something that is important to understand when we are dealing with mocks: Microcks is very smart. I mean, these are dynamic mocks, not static mocks. You can interact at the dispatching layer and create any rules, any dispatching rules or business behavior that you would like to introduce at the mock layer. It's very flexible. We can also do stateful mocks, which means we store what we receive, and you can pick it back up or compute on it, do anything you would like to simulate the behavior of the real implementation, including crazy workflows.
Ole: Is that done declaratively, or would you write that in Java, or is there a mixed approach there?
Yacine: We have two ways to do that. You can create a Groovy script or a JavaScript directly within the dispatching rules in Microcks. You can do it through the interface, or it can be metadata that you add on top of the specification or contract you use, and Microcks just executes that for you. That's it.
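As an illustration, a minimal sketch of what such a script-based dispatching rule can look like. Microcks' SCRIPT dispatcher exposes a mockRequest binding to Groovy scripts, but the field name and response names below are hypothetical:

```groovy
// Groovy SCRIPT dispatcher sketch: return the name of the mock response to
// serve, based on the incoming request body. "mockRequest" is the binding
// Microcks exposes to dispatcher scripts; the "large-order"/"standard-order"
// response names and the "quantity" field are hypothetical.
def json = new groovy.json.JsonSlurper().parseText(mockRequest.requestContent)
if (json.quantity > 100) {
    return "large-order"      // serve the example named "large-order"
}
return "standard-order"       // otherwise serve "standard-order"
```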
Ole: Very cool. So, I mean, it sounds like you're really solving for a lot of different use cases. I think the use case I was getting at with the dynamic injection is also around fault simulation, right? So basically making sure that service A, in my previous example, can handle errors coming back from service B. So you could have a mock that, to your point, passes through many of the operations to service B, but for certain operations stochastically returns an error, just as a way to make sure that service A handles those correctly. So I think the use cases for mocking are many, and it's super powerful. But when it comes to a CI/CD automated testing process, I'm sure it's doable, but walk us through how you would add Microcks to a CI/CD pipeline where you want it to mock out certain dependencies when you run your integration tests, for example.
Yacine: So before moving to the testing part of Microcks, just to add to what you said previously on mocks: it's true that we can simulate errors. Some of our adopters are doing more or less chaos-monkey testing by using a Microcks simulation. But we can also simulate latency on each of these endpoints, and you can specify the latency you would like, and so on. So we have adopters who are using Microcks and injecting the latency they see on their production systems, so that developers are developing against something real, which also reflects the behavior and the latency they will have on the real system. So now, to flip over to the other point, as you said: testing. A good point to understand, and it's good that we started with mocks, because as you understood, with mocking in Microcks, all the data sets, all the examples you have within your simulation, we are just going to reuse for testing purposes. For testing, Microcks does not provide new endpoints and simulations. We act as a consumer, as an API client. We connect to your endpoint, and then we can reuse all the examples, all the data set you had for your simulation, send it to your real endpoint, and do contract testing: check that all the syntax is correct, that everything,
Ole: Mm-hmm.
Yacine: serialization, everything at the network layer, is correct, and that you respect the commitments you have described within the contract, OpenAPI for example. And it can be coupled with functional testing, by using Postman Collection scripts for example, to do real behavioral and functional testing. And of course, you can do that and simulate it by using the Microcks UI, just for testing purposes and so on. And when you are ready, we have a button on top of the testing sequence to add it to your CI. You can pick and choose whether you are using GitLab CI, GitHub Actions, Tekton, or Jenkins; we generate a snippet for you, and you just copy-paste it into your CI. And then you're going to execute, automatically within your CI, the exact same test with the same data set, in order to ensure that you are not introducing any breaking changes on the fly, for example. We have adopters who are doing that on every commit, which means any commit, for example on GitHub, triggers a GitHub Action. We pop up Microcks, and we have, by the way, because that's important, a very, very light image of Microcks. As I said at the beginning, Microcks is done in Java, which means we have a Java native image. We can be up and running in less than 200 milliseconds,
Ole: Mm-hmm.
Yacine: which means in CI it's very, very efficient. And then on each commit, you can check that you have not introduced any breaking changes: contract testing, behavioral functional testing, or business testing, fully automated on all the versions of your API you would like to support and be conformant with.
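For a sense of what driving such a conformance test from code can look like, here is a sketch using the Microcks Testcontainers module for Java (io.github.microcks:microcks-testcontainers). The names follow that module's documentation, but treat the exact API, and the service coordinates, as assumptions to verify:

```java
// Sketch: asking Microcks to contract-test a running endpoint.
// Service name, version, artifact path, and endpoint URL are hypothetical;
// pin a real image version rather than "latest" in practice.
import io.github.microcks.testcontainers.MicrocksContainer;
import io.github.microcks.testcontainers.model.TestRequest;
import io.github.microcks.testcontainers.model.TestResult;
import io.github.microcks.testcontainers.model.TestRunnerType;

public class PastryContractIT {
    public static void main(String[] args) throws Exception {
        try (MicrocksContainer microcks =
                 new MicrocksContainer("quay.io/microcks/microcks-uber:latest")) {
            microcks.start();
            // Load the same contract artifact that powers the mocks.
            microcks.importAsMainArtifact(
                new java.io.File("src/test/resources/pastry-api.yaml"));

            TestRequest request = new TestRequest.Builder()
                .serviceId("Pastry API:0.0.1")                    // hypothetical
                .runnerType(TestRunnerType.OPEN_API_SCHEMA.name())
                .testEndpoint("http://host.docker.internal:8080") // real implementation
                .timeout(5000L)
                .build();

            // Microcks acts as the API client, replaying the contract's
            // examples against the real endpoint and checking conformance.
            TestResult result = microcks.testEndpoint(request);
            System.out.println("Conformance success: " + result.isSuccess());
        }
    }
}
```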
Ole: Wow. And do you see, if you look at the adoption of Microcks and what functionality and features people use, it sounds to me like maybe there are different maturity levels. You start off with static mocking, and then you go through, and maybe the level-five wizard level is that kind of CI/CD integration you just mentioned. Or are there things beyond that you could achieve with Microcks?
Yacine: We have much more than that, yes. I mean, as Microcks is an open source solution, we are, let's say, very versatile. And we have made quite a huge effort, let's say, not to force adopters to use Microcks the way we think it's nice to use, or should be used, but to do our best to let adopters integrate Microcks into their organization and practices.
Ole: Okay, let me know. Tell me.
Yacine: To do so, and back to your point, which is a very important and nice thing to understand: you're right, a lot of people start using Microcks just for mocking at the beginning, let's say. And as I said, when they understand and realize how powerful it is for improving time to market and time to delivery in the development lifecycle, and the fact that they have created a nice data set for simulation, they realize they can use that for testing. Most adopters start by doing mocks, and then they move to automation, simulation, and integration within the CI, because they just realize it's not costly. It's very easy to do, and they just reuse the same artifacts and data set, which is very, very powerful. But when I said we have more than that: over the last two years, we have made a big effort
Ole: Mm.
Yacine: to do a very strong shift left. At the beginning, Microcks, to be fully transparent and honest, was very, very opinionated about Kubernetes. In Microcks, we provide Helm charts and a very, very nice Kubernetes operator, and most of the first adopters we had were deploying Microcks on top of Kubernetes, centralized for their organization and hundreds or thousands of developers. But we realized that the developer persona, most of the time, wants to use their laptop and their IDE, and Microcks running on top of Kubernetes is not under their control. That could be a blocking situation. And as we love developers, we made a huge effort to provide them the native image of Microcks. We have Testcontainers modules and bindings for multiple languages, which means that, for the last two years now, you can run Microcks directly within your code, within your IDE, launch mock simulations and tests, and specify the data set version and the artifacts you would like to use. For example, if you are developing in Java, you can integrate Microcks directly within your Java unit tests, which means
Ole: Mm-hmm.
Yacine: a lot of developers, adopters of Microcks, are now using Microcks to do integration tests directly on their laptop, before moving those tests to QA. It's a big, big win. And when I say it's very versatile, this way of using and deploying Microcks means that developers can really use Microcks the way they would like. And what we have been able to solve with this approach
Ole: before they commit.
Yacine: is the old "it runs on my laptop, but not on QA, and I don't know why, and it's not my business" effect. With this approach, that's not going to happen anymore, because you just reuse the same data set. It's exactly the same: the same way of testing, locally or centralized.
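As a rough sketch of that shift-left flow with the same Microcks Testcontainers module for Java (artifact path and service coordinates hypothetical; method names follow the module's documentation but should be verified against it):

```java
// Sketch: spinning up Microcks inside a JUnit 5 test so the integration test
// runs on a laptop exactly as it would in CI, from the same contract artifact.
import io.github.microcks.testcontainers.MicrocksContainer;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import java.io.File;

class PastryClientIT {
    static MicrocksContainer microcks;

    @BeforeAll
    static void startMicrocks() {
        microcks = new MicrocksContainer("quay.io/microcks/microcks-uber:latest");
        microcks.start();
        // Same contract that powers the shared, centralized mocks.
        microcks.importAsMainArtifact(new File("src/test/resources/pastry-api.yaml"));
    }

    @Test
    void callsMockedDependency() {
        // Point the code under test at the generated mock endpoint.
        String mockUrl = microcks.getRestMockEndpoint("Pastry API", "0.0.1"); // hypothetical
        // ... exercise your service A client against mockUrl ...
    }

    @AfterAll
    static void stopMicrocks() {
        microcks.stop();
    }
}
```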
Ole: You use the same, yeah. That sounds very cool, definitely. I'm also thinking I have to bring up AI. You know, what we see today, at least from our side, is that people are using AI to generate more and more code, and there's a need to run tests, more tests, more frequently, and preferably early, before committing, right? It's so easy to ask your agent to, you know, fix this thing or do that. And then of course, if you can validate that immediately, before even committing, using a tool like Microcks, I think there's a lot of value there. And hopefully that will also drive the usage of testing tools like Microcks, as a way to embrace AI for increasing the velocity of generating code. Since I brought up AI, I'll ask you: do you see AI in Microcks itself, or do you see AI somehow influencing how people are using Microcks?
Yacine: Let's say both. And to make it efficient, we started, again, by trying to understand and help developers. For this purpose we introduced, more than two years ago I think, what we call an AI copilot within Microcks. Which means, as a developer, you just import your contract. But most of the time, with existing contracts, you don't have examples, or not enough examples. So a lot of developers just create dummy examples, random strings or numbers or whatever, just to move forward. Now with Microcks, you can just click a single button and ask AI, and you can set up the LLM you would like, your own token or whatever, to generate examples for each operation in a contract, and it asks you to validate and pick and choose the ones you would like. And you can export those examples to share with others, and include them in the data set you use within your organization. So that first use case was to simplify and help developers; people who don't want to have to ask product owners or business owners to provide examples can generate them through AI. And that's very efficient. The other point is more:
you are doing APIs, you are creating API services. But now, I mean, we are all in the AI era, and a lot of organizations would just like to expose, not reinvent the wheel, they would just like to expose their existing API services, most of the time business services, to the AI world. And thanks to the MCP protocol and our friends at Anthropic, you can now expose that as MCP tools. And now in Microcks, when I said we generate mocks no matter the protocol or the type of API you use: for each of our mocks, we also generate MCP streamable HTTP endpoints, which means each of your REST operations is accessible as an MCP tool. So from a mocking point of view,
Ole: Mm.
Mm-hmm.
Mm-hmm.
Yacine: the great thing, and what our adopters really enjoy, is that through this approach they can tune, or test, whether the API or existing services are AI-friendly. And if they're not, they can reshape the back-end API or add an MCP server layer, with a better description, a better MCP tool name, just in order to make it more efficient and usable,
or predictable, because that's important as well for LLMs and any conversational agents.
Ole: Mm.
That's very cool. I continue to be amazed by how MCP and this whole ecosystem is evolving, and all the creative ways people are stitching together the fabric of all these protocols and tools and approaches. I just feel like it's such an exciting time. I know engineers have said that for many years, but there are so many cool things going on. I obviously hadn't thought of what you just told me, but it sounds like I need to read up more. We're almost out of time, but I did have one more question, and I have a lot more questions too. One is around mocking of MCP servers: is that something people do, to validate that the LLMs on their end, given a certain response from an MCP server, do the right things? Is that a use case, or am I out on a limb there?
Yacine: That's what we are doing with Microcks today, thanks to the MCP endpoints we generate, but more for translating and exposing existing APIs as MCP tools. And remember, Microcks adopters are big organizations. Some of them have started to develop their own MCP servers, but most of them realize that that's nice for testing, let's say, but not
Ole: Okay.
Yacine: for going to production. You generate two branches, which means you need to duplicate all the business logic and stuff like that, and build up new expertise. So anything that can, let's say, expose existing APIs in the right way, from a security point of view, to be LLM-friendly, to not explode your context window, to reduce token usage and so on, is a killer feature for them. And that's exactly what Microcks is doing, plus some of the solutions we are working on. I can disclose that if you check out the solution named Micepe, m-i-c-e-p-e dot io, that's more something that is connecting the dots, let's say, for moving to production.
Ole: Wow. And that maybe leads me to my next question, which is: what's in the oven, what are you working on for the future? That sounds like one answer. Anything else you can share, or directionally, strategically, how do you see Microcks evolving?
Yacine: Yeah, so first of all, moving forward and leveling up within the CNCF. That's not technical, but it's important. The maturity level of Microcks is growing very, very fast, and the number of adopters as well. And we would like to stay fully committed to our adopters and the community: we want to keep driving this project the open source way, and make it better, more efficient, and growing within the CNCF organization. So that's an important step for us. From a feature point of view, everything related to AI simulation in Microcks is an important topic, yes. We already demonstrated in the latest KubeCon talk that we can also simulate a ChatGPT API with Microcks, so we can simulate the LLM itself, be sure that it's predictable, and speed up what you're going to develop on top of it. So that's important. And, as I said, more or less as a teaser: there's another project that may become a product. It's not open source today, we are thinking about it, and that's micepe.io. It's a different path. Check it out, the private beta is open.
Ole: Mm-hmm.
Yacine: The feedback has been amazing, because it has solved a lot of issues.
Ole: So that was M-I-C-E-P-E
dot I-O. Is that right?
Yacine: Correct.
Ole: Awesome, I will check that out. And with those words, Yacine, it was a true pleasure having you. It's so exciting to hear about your project, and we wish you all success going forward. I'm looking forward to bumping into you at KubeCon; I'm guessing you'll be in Amsterdam, I hope so at least, and we'll catch up there, if not before. Thank you so much, and of course, thank you to everyone listening. Thanks. Take care. Bye-bye.
Yacine: A pleasure, thanks a lot.