Tech Tomorrow Podcast

Transcript: What boundaries should define our relationship with agentic AI in large-scale systems?

Read the transcript of Tech Tomorrow's ninth episode, in which David Elliman and guest Sam Newman ask: what boundaries should define our relationship with agentic AI in large-scale systems?

DAVID ELLIMAN

Hello and welcome to Tech Tomorrow.

I'm David Elliman, Chief of Software Engineering at Zühlke. Each episode we tackle a big question to help you make sense of the fast-changing world of emerging tech. Today I'm joined by Sam Newman, a consultant and author with over 30 years of experience in software development. Currently, Sam specialises in system architecture and advises clients on AI integration.

So, who better to help me answer the question: what boundaries should define our relationship with agentic AI in large-scale systems?

SAM NEWMAN

The way we typically tend to interpret the concept of agentic AI systems is LLM-powered agents that are able to work with a higher degree of autonomy. So rather than I ask you something, you give me a response, and then the thing I'm talking to just waits for my next prompt, these agents are going off and doing things on my behalf, reacting to external inputs.

At their heart, though, at least the ones we're talking about nowadays are still powered by LLMs, and so come with all the associated challenges in that space: non-determinism and everything else. I'm being slightly careful with my language because I dislike the fact that AI has been reduced to LLMs; AI is way more than that.

DAVID ELLIMAN

It might be worth just relating the nature of determinism for our audience.

SAM NEWMAN

So, when we say something, a programme for example, is deterministic, we mean that if you run it with the same inputs, it will give you the same output. Simplistically put, you get reproducibility: the same programme with the same inputs will give you the same answer again and again.

And that's often what we want in software, though not always what we need. At the heart of LLMs, it's basically all about prediction. What we're looking for is something that plausibly looks like the right answer, and in a lot of cases that's absolutely fine. When you are doing, say, image generation based on a prompt, the fact that different images can be produced from the same prompt isn't a massive problem, because we're not trying to get an exact answer.
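A toy sketch in Python of the distinction Sam is drawing; the "LLM" here is just a random choice standing in for sampled model output, not any real model API:

```python
import random

def deterministic_sum(values: list[float]) -> float:
    # Same inputs, same output, every time: reproducible.
    return sum(values)

def plausible_answer(prompt: str) -> str:
    # Stand-in for an LLM: it produces something that *looks* right,
    # and repeated calls with the same prompt can differ.
    return random.choice(["42", "42.0", "about 42", "41.9"])

assert deterministic_sum([39.0, 3.0]) == deterministic_sum([39.0, 3.0])  # always holds
print(plausible_answer("What is 39 + 3?"))  # may print a different answer each run
```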

The challenge comes when we take software that gives us different answers for the same inputs and start trying to use it in places where we actually want the same answer. To give you a real-world example, I spoke to a product owner at one of my recent clients, and they were frustrated that ChatGPT was letting them down.

I asked them what they were doing, and they had basically stopped using Excel and were instead copying and pasting formulas and, effectively, whole spreadsheets into ChatGPT, getting ChatGPT to do sums like Excel. And it's like, that's not what it is. It doesn't know how to do that. It will give you numbers that might be right, or that might look right. It's not actually going to give you the right answer.

And I think that can be a challenge, because it's not really explained clearly to general users of this technology. And it gets further compounded by the UX we have with things like ChatGPT especially: it likes to be very confident in its answers, and so we think it can do more than it actually can.

DAVID ELLIMAN

Let's hear a little about what clients are asking you, what you're experiencing, how much you're being asked about AI agent systems, complex architectures, et cetera.

SAM NEWMAN

I've had some detailed discussions around the use of gen AI in the context of software delivery, like what role it plays, co-authoring and the like.

I mean, that's been a really, really hot topic. One of my clients in the US has been very thoughtful about their use of those tools. You know, they're still being very open about the tools they're using. They're recognising this is changing. They're not fixing themselves into one particular workflow.

My main contact there has a mantra he uses internally: he wants his developers to be more human. His idea being, okay, if computers can do more for us, let's double down on the things that make us human. So he's actually investing more in his people, getting them talking to customers, working on their presentation skills, getting better at working together, which I think has been a really interesting take.

I think the architectural stuff is getting really interesting. People saying, ‘I've gotta open up my existing systems for GenAI workflows. Can I just stick an API in it?’ And it's like, well, now we have to get into questions about the security of that side of things as well.

DAVID ELLIMAN

An API, or application programming interface, is simply a way for one programme, an AI system included, to interact with another programme or system.

SAM NEWMAN

A lot of this is how do we architect for uncertainty? And we've done this for a while, but now the stuff we're uncertain about, we're really uncertain about. There's a lot of people going, I don't know if this is right, but I'm talking to real AI experts and they're saying, we don't know if this is good either.

My job with my clients has always been to be as truthful as I can, so I can't go and give them firm answers. All I'm trying to do with my clients at the moment is help them chart a path that heads in the right direction for what we know now, but doesn't close off the fact that we think things are going to be rocky in the future.

So, I think it's the old agile mantra: embrace change. There's just a lot more change nowadays.

DAVID ELLIMAN

Indeed, indeed. And in the same vein: AI takes, and has long taken, many forms, yet agentic AI seems to be associated specifically with doing something for you. How do you feel about that? There's a sense that this is all somehow new.

SAM NEWMAN

I've had a few conversations about this quite recently. You know, when we think of automation in the context of software delivery, automation in that world needs to be deterministic. It needs to be reproducible. If you introduce non-determinism into that part of the automation cycle, all sorts of things start breaking apart.

I've had conversations with people about using agents as part of deployment workflows, for example. And I've been saying, look, it's fine to have your agent trigger a deterministic process, because you know whether it has triggered the process or not, right? That non-determinism, did it actually trigger the process, is something you can track. But for the details of the automation in that space, I need things to be done in the way that they are done, for very concrete reasons.

And so what I think is really important for people to understand, when they're thinking of using agents to automate processes, is where you can allow non-determinism and where you need determinism.

This also picks up a general thread in this space, which is that, for a number of reasons, it's important not to treat these things as monolithic agents, but rather to decompose them into smaller pieces. That gives you a whole lot of flexibility around things like your token spend and mixing and matching models and vendors, but also around mixing deterministic and non-deterministic automation.
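As a rough illustration of that decomposition, here is a minimal Python sketch; the step names and the stubbed model call are invented for this example, not taken from the conversation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[str], str]
    deterministic: bool  # lets you reason about cost, retries and testing per step

def classify_enquiry(text: str) -> str:
    # Stub for a model-backed step; in practice this would call whichever
    # vendor/model this step is currently bound to.
    return "complaint" if "unhappy" in text.lower() else "general"

def route_enquiry(category: str) -> str:
    # Plain deterministic code: reproducible, testable, zero token spend.
    return {"complaint": "support-team", "general": "info-desk"}.get(category, "info-desk")

pipeline = [
    Step("classify", classify_enquiry, deterministic=False),
    Step("route", route_enquiry, deterministic=True),
]

def run(payload: str) -> str:
    for step in pipeline:
        payload = step.run(payload)
    return payload

print(run("I'm unhappy with my order"))  # -> support-team
```

Because each step is small and labelled, swapping the model-backed step for plain code, or for a cheaper model, touches one entry in the list rather than a monolithic agent.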

DAVID ELLIMAN

I think that sometimes people will look at a particular decision, made with some level of confidence that you'll take one route versus another, and that could be something you can reason about. But if you put that in a systemic chain, where a number of these things are happening back to back, you could get some sort of systemic drift, and all of a sudden you think: what on earth is this doing?

And it's bad enough to try and understand what a model is doing at the best of times, let alone what a system of models is doing.

SAM NEWMAN

And this is a general challenge. There's been a shift towards effectively breaking problems down into lots of steps where you are invoking a model, maybe in a chain or a fan-out.

The issue is that you can have a failure early on in that chain; by failure I don't necessarily mean it didn't work, in which case you know it didn't work and can try again, but that the output might not line up with what you want. And if the output from one model becomes the input to another model,

then even if the rest of the chain is doing exactly what it should be doing, you've poisoned it early on. That is one of the challenges of moving to this more modularised, workflow-based model, not just for agentic gen AI solutions but for other types of gen AI workflow: if you have a series of steps and get something wrong early on, you could have issues. It doesn't mean you shouldn't do it, you probably still should break these things down into smaller pieces, but it is a set of challenges that needs to be dealt with.

DAVID ELLIMAN

When Sam talks about poisoning, he means that if an early step in a chain of AI-driven tasks produces a flawed output, every subsequent step inherits and amplifies that error, even if those later steps work perfectly. To guard against this, you need validation gates between the steps, essentially checkpoints that verify the output of one stage before it becomes the input to the next.

This might include schema validation to confirm the structure is correct, confidence scoring to flag uncertain outputs, or deterministic checks: if a step is supposed to extract a number, verify it's actually a number. You can also introduce human-in-the-loop reviews at critical junctures, and design your workflows so that each step's contract, in other words what it expects in and what it promises out, is explicit and testable.
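By way of illustration, a minimal sketch of one such gate in Python, assuming a hypothetical upstream contract of "valid JSON with a numeric total":

```python
import json

def gate_extract_total(step_output: str) -> float:
    # Validation gate: verify the upstream step's contract before its output
    # becomes the next step's input.
    try:
        payload = json.loads(step_output)
    except json.JSONDecodeError as err:
        raise ValueError(f"step output is not valid JSON: {err}") from err
    total = payload.get("total")
    if isinstance(total, bool) or not isinstance(total, (int, float)):
        raise ValueError(f"'total' is not a number: {total!r}")
    return float(total)

print(gate_extract_total('{"total": 129.90}'))   # 129.9 flows downstream
# gate_extract_total('{"total": "about 130"}')   # raises: the poisoned output stops here
```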

So when we get problems in a complex system, we have to consider rollback, and rollback in a distributed AI-augmented system is genuinely harder than in traditional software; that's something architects need to design in upfront. In a conventional deployment, you can often redeploy the previous version and restore a database snapshot. In a chain of AI-driven steps, a flawed output may already have propagated downstream before anyone notices. So how does Sam step back to consider how to build a system that successfully integrates agentic AI?

SAM NEWMAN

At the moment, when I'm trying to plan that process out in my head, I'm pulling in concerns around security, and I'm pulling in concerns around the general volatility in this space, and because of those factors, which I can dive into, I'm very clearly ring-fencing any of these workloads, right? I want abstraction, so I want the system that I have built and that I run, that has my data, over here.

I don't want these agent-y little components right in the heart of that system, for a bunch of reasons. I want them kept separate, with some abstraction over them. From the outside, one of these components, well, this goes back to the original work on modularity, right? The whole point of a module is that it has information hiding.

You don't know what the internals are; you have a nice abstraction on the outside. That's how I want to treat my agents, and even the components my agent uses. That principle still applies. And the reason, to come back to the uncertainty in the AI space, is that we want to be able to switch our models out. We know that the big AI companies are haemorrhaging money and are completely opaque around their financing.

There's a chance one of them could go bust. So against that backdrop we've got vendor uncertainty, and also what's great now might not be what's great a month from now, or something else comes out. So it's even more important that we have abstractions around these things. Also, as you start breaking out these workflows, you start realising that the thing an agent was doing, I could now just write some code for it and replace it, and save myself some money.

Having clear abstraction boundaries around it also makes it much easier for me to manage some of the security concerns. And there are additional benefits to deploying these modules as separate services in a distributed system, what we might call microservices, because that way we can further isolate access to data. If these were just modules inside a single process, that would matter: one process can consist of lots of modules, and they all tend to have access to the same data as each other, which is a bit of an issue when we've got legitimate concerns about bringing our data together with these agents. So having those abstractions also allows us to be much clearer around information classification.

What data are they allowed access to? You can use things like network routing to understand the relationships between things more clearly.
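One way to express the ring-fencing Sam describes, sketched in Python; the summariser domain and both implementations are hypothetical, invented purely for illustration:

```python
from typing import Protocol

class Summariser(Protocol):
    # The contract the core system sees; the internals stay hidden (information hiding).
    def summarise(self, document: str) -> str: ...

class ModelSummariser:
    def summarise(self, document: str) -> str:
        # Placeholder for a call to whichever vendor this component is bound to.
        # Switching models or vendors changes only this class, nothing upstream.
        return f"[model summary of a {len(document)}-character document]"

class TruncatingSummariser:
    # Deterministic stand-in: same contract, no model, no token spend.
    def summarise(self, document: str) -> str:
        return document[:100] + "..."

def build_report(summariser: Summariser, document: str) -> str:
    # The core system neither knows nor cares what sits behind the boundary.
    return "REPORT: " + summariser.summarise(document)

print(build_report(TruncatingSummariser(), "A long incident description " * 20))
```

The same boundary is where data access can be constrained: deploy the module as its own service and it only ever sees what you explicitly hand it.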

DAVID ELLIMAN

I know you were central to the development of microservices, which had a very easy relationship with domain-driven design and the construction of bounded contexts around something that means something in the business.

And you can say, well, I can think about that in a cohesive sense; it's got its own data and things that operate within it. That's a good candidate for a microservice. And from listening to you, I think that language translates very well: if it's something you want some sort of LLM participation in, for whatever reason, then you may have a model, its data utilisation, and its data sources wrapped up in something we'd be familiar with as a bounded context.

SAM NEWMAN

I think that's exactly right and I think in many cases that isn't happening and that's causing some other interesting problems. I mean, how many companies have gone out there because their board told them to have some AI and they've had the people that know about this stuff create some sort of solution off to one side to do some AI?

But these are often clean-room implementations, divorced from the rich domain logic that makes up the existing application architecture. These existing systems hold a lot of our understanding of the domain in their code, and that often represents how people actually work with it and use it.

If we are going to create AI-driven or AI-powered user-facing functionality, it needs to be in and part of the product domain. It needs to know about the domain, know about the concepts in the system. So for me, absolutely it should be using domain-driven design speak. You know, I would imagine you might have bounded contexts which are implemented by agentic workflows or just generative AI solutions in general.

Right. And this then opens up the discussion around the use of things like MCP. Right? So we've had this kind of way of getting our new...

DAVID ELLIMAN

I was going to say, do you want to define that for us?

SAM NEWMAN

MCP stands for Model Context Protocol, and it's basically a standard way to get LLMs talking to existing software. So you've already got a programme and you want to have your LLM talk to and control your programme. So what you do is you create something that matches this MCP specification that allows an LLM to talk to the MCP server, which in turn controls your software.

I could have an MCP server for Excel, for example, which would then allow my LLM to do stuff with my Excel application. That's the generic idea. And this is an obvious place, to an extent, where some of this could happen. There's an MCP server for GitHub, for example, so you can have your LLM manage your GitHub account: create repos, check files in and out. And now

I can just have my LLM talk directly to GitHub via that interface. So then, coming back to your point, we start thinking: a microservice can expose lots of different interfaces that allow different types of consumer to use its functionality in different ways. One option we've got here is to say, well, this microservice contains information about product history.

Rather than just exposing, say, a REST API for use by other microservices, does it also expose an MCP server directly? That's something I think needs a bit of exploration. One of the challenges around the MCP specification is that a lot of things, like security, were a total afterthought, so we've been running to catch up on that.

And there are some other downsides to it at the moment. A lot of people are pushing much more towards, no, you just have one sort of MCP shim between all of your existing infrastructure and your agent stuff. I do have some concerns about that model, but I think we're still working out what good looks like in that world, to be frank.
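For context, a minimal sketch of what a "product history" MCP server along the lines Sam describes might look like, assuming the official Model Context Protocol Python SDK (`pip install mcp`) and an invented in-memory lookup:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("product-history")

@mcp.tool()
def product_history(product_id: str) -> str:
    """Return the change history for a product (hypothetical domain lookup)."""
    histories = {"sku-123": "2024: price change; 2025: renamed"}
    return histories.get(product_id, "no history recorded")

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable LLM client can call it
```

The point is the shape, not the specifics: the microservice keeps its REST API for other services and additionally publishes a narrow, tool-shaped interface for LLM consumers.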

DAVID ELLIMAN

I think you made an important point earlier that I just want to go back to. We were talking about having a framework, call it a workflow, whatever you want, some interconnection of components with agent decision-making either orchestrating them, or within the components, or both.

There's a sense that the orchestration, and the construction of whatever a component does, is easy or easier to build with AI coding. And you made the point that you can get to where you realise you could write the code for that yourself, AI-assisted or not, it doesn't really matter. You start to realise that some things, if not the system overall, then certainly at component level, can actually be serviced deterministically. And I very often find a misunderstanding where people think they need an LLM to do something when what they're actually describing is something good old software has been doing for decades.

SAM NEWMAN

Yeah. And it could be good old software that you've written with a generative AI coding tool, right?

DAVID ELLIMAN

Yes.

SAM NEWMAN

Because that's a whole different thing, right? I could have a deterministic component of my system that I can prove is deterministic and have tests around, and I could have authored it with an AI coding agent. I also think there are wider discussions around things like the cost of tokens.

DAVID ELLIMAN

In the AI context, tokens are the units into which prompts are broken down: individual words, parts of words, punctuation or other elements. Each token takes computing power to process, and that comes at a cost.
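To make that tangible, a small sketch using OpenAI's open-source `tiktoken` tokeniser; the per-token price below is a made-up figure for illustration, not any vendor's real rate:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by many recent OpenAI models
prompt = "What boundaries should define our relationship with agentic AI?"
n_tokens = len(enc.encode(prompt))
print(n_tokens, "input tokens")

hypothetical_price_per_million = 3.00  # USD; illustrative only
cost = n_tokens * hypothetical_price_per_million / 1_000_000
print(f"~${cost:.6f} for the prompt alone, before any output tokens")
```

Fractions of a cent per call look negligible until a decomposed workflow makes thousands of such calls per user interaction.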

SAM NEWMAN

Tokens are currently probably under-priced by most providers, and that's partly hidden by the fact that a lot of people pay subscriptions for these services. If you were paying the real token cost, you'd start to see what it adds up to. If you've got a system that consists of multiple different little models doing bits and bobs for you, maybe as part of a workflow, the thought may not be "this is what I want, a deterministic solution" so much as "this thing's getting quite expensive".

Could I replace it with a bit of code we've written ourselves? Our token costs are too high, so let's just do that and bring the token cost right down: we have something with the same behaviour that we've written ourselves. And the thing is, if you're getting a system up and running rapidly, maybe you don't do that initially.

But as you start to understand what the software needs to do, how it needs to operate, you might say, you know what? It's worth us spending two or three months to rewrite that bit with our own code just to bring costs down, or we rework it to work in a different way with a different provider. Or we, you know, start switching it out for a model that we can run locally.

Running models yourself isn't really viable at the moment for anything other than quite small models, but there are possibilities there too. You might realise you could actually solve this with a model of just a few billion parameters, something you could run on your local hardware.

DAVID ELLIMAN

During the move to the cloud, there was optimism that it would drastically change the cost model, but there was a reality gap. The promise of cloud was compelling: swap your capital expenditure on data centres for a flexible pay-as-you-go model, and the cost should fall. The reality for many organisations was quite different.

Without active cost governance, cloud spending often exceeded what organisations had been paying on-premises. Architectures that weren't designed for the cloud's pricing model, always-on virtual machines instead of event-driven elastic workloads, simply replicated the old costs in a new location. So we should ask whether the lessons learned there have actually translated into discussions of AI token costs.

And honestly, not really, not enough. We're seeing some of the same patterns repeat. There's an initial wave of enthusiasm, just plug in the API and let the model handle it, but without rigorous analysis of what each call actually costs at scale. And Sam rightly points out current token pricing is almost certainly subsidised.

The big providers are in a land-grab phase, prioritising market share over profitability. When those economics normalise, organisations that haven't been tracking and optimising their token consumption are going to face some uncomfortable surprises. The parallel with early cloud adoption is striking.

The technology is genuinely transformative, but the unit economics need the same discipline we eventually learned to apply to cloud infrastructure. As Sam and I wrap up, I'm curious to know where he thinks this all might be heading.

SAM NEWMAN

I think there are some fantastic use cases for the generative AI technology that we haven't worked out yet because I think we're still kind of using it in quite boring ways. I think some of the stuff we've been fixated on, like code generation, is the least interesting stuff to me in that space.

So I'm actually quite positive around that. I'm concerned and pessimistic about the macro picture, because I think we have to recognise that the largest investment of capital in human history has gone into building out data centres, primarily for these AI companies. A lot of the data centres we're building don't actually exist yet. They exist in theory, and the amounts of money are eye-watering and vast, like nothing we've ever seen before.

And that's happening in a set of industries that are opaque about how that money is being spent, with some concerning circular financing. Circular financing is kind of fine in rather more mature markets; these aren't mature markets. So I'm deeply pessimistic. My biggest fear is not that OpenAI or Anthropic go bust. I actually think that would be one of the best things that could happen.

I mean, if you look at all the previous generations of step changes we've had around technology, the first-gen companies always fold, right? And that's just how it should be because somebody else picks up the pieces and they step on from that. My biggest actual concern is that these companies who have taken on these vast amounts of debt without any real sense of ever being able to make a profit off this because their dreams of achieving a general intelligence haven't borne any fruit,

my biggest concern is that they've been laying the groundwork for a couple of years now to get a bailout if they fall. And there's a really good case to be made that if the finances fall apart for OpenAI, because of the contagion that would cause, the US taxpayer, and by extension a big chunk of the world, will end up having to pick up the tab.

So pessimism at that level.

DAVID ELLIMAN

So given this backdrop: with any new innovation there is always risk and uncertainty, but there's more of it here because it's a bigger deal. What would good look like? Might the bubble just shrink rather than burst?

Or might there be some level-setting that occurs? I guess the question could equally be: in five years' time, what do you think this market is going to look like? Which is a bit of a tough question, to be honest, Sam.

SAM NEWMAN

Yeah. Okay. Just predict the future. If we sort of mapped this to previous step changes we've had around technology and things, the companies that built the first railways went bust. We still have railways. The companies that built the internet went bust, but we use the networking they left behind. It would be unfortunate if we looked at the success of an individual company as being important to the overall concept.

And that's the thing I think we need to divorce quite significantly. We've proven that there are lots of valuable things we can use these LLMs for, and they're already having a significant impact in some sectors of work. But I also think we're only at the edge of really thinking through what the good use cases are. We're at the beginning of that, not the end.

And at the moment I think we are almost being technology-driven: we've got a cool new tool, how should we use it? Because we have to use it, because everyone else is using it. That for me is not where the future lies. The future lies in thinking from the outside in: how can we make our users' lives better with this?

And I think some companies are getting it and some aren't. And I'm hoping we get more companies that get it because that will drive change in all of these spaces. But I think increasingly the centralisation of power around where these things are run and who they're operated by is actually not good for anybody.

So I'm hopeful that we will have a larger plurality of places where we can run these workloads. I don't think the answer is bringing stuff in-house for most people, unless you're a big company. I would like to see more local players in this space that provide services to people in those countries.

By which I mean they're operated by companies from those countries as well, not by offshore entities. It shouldn't be the purview of the massive multi-billion-dollar companies to drive forward this innovation.

DAVID ELLIMAN

We've talked a lot about the boundaries that define our relationship with agentic AI at a human level: you've discussed security, we've talked about leveraging bounded contexts, and so forth. Is there anything else you want to say, just to wrap that up?

SAM NEWMAN

If you are worried about the security side of this, I would thoroughly recommend taking a look at Simon Willison's post about the lethal trifecta. You should read Simon Willison's blog anyway, but there he explains the three things that have to come together for these systems to be dangerous, and I think that can help structure your thinking about how you limit data access and avoid some of the worst data concerns around this.

I think anyone who tells you with certainty that this is how things should be done is either lying to you or misunderstands the state of the world. You've got to roll with the punches; you're not going to get everything right. So for me, what that means is: do lots of small experiments, and get in the habit of doing that.

That's what's going to stand you in the best stead going forward: be open to change. Keep exploring, keep experimenting, listen, and talk to people. Find peer groups you can chat to, because no one knows what's right. No one knows what the future brings, so you just have to be ready to react.

DAVID ELLIMAN

So, what boundaries should define our relationship with agentic AI in large-scale systems?

From my conversation with Sam, a few principles come through clearly. First, containment: ring-fence your AI components behind well-defined abstractions, just as you would any volatile dependency, and don't let them reach deep into your core systems without clear contracts and security boundaries. Second, intentionality.

Not everything needs an LLM. Be honest about where non-determinism adds genuine value and where good old-fashioned deterministic software does the job better, faster and cheaper, and indeed more predictably. Third, reversibility: design your system so that when, and not if, an agent gets something wrong, you can trace what happened and recover gracefully.

And finally, humility. As Sam puts it, nobody knows exactly where this is heading. The organisations that will navigate this well are those that keep their experiments small, their architectures modular, and their minds open to change. The boundary isn't a wall. It's a well-designed interface, and like all good interfaces, it should let the right things through while protecting what matters most.

Thank you for listening to Tech Tomorrow, brought to you by Zühlke. If you'd like to learn more about what we do, you can find links to our website and more resources in this episode's show notes. Until next time.
