Author: The Cloud Advisor

  • From Marzipan Bars to Modern Product Strategy: What Conjoint Analysis Still Teaches Us

    From Marzipan Bars to Modern Product Strategy:
    What Conjoint Analysis Still Teaches Us


    Back in my university days, I spent weeks staring at survey data about chocolate bars. Not exactly what you’d expect from someone who would later live and breathe Microsoft Cloud and AI, but that project shaped how I think about product development to this day.

    Our mission: use conjoint analysis to understand why marzipan bars live in the shadow of other flavors and how to design a bar that people would actually pick. Today, in a world of digital products, AI features and app marketplaces, the lessons from that study are still surprisingly relevant.


    From marzipan bars to modern product strategy


In our seminar, we worked on the very glamorous category of “Riegelware” – those snack bars you see at the checkout, also known as “Quengelware” in Germany because kids spot them, beg for them, and parents eventually give in.

    The brief sounded simple:
    Why do marzipan bars underperform compared to other flavors, and how could we change that?

    To answer that, we didn’t just ask, “Do you like marzipan, yes or no?” We treated it like a full new product development process:

    • Identify customer needs and segments
    • Understand competitors and the overcrowded snack shelf
    • Build different product concepts (flavor, coating, size, price, add-ons, packaging)
    • Test those concepts using conjoint analysis before ever launching a new bar

    If you replace “chocolate bar” with “SaaS feature” or “AI add-on”, you get the same basic pattern that modern product teams and startups still follow today: generate ideas, refine concepts, test before launch, and reduce the chance of a flop.

    Back then, the numbers were brutal: in some FMCG categories, over 70% of new products failed. Concept tests and conjoint analysis were one way to reduce the odds of burning budget on things nobody really wanted. Today, with app stores and cloud services overflowing, the failure rate hasn’t magically disappeared—it just moved into the digital space.


    What conjoint analysis really does (in human language)


    Conjoint analysis sounds like something you’d only do with three cups of coffee and a statistics textbook next to you. In reality, it answers a very human question:

    “When people choose between products, what really matters—and how much?”

    Instead of asking “Do you like marzipan?”, we showed respondents different combinations of product attributes:

    • Price (for example: €0.49, €0.59, €0.69, €0.79)
    • Chocolate coating (milk, dark, no coating)
    • Flavor (plain chocolate, cream filling, caramel, nougat, marzipan, coconut, coffee, …)
    • Portioning (one bar, two pieces, multiple pieces)
    • Weight (small snack vs. bigger bar)
    • Add-ons (no add-on, biscuit, wafer, nuts, cereals, fruit)

    Participants chose which bar they would buy from these combinations. From those repeated choices, we used hierarchical Bayes estimation to compute part-worth utilities—essentially, how much each feature and value increases or decreases the likelihood of a choice.
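The actual study used hierarchical Bayes estimation, which is well beyond a blog post. But the core intuition fits in a few lines: attribute levels that keep winning choices get high utilities, levels that keep losing get low ones. Here is a deliberately crude counting-based sketch of that idea – the choice data and the scoring rule are invented for illustration, not the study's method or numbers:

```python
from collections import defaultdict

# Toy choice data: each trial shows two bar concepts; the respondent picks one
# (0 = first concept, 1 = second). All attributes and choices are made up.
trials = [
    ({"flavor": "chocolate", "price": 0.59}, {"flavor": "marzipan", "price": 0.59}, 0),
    ({"flavor": "marzipan", "price": 0.49}, {"flavor": "chocolate", "price": 0.79}, 1),
    ({"flavor": "caramel", "price": 0.69}, {"flavor": "marzipan", "price": 0.69}, 0),
    ({"flavor": "marzipan", "price": 0.59}, {"flavor": "coconut", "price": 0.59}, 0),
]

def part_worths(trials):
    """Crude part-worth proxy: wins minus losses per attribute level,
    normalised by appearances. Real conjoint fits a choice model
    (e.g. hierarchical Bayes logit), but the intuition is the same:
    levels that drive choices score higher."""
    score = defaultdict(float)
    seen = defaultdict(int)
    for a, b, choice in trials:
        winner, loser = (a, b) if choice == 0 else (b, a)
        for attr, level in winner.items():
            score[(attr, level)] += 1
            seen[(attr, level)] += 1
        for attr, level in loser.items():
            score[(attr, level)] -= 1
            seen[(attr, level)] += 1
    return {k: score[k] / seen[k] for k in seen}

utils = part_worths(trials)
# Marzipan lost three of its four appearances, so its utility comes out negative.
print(utils[("flavor", "marzipan")])  # -0.5
```

Even in this toy version you can see the mechanism: marzipan's weak utility emerges from repeated trade-off decisions, not from anyone saying "I dislike marzipan" directly.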

    Today, product teams do something very similar, just with prettier dashboards:

    • A/B tests in apps instead of paper questionnaires
    • Feature flags instead of hypothetical flavors
    • Data science pipelines instead of SPSS + seminar room

    But the logic is the same: people make trade-offs, and instead of guessing, you measure those trade-offs.


    What people really wanted and why marzipan struggles


    When we crunched the numbers, some patterns were wonderfully intuitive—and some were a bit painful for marzipan fans.

    First, the obvious one: price matters.
    As expected, lower prices generated higher utilities. Between €0.59 and €0.69, many respondents were nearly indifferent; below that, the price became a real positive driver. That’s still true today: even in premium niches, price elasticity is very real, especially in crowded categories like snacks or app subscriptions.

    Flavor-wise, the data was crystal clear:

    • Classic chocolate flavor was the top favorite
    • Cream fillings performed very well, especially among women
    • Caramel and nougat were also strong
    • Marzipan, coconut, and coffee flavors scored significantly lower on average

    Marzipan wasn’t a total disaster—but it clearly played in the second league. That already hinted why marzipan bars sit in the corner while chocolate, caramel and nougat dominate the center shelf.

    For other attributes we saw similar patterns:

    • Milk chocolate coating beat dark chocolate and “no coating” comfortably
    • A bar split into two pieces felt just right—easy to share or save, but not over-fragmented
    • Bigger weights increased perceived value up to a point; beyond about 65–80g, the utility started to level off or drop
    • Add-ons like plain or biscuit were preferred over cereals and especially over fruit pieces, which scored poorly—if people reach for a chocolate bar, they apparently don’t want disguised health food

    Interestingly, when we split the data by gender, we didn’t get a completely different world—but we did see nuanced differences:

    • Women were more price-sensitive overall
    • Men cared more about size (very small bars scored worse with them)
    • Women rated cream fillings and cereal components higher
    • Men leaned more towards nuts and kept a stronger preference for milk chocolate

    These are exactly the kind of insights that still drive segmentation and positioning today: the product that works best for one segment might not be the winner for another.


    Why concept tests still matter in a world of AI and cloud


    You might ask: “Nice snack bar story, Uwe, but what does this have to do with my cloud transformation or AI roadmap?”

    A lot.

    The mechanics haven’t changed:

    • In consumer goods, we mix price, flavor, packaging, size
    • In software, we mix features, UX flows, pricing models, support levels
    • In cloud and AI, we mix service tiers, data residency, AI capabilities, compliance guarantees

    The risks haven’t changed either. Whether it’s a marzipan bar that nobody buys or a cloud product nobody activates, launching the wrong thing at scale is expensive. That’s why concept tests, conjoint analysis, and structured experiments are still gold—especially when you move fast with cloud-native services and AI features.

    The difference today: we don’t have to wait weeks for survey data and manual calculations. With Microsoft’s ecosystem and modern analytics, we can:

    • Run near real-time experiments across regions and segments
    • Feed telemetry into product decision loops
    • Use AI to detect patterns in usage and preference
    • Combine classic survey-based research with behavioral data from real users

    In other words: the snack-bar methodology grew up, moved to Azure, and learned to work with streaming data and AI. But the question it answers is still the same:

    “What combination of attributes gives this product the best chance of success—for whom?”


    The limits of conjoint analysis (and why humility is part of good product work)


    Even in our student project, we ran into the limitations that every serious study faces—and those are still very relevant for today’s product teams.

    First, respondent fatigue. Some participants told us quite openly that the survey felt too long and that they stopped paying attention to details like price toward the end. That’s the reality in many research setups: people get tired, take shortcuts, and the data becomes noisier.

    Second, the “number of levels” effect. Attributes with more levels often look more “important” in the analysis, simply because there are more utility steps between the best and worst option. That can distort perceived importance and tempt decision-makers to tweak the wrong lever first.

    Third, the assumption of compensatory decision-making. Conjoint models often assume that people mentally add and subtract utilities (“this flavor is worse, but the price is better, so overall I still pick it”). Real humans often don’t behave like that. They use heuristics:

    • “Never pay more than €x for a bar”
    • “Always pick milk chocolate”
    • “No fruit pieces in my chocolate, ever”

    Later research showed that only a portion of respondents truly follow additive, compensatory rules. Others use threshold-based or simplified decision strategies. That doesn’t make conjoint useless—but it means you should treat it as one strong lens, not the single source of truth.
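The difference between the two decision styles is easy to show in code. This sketch contrasts an additive (compensatory) rule with a threshold heuristic for the same hypothetical bar – the utility numbers are invented:

```python
# One bar, two decision rules (illustrative utilities, not study data).
bar = {"flavor_utility": -0.4, "price_utility": 0.9,
       "flavor": "marzipan", "price": 0.49}

def compensatory(bar):
    # Additive model: a very good price can offset a weaker flavor.
    return bar["flavor_utility"] + bar["price_utility"] > 0

def heuristic(bar):
    # Threshold rule: "no marzipan, ever" -- price never enters the decision.
    if bar["flavor"] == "marzipan":
        return False
    return bar["price"] <= 0.69

print(compensatory(bar), heuristic(bar))  # True False
```

Same bar, opposite verdicts. A conjoint model built on the additive assumption will systematically overestimate demand among the threshold deciders – which is exactly why you should treat it as one lens among several.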

    Translate that to today’s world and you get a clear message:

    No matter how shiny your models, telemetry dashboards, or AI assistants are—reality is always messier than the model.

    That’s why the best product teams combine:

    • Quantitative modeling (conjoint, telemetry, funnel data)
    • Qualitative insights (interviews, usability tests, field research)
    • Continuous validation after launch (usage metrics, churn, feedback)

    From marzipan bars to cloud products: what I still use from this study


    Looking back at this seminar from today’s perspective, when I work with enterprises on Microsoft Cloud, Azure, AI and modern application architectures, a few core principles have stayed with me:

    • Don’t fall in love with ideas, fall in love with evidence. Marzipan might be your personal favorite, but if the preference structure says “chocolate + cream + fair price”, that’s the direction your mainstream product should explore—unless you consciously target a niche.
    • Design for segments, not for averages. The “average” respondent in our study didn’t really exist. Men and women had different priorities; in real markets, age, income, context, and use cases add even more layers. In cloud and SaaS, that’s your enterprise vs. SMB, regulated vs. non-regulated, core vs. edge workloads.
    • Prototype on paper before you prototype in code (or in factories).
      A well-designed concept test can kill bad ideas before they cost real money. Today, that might be a combination of user story mapping, Figma prototypes, simulated pricing pages, or low-fidelity feature toggles. Same mindset, different tools.
    • Accept that no method is perfect—but disciplined imperfection beats guessing.
      Yes, conjoint has theoretical weaknesses. Yes, survey fatigue is real. But a structured, data-informed view of preferences is still dramatically better than the HIPPO (Highest Paid Person’s Opinion) method.

    In a weird way, that marzipan project was my first serious lesson in data-informed product leadership—long before cloud economics, FinOps, or AI-driven analytics came into my daily work.

    Stay clever. Stay customer-obsessed. Stay insight-driven.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how market research, data-driven product design, and modern cloud strategy come together? Follow my journey on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • Azure tags: your secret weapon against cloud chaos (and surprise bills) 🔖

    Azure tags: your secret weapon against cloud chaos (and surprise bills) 🔖


    When you start small in Azure, everything still fits in your head. A handful of resource groups. A few VMs. Maybe a storage account, a web app, some databases. Then projects scale, teams grow, and suddenly your subscription looks like a junk drawer: full, valuable – and completely unstructured.

    That’s where Azure tags step in. Not as a “nice to have”, but as a core building block for FinOps, governance, and long-term maintainability. Let’s walk through how tagging really works today, why it matters for cost control, and how you can build a practical, enterprise-ready tagging strategy that your teams will actually follow.


    Resource groups alone are not a strategy


    Resource groups are often the first organizing principle people reach for in Azure: one resource group per app, per environment, or per department. That’s a good starting point – but it’s strictly one-dimensional.

    You can group by application or by environment or by department, but not all of them cleanly at once. The moment you ask questions like:

    • “Show me all production costs for Marketing across all apps.”
    • “Which resources belong to Project X across subscriptions?”
    • “Which test workloads could we shut down on weekends?”

    …resource groups alone hit a wall.

    Azure tags are designed exactly for this multi-dimensional view. Every supported Azure resource can carry multiple tag key–value pairs like:

    • Environment = Prod
    • CostCenter = 4711
    • Owner = Marketing-Team
    • Application = Online-Shop

    These tags aren’t just cosmetic. They flow into Azure Resource Graph, Azure Policy, Azure Monitor – and, most importantly, into Cost Management + Billing, where you can slice and dice spend by tag for showback, chargeback, and optimization.
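Cost Management does this grouping for you in the portal, but the underlying operation is simple enough to sketch. Here is what "slice spend by tag" looks like over rows as they might appear in a cost export – the resources, amounts, and row shape below are made up for illustration:

```python
from collections import defaultdict

# Simplified rows, loosely modelled on a cost export (invented data).
cost_rows = [
    {"resource": "vm-shop-01",  "cost": 120.0,
     "tags": {"Environment": "Prod", "CostCenter": "4711"}},
    {"resource": "vm-shop-dev", "cost": 40.0,
     "tags": {"Environment": "Dev", "CostCenter": "4711"}},
    {"resource": "sql-crm",     "cost": 200.0,
     "tags": {"Environment": "Prod", "CostCenter": "4712"}},
    {"resource": "vm-orphan",   "cost": 15.0, "tags": {}},  # untagged spend
]

def cost_by_tag(rows, tag_key):
    """Group spend by one tag dimension; untagged spend is surfaced, not hidden."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["tags"].get(tag_key, "<untagged>")] += row["cost"]
    return dict(totals)

print(cost_by_tag(cost_rows, "CostCenter"))
# {'4711': 160.0, '4712': 200.0, '<untagged>': 15.0}
```

Note the explicit `<untagged>` bucket: making unallocated spend visible is half the point of a tagging strategy.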


    Tagging as the backbone of FinOps


    If FinOps is about answering “What are we spending, and why?”, tagging is the metadata layer that makes those answers possible. Without a consistent tagging model, cloud cost management quickly degenerates into guesswork and Excel archaeology.

    A solid FinOps-ready tagging model usually supports at least these dimensions:

    • Financial accountability – Who pays for this? (CostCenter, BusinessUnit)
    • Technical context – What is this? (Application, Service, Workload)
    • Lifecycle – Where does it run and how critical is it? (Environment, Tier, Criticality)
    • Ownership – Who do I ping when this explodes? (Owner, Squad, ProductTeam)

    Once you have these tags applied consistently, Azure Cost Management lets you:

    • Build dashboards by cost center, environment, or application
    • Run showback/chargeback reports by business unit or product line
    • Spot anomalies (for example: “Why did Environment = Dev costs jump 40% last week?”)
    • Identify zombie resources – things with no owner or no meaningful tag at all

    This is where tagging and FinOps intertwine: a good tagging strategy makes cost allocation transparent; FinOps practices make sure that transparency leads to action – budget controls, right-sizing, and better design decisions.


    Designing an Azure tagging strategy that works in real life


    The biggest mistake I see in enterprises: they either have no tagging rules, or they try to define 25 tags from day one and then fail to enforce any of them. Both extremes break.

    In practice, an effective tagging strategy for Azure follows a few simple principles:

    Start minimal, but mandatory
    Pick 4–6 tags that are non-negotiable for every resource. For example:

    • Environment (Prod / NonProd / Dev / Test)
    • Application or Service
    • Owner or Squad
    • CostCenter or BusinessUnit
    • Optional: DataClassification or Criticality for security & DR planning

    If a tag doesn’t drive a report, a policy, or a decision, it’s probably not a “must-have” tag.

    Standardize values, not just keys
    Tags only help if their values are consistent. Prod, Production, and PROD are three different values for Azure Cost Management. Define an allowed value list per tag (for example in a central Confluence / SharePoint page) and keep it short and well governed.

    Enforce tags as part of the platform – not as an afterthought
    Relying on “please remember to tag your VMs” never scales. Use the platform:

    • Azure Policy to deny or append tags at deployment time
    • Bicep/ARM/Terraform modules that include tags by default
    • Azure DevOps / GitHub Actions pipelines that fail if required tags are missing
    • Azure Resource Graph queries and dashboards to track untagged spend over time

    Your goal: tagging should feel like “how we deploy here”, not an extra governance checkbox.
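To make the pipeline idea concrete, here is a minimal sketch of a required-tag gate such as a CI step might run before deployment. The required tag set and resource shape are assumptions for illustration; in a real setup the resource list would come from your IaC plan or a Resource Graph query:

```python
import sys

REQUIRED_TAGS = {"Environment", "Application", "Owner", "CostCenter"}

def missing_tags(resource):
    """Return the required tags a resource is missing, sorted for stable output."""
    return sorted(REQUIRED_TAGS - set(resource.get("tags", {})))

def gate(resources):
    """Return a non-zero exit code if any resource lacks required tags,
    printing one DENY line per offender to stderr."""
    failures = {r["name"]: m for r in resources if (m := missing_tags(r))}
    for name, missing in failures.items():
        print(f"DENY {name}: missing tags {missing}", file=sys.stderr)
    return 1 if failures else 0

resources = [
    {"name": "vm-shop-01", "tags": {"Environment": "Prod", "Application": "Online-Shop",
                                    "Owner": "Marketing-Team", "CostCenter": "4711"}},
    {"name": "vm-untagged", "tags": {"Environment": "Dev"}},
]
print(gate(resources))  # 1 -- the second VM blocks the deployment
```

The same check belongs in Azure Policy with a deny effect; the pipeline version just catches the problem earlier and cheaper.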

    Bake tagging into your operating model
    Tags are not a one-off “project”. They evolve with your organization. Product lines change, teams merge, regulations appear. Build simple routines:

    • Monthly review of tag coverage (% of spend correctly tagged)
    • Quarterly review of tag keys and values (retire unused keys, avoid duplicates)
    • Clear ownership: one platform / Cloud Center of Excellence team maintains the global tagging standard; product teams apply it in their templates and IaC

    This turns tagging into a living part of your cloud operating model instead of a forgotten slide in a kickoff deck.


    From tagging to insight: concrete Azure examples


    Let’s make this tangible and connect it back to your original draft on organizing resources with tags. In a typical Azure landing zone, you might:

    • Organize resources into resource groups by workload and environment (for example: rg-shop-prod, rg-shop-dev)
    • Use management groups to separate business units or regions
    • Overlay everything with tags for cost, ownership, and lifecycle

    Some practical patterns that work well in enterprise environments:

    Align tags with cost analysis
    Use CostCenter, BusinessUnit, and Environment consistently, then:

    • In Cost Management + Billing, group costs by CostCenter to drive showback
    • Filter by Environment = NonProd to hunt for obvious savings (idle dev/test, oversized VMs)
    • Combine with Budgets and alerts to notify owners when tagged spend crosses thresholds

    Use tags as automation levers
    Tags are also fantastic control knobs:

    • ShutdownSchedule = 20:00-06:00 → a runbook or Logic App shuts down all matching VMs off-hours
    • BackupTier = Gold / Silver / Bronze → automation applies different backup or retention policies
    • PatchWindow = Sun-22:00 → patch orchestration pipelines pick the right batch

    Here, tags directly connect business intent (“this is a non-critical dev system”) with automated technical behavior (for example: aggressive off-hours shutdown).
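The parsing behind a shutdown runbook is trivial but has one classic gotcha: windows like 20:00-06:00 wrap past midnight. A sketch of how an automation job might interpret that hypothetical ShutdownSchedule tag value:

```python
from datetime import time

def in_shutdown_window(schedule, now):
    """Parse a 'HH:MM-HH:MM' tag value and check whether `now` falls inside,
    handling windows that wrap past midnight (like 20:00-06:00)."""
    start_s, end_s = schedule.split("-")
    start = time(*map(int, start_s.split(":")))
    end = time(*map(int, end_s.split(":")))
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps midnight

print(in_shutdown_window("20:00-06:00", time(23, 30)))  # True  -> stop the VM
print(in_shutdown_window("20:00-06:00", time(9, 0)))    # False -> leave it running
```

In Azure this logic would typically live in an Automation runbook or Logic App that queries VMs by tag and calls deallocate; the tag stays the single source of business intent.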

    Support security and compliance
    In security and compliance work, you rarely look at a single resource – you look at classes of resources:

    • DataClassification = Confidential, Regulation = GDPR, or Industry = Healthcare
    • In Microsoft Defender for Cloud, you can then scope recommendations, policies, and alerts to specific tags.

    This makes it easier to argue with auditors: “Yes, we know exactly which resources hold personal data, how they’re protected, and what they cost.”

    Connect tagging with your FinOps practice
    Finally, map your tags into your FinOps reporting:

    • Use Owner or Squad to power showback dashboards per product team
    • Use Application to compare cost per feature or microservice over time
    • Use Environment to track the Prod / NonProd cost ratio and set targets
      (for example: non-prod should not exceed 30% of production spend)

    Over time you’ll notice a cultural shift: engineers and product owners start to talk about cost as a first-class signal – exactly what FinOps wants.


    Conclusion: tagging is boring… until it saves you millions


    No one gets into cloud engineering because they dream of defining CostCenter values. Tagging can feel mundane compared to shiny AI services or Kubernetes clusters. But from an enterprise perspective, tags are the quiet foundation of governance, transparency, and cost control in Azure.

    The good news: you don’t need a perfect tagging model to start. You just need a consistent and enforced one that reflects how your organization actually works – financially, technically, and operationally. From there, FinOps reporting, automation, and optimization all become dramatically easier.

    If you’re at the beginning of your Azure journey, start tagging now. If you’re already at scale and drowning in untagged resources, your future self will thank you for investing in a clean tagging strategy today – before the next budgeting cycle asks uncomfortable questions about “who is actually paying for all of this?”.

    Stay clever. Stay cost-aware. Stay well-tagged.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how Azure tagging, FinOps, and enterprise cloud strategy fit together? Follow my journey on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • Netflix runs on NES, a love letter to engineering culture. 💾🎮

    Netflix runs on NES, a love letter to engineering culture. 💾🎮


    Back in the late 80s and early 90s, my world was floppies, cartridges, and cathode-ray tubes. Today, I spend my time in the Microsoft cloud universe, but every now and then a story pops up that bridges both worlds so perfectly that I just have to smile.

As part of a Netflix Hack Day in 2015, Netflix engineers stuffed a tiny, experimental Netflix client into an NES cartridge and made the 1980s console display a (very limited) version of the streaming UI. No, this was never meant for production. Yes, it was gloriously over-engineered. And that’s exactly why it matters.

    In a world where we talk about microservices, distributed systems, and cloud-native everything, this project is a reminder: at the heart of all that complexity are people who genuinely enjoy pushing boundaries just to see what’s possible.


    What it takes to stream a video on 1980s silicon


    From an engineering perspective, Netflix on an NES is a masterclass in constraints.

    You’re trying to make a modern streaming experience talk to a console that was designed for 8-bit games, not TCP/IP and adaptive bitrate video. That forces some fascinating architectural decisions:

    You have a tiny CPU, almost no RAM, and strict timing rules for rendering graphics to the TV. The console doesn’t know what HTTP is, let alone HTTPS. So you end up with a split architecture: modern networking and decoding on one side, the NES acting almost like a thin client on the other.

    In practical terms, this means:

    • You treat the NES like a deterministic graphics terminal.
    • You design ultra-lean protocols to ship only the data absolutely needed to draw UI states.
    • You squeeze rendering logic into a tiny footprint, where every byte and CPU cycle counts.

    This is the exact opposite of “just throw more resources at it.” It’s disciplined, creative engineering under extreme constraints. The kind of thinking that also helps when you’re optimizing real production systems—whether that’s a streaming service, an enterprise SaaS platform, or a high-scale API.
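Nobody outside that Hack Day team knows the real wire format, but the "ultra-lean protocol" idea is easy to illustrate. Here is an invented example of a 4-byte message a modern host could send to a memory-starved thin client: no strings, no JSON, just the bits needed to redraw the UI:

```python
import struct

# Illustrative only: a made-up 4-byte UI-state message -- screen id,
# selected row, selected column, and a playback flag. Everything the
# client needs to redraw, and nothing else.
MSG = struct.Struct("<BBBB")

def encode_ui_state(screen, row, col, playing):
    return MSG.pack(screen, row, col, 1 if playing else 0)

def decode_ui_state(payload):
    screen, row, col, flags = MSG.unpack(payload)
    return {"screen": screen, "row": row, "col": col, "playing": bool(flags)}

msg = encode_ui_state(screen=2, row=1, col=3, playing=False)
print(len(msg), decode_ui_state(msg))
# 4 {'screen': 2, 'row': 1, 'col': 3, 'playing': False}
```

Four bytes per state change is the kind of budget you think in when your client has kilobytes of RAM – and that discipline transfers directly to chatty APIs and constrained edge devices today.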


    Why these “useless” hacks are incredibly useful for teams


    On paper, an NES-based Netflix client doesn’t move any business KPI. It doesn’t ship to customers. It doesn’t bring in direct revenue.

    But for engineering organizations, experiments like this are pure gold.

    They create a playground where ambitious developers can:

    • Try ideas they’d never be allowed to introduce into the main product.
    • Touch different parts of the stack—from hardware constraints to protocol design.
    • Collaborate across disciplines (backend, graphics, tooling, UX) outside of normal silos.

    That’s how you keep top talent engaged. You don’t just give them tickets in a backlog—you give them room to explore. You let them build “impossible” things that make their inner 12-year-old geek grin. 😄

    Morale and motivation in engineering teams don’t come from posters on the wall. They come from moments like this: staying late at a Hack Day, watching a 30-year-old console render a modern UI and thinking, “We did that.”

    Those are the stories people tell new hires. Those are the screenshots they keep in their personal portfolios. And that energy inevitably spills back into the core product.


    What this says about modern software architecture


    Underneath the fun, the Netflix NES hack also says something deeper about how we design software.

    Modern software architecture is all about decoupling:

    • Decoupling frontends from backends
    • Decoupling logic from presentation
    • Decoupling clients from specific hardware platforms

    If you can make Netflix talk to an NES, what you’re really proving is that your core platform can be abstracted away from the device. The NES is just an extreme, retro example of a client.

    Change the wrapper, keep the core.

    That same pattern is at the heart of:

    • Multi-device experiences (TV, console, browser, mobile)
    • API-first product design
    • Experimentation with new interaction models (think wearables, embedded screens, cars)

    A hack like this is a playful stress test of your own architecture. If your service can adapt to something as bizarre as a cartridge-based console, you’re probably doing something right in your abstractions.


    Hacking as a culture signal, not just a side project


    There’s another angle I love here: this kind of experiment sends a message, both internally and externally.

    Internally, it tells engineers:

    • “We trust you to play.”
    • “We value curiosity and weird ideas.”
    • “We know not everything needs an immediate business case.”

    Externally, it tells candidates and the tech community:

    • “This is a place where you can build crazy things with smart people.”
    • “We care deeply about craft, not just shipping features.”

    If you want to attract and retain great engineers and architects, you need exactly that kind of culture. Compensation and tech stack matter, of course—but the ability to work on mind-bending side projects with colleagues is a huge differentiator.

    In a way, Netflix on NES is a recruiting poster disguised as a hack.


    Why this still matters beyond 2015


Even though it dates back to May 2015, this hack teaches a timeless lesson: the best engineering teams don’t just consume technology—they remix it. They connect eras. They let modern platforms talk to vintage hardware. They treat constraints as creative prompts, not blockers.

    Whether you’re building enterprise cloud architectures on Azure, designing highly scalable microservices, or just tinkering in your spare time: experiments like “Netflix on an NES” remind us why many of us fell in love with technology in the first place.

    Because sometimes, the most inspiring projects aren’t the ones that ship—they’re the ones that show what could be possible if we keep playing.

    Stay clever. Stay curious. Stay experimental.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how retro hardware, modern cloud services, and smart integration layers can work together? Follow my journey on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge. Or ping me directly—because building the future works better as a team.

  • Capgemini is GitHub EMEA Partner of the Year – Why This Matters More Than Just a Trophy

    Capgemini is GitHub Partner of the Year –
    This is More Than Just a Trophy


    Sometimes the news hits your inbox, and you just stop for a second, smile, and think: “Yes. That’s exactly where we wanted to go.”

GitHub has officially named Capgemini the 2025 EMEA Services and Channel Partner of the Year. This award recognizes partners that drive innovation, collaboration, and real impact for developers and enterprises across the region. And this year, Capgemini is on that list.

    For me as “Mr. Microsoft” inside Capgemini, this is not just a nice badge for the company website. It is a very clear signal: our strategy around Microsoft Cloud, GitHub, and AI-powered development is working. For our teams and for our clients.


    Why this award is a big deal for our clients


    On the surface, “EMEA Services and Channel Partner of the Year” sounds like something mainly for partner managers and sales decks. Underneath, it tells a very practical story for CIOs and engineering leaders:

    You can build your entire modern software factory on GitHub – strategy, tooling, process – and have a partner at your side who knows how to industrialize it at enterprise scale.

    For our clients, this recognition means:

• We have proven experience rolling out GitHub across large, complex organizations – not just small pilot teams.
• Capgemini knows how to align GitHub with Azure, Microsoft 365, and security requirements, instead of treating it as a “standalone dev tool”.
• Our experts help teams go beyond source control and use the full GitHub platform: Actions, Advanced Security, Packages, Copilot, and increasingly AI-powered DevSecOps patterns.

In other words: this award is not about us. It is about the trust that enterprises can place in a joint GitHub plus Capgemini plus Microsoft story.


    Developers, GitHub, and the Microsoft cloud


If you look at where software engineering is heading right now, one thing is obvious: the center of gravity has moved to GitHub.

    Code lives there.
    Collaboration lives there.
    Security feedback lives there.
    AI-assisted development lives there.

    GitHub is the place where modern engineering teams spend their day. Microsoft Azure is where those workloads run, scale, and connect into the rest of the enterprise. Being recognized as GitHub’s EMEA partner of the year means we are trusted to connect those worlds and make them work as one coherent platform.

    That includes topics like:

    • Designing end-to-end CI/CD with GitHub Actions, Azure DevOps where needed, and Azure as the target runtime.
    • Bringing GitHub Advanced Security and Microsoft Defender for Cloud together into one security narrative.
    • Rolling out GitHub Copilot in a way that fits each client’s compliance, governance, and developer culture.

    For teams, this is where the magic happens. Less context switching, more automation, and a development experience that really feels “cloud native” instead of stitched together.


    What this means for me as “Mr. Microsoft”


    On a personal level, this award feels like a checkpoint on a longer journey.

    For years I have been talking to clients about moving from “just using Git” to building a real developer platform – with GitHub, Azure, the Microsoft intelligent cloud, and now increasingly AI agents and Copilot in the mix.

    When GitHub now says, in effect, “Capgemini is one of our key partners for EMEA,” it reinforces exactly that mission:

    Help enterprises transform how they build software.
    Make the developer experience first-class.
    Anchor everything in a secure, scalable Microsoft Cloud foundation.

    Inside Capgemini, it is also a huge motivation boost for all our Microsoft and GitHub practitioners. From the engineers who automate the pipelines, to the architects who design secure landing zones, to the change managers who help teams adopt new ways of working – this award belongs to all of them.


    Where we go from here


    An award is nice. What really matters is what we do with it.

    For me, the next steps are clear:

    • Double down on GitHub plus Azure as the default backbone for application modernization and greenfield builds.
    • Bring more AI into the development lifecycle in a responsible way: Copilot, AI-powered security, and eventually fleets of AI agents running on Azure that support engineering teams instead of replacing them.
    • Share more stories, patterns, and lessons learned from real client projects – so that others can build on them.

    As “Mr. Microsoft,” I will continue to focus on exactly this: connecting the dots between GitHub, Microsoft Cloud, and concrete business outcomes. This award is a strong sign that we are on the right track – but the most interesting work is still ahead of us.

    Stay clever. Stay collaborative. Stay shipping.
    Your Mr. Microsoft,
    Uwe Zabel.


    🚀 Curious how GitHub, Microsoft Azure, and real-world developer productivity fit together in practice? Follow my journey on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.


  • Play “The Legend of Zelda: A Link to the Past” in Your Browser – Nostalgia Gaming Meets Modern Web Tech



    Play “The Legend of Zelda” in Your Browser
    Nostalgia Gaming Meets Modern Web Tech


    You know that feeling when an old melody from your childhood suddenly plays and your brain instantly teleports back 20+ years? That’s me every time I hear the Zelda intro theme. 🧝‍♂️🎶

    Many of us grew up saving Hyrule one dungeon at a time – and suddenly realize in 2025 that those pixelated adventures are still very much part of our DNA. The fun twist today: you don’t need an old SNES or even a Switch Online subscription to revisit them. You can literally fire up The Legend of Zelda: A Link to the Past… in your browser. 🎮

If you love A Link to the Past as much as I do, head over to:

    Play The Legend of Zelda Online

    Welcome to Hyrule-as-a-Service.


    Zelda in the browser:
    nostalgia meets web technology


Playing A Link to the Past in the browser is more than “oh cool, it runs in Edge.” It’s a beautiful collision of three things I really care about:

    • timeless game design
    • the evolution of web technology
    • and digital preservation

In the 90s, A Link to the Past squeezed an entire epic into a 16-bit cartridge. Today, modern JavaScript, HTML5 canvas, and clever emulation techniques can recreate that same experience inside a tab, right alongside your Outlook Web, Azure Portal, and Teams windows.

    For me, that’s the magic: the same browser I use to design cloud architectures and write about Microsoft technology is now also a time machine back to my childhood Hyrule. No extra hardware, no emulator installation marathons. Just click, load, play.

Don’t get me wrong. There are a lot of browser games these days. But this one in particular bridges the years back to my childhood, and that makes it crystal clear how far technology has evolved.


    From cartridges to canvas:
    why this is technically exciting


    From a technologist’s point of view, running a 90s console classic in the browser is a brilliant showcase of how far the web platform has come. Back when A Link to the Past launched, a website was mostly text and a few images. Today, the browser is effectively a cross-platform runtime:

    • JavaScript drives the game logic and emulation
    • HTML5 canvas (and sometimes WebGL) handles rendering
    • Modern browsers provide input, audio, and performance that’s “good enough” for fast-paced games
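To make the “canvas handles rendering” point concrete, here is a minimal sketch of how an emulator’s indexed-color framebuffer could become pixels on an HTML5 canvas. The `emulator`, `framebuffer`, and `palette` names are illustrative assumptions, not the API of any specific Zelda port:

```javascript
// Convert an indexed-color framebuffer (one palette index per pixel,
// as a typical SNES-style emulator core exposes it) into RGBA bytes
// suitable for an ImageData buffer. Names here are hypothetical.
function framebufferToRGBA(framebuffer, palette, out) {
  for (let i = 0; i < framebuffer.length; i++) {
    const [r, g, b] = palette[framebuffer[i]]; // look up the pixel's color
    out[i * 4] = r;       // red
    out[i * 4 + 1] = g;   // green
    out[i * 4 + 2] = b;   // blue
    out[i * 4 + 3] = 255; // fully opaque
  }
  return out;
}

// In the browser, the render loop would then look roughly like this:
// const ctx = canvas.getContext("2d");
// const image = ctx.createImageData(256, 224); // SNES output resolution
// function frame() {
//   emulator.runFrame(); // hypothetical emulator core
//   framebufferToRGBA(emulator.framebuffer, emulator.palette, image.data);
//   ctx.putImageData(image, 0, 0);
//   requestAnimationFrame(frame); // browser paces us at ~60 fps
// }
// requestAnimationFrame(frame);
```

The heavy lifting (CPU and PPU emulation) lives elsewhere; the web platform’s contribution is that this simple loop, plus keyboard events and Web Audio, is all the “console hardware” the game needs.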

    The HTML5 Zelda map project The Verge highlighted back in 2015 already showed the potential. A fully scrollable, zoomable view of Hyrule, rendered in the browser with no plugins, no Flash, no Java applets. Just standards-based web tech.

    Now, combine those techniques with ROM emulation in JavaScript and you move from “map viewer” to “fully playable game.” That’s not just fan service – it’s a demonstration of how flexible and powerful the browser has become as a universal application layer. The browser has become the primary stage for modern user interfaces – powering both consumer applications and bespoke enterprise software.


    Why replaying A Link to the Past still matters


    You could argue:

    “Uwe, we have Tears of the Kingdom. Why bother with a 16-bit top-down Zelda?”

    Because A Link to the Past is basically game design in its purest, most elegant form. No overwhelming skill trees. No 200-hour open world. Just:

    • clear progression
    • smart dungeon puzzles
    • tight combat
    • and a world that feels handcrafted screen by screen

    Playing it again – this time in a browser – is like looking at the blueprint behind modern Zelda titles. You can see the DNA that would later grow into Breath of the Wild and Tears of the Kingdom. The dual-world mechanic, the non-linear exploration, the feeling that curiosity always gets rewarded. This game was a pioneer in so many ways. And the basic story is still the same in modern Zelda titles.

    And because it runs in a tab, it becomes a low-friction “coffee-break game”:
• Ten minutes of Hyrule between two Teams calls.
• One dungeon after you finish that PowerPoint.
• A quick detour to Kakariko instead of doom-scrolling LinkedIn.

That blend of deep nostalgia and modern convenience is surprisingly powerful. Okay, the keyboard controls are a bit clumsy, but for a quick detour through old memories they are enough.


    What this says about the future of games and the web


For an IT person like me, The Legend of Zelda running in the browser is more than a fun nostalgia hack. It’s basically a metaphor for application modernization: we’re taking something built for very specific infrastructure and giving it new life on a completely different platform, without losing the soul of the original.

    In the enterprise world, we’re doing exactly the same thing with our business apps:

    • We decouple software from fixed infrastructure and move it into containers, PaaS services, WebApps, and managed databases.
    • We turn thick clients into browser-based frontends running on Azure, often wrapped with modern identity, observability, and security.
    • We preserve the core logic and data model, while updating UX, integration patterns, and automation capabilities.

    Zelda in the browser is the retro cousin of that story. The experience matters more than the box it originally shipped in. A good game – just like a good ERP module, a pricing engine, or a custom LOB app – should be able to outlive the platform it was born on. From a pure technology and preservation standpoint, the idea that your childhood adventure and your modern cloud-native workloads can coexist in the same browser window is… kind of wonderful. 💾


    Why this hits home for a lifelong Zelda nerd


    I’ve been playing Zelda since I was young – from the 16-bit era all the way to Breath of the Wild and Tears of the Kingdom. The shift from pixel-perfect 2D to massive open-world sandboxes on modern hardware has been incredible to watch.

    But in my heart, there will always be a special place for that overhead view, the spin-attack, and the feeling of unlocking a new piece of the map one room at a time.

    So when I can open my browser, jump to classicjoy.games, and walk across Hyrule Field without dusting off old hardware… that’s not just nostalgia. It’s proof that good design, supported by evolving technology, can remain accessible for decades.

    And honestly? That gives me hope – not just for games, but for all the digital experiences we’re building today on Azure, Microsoft 365, and the modern web. If we do it right, the things we build now might still be meaningful, and playable or usable, for the next generation.

    Stay clever. Stay nostalgic. Stay playable.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how retro games, modern browsers, and cloud-first experiences intersect? Follow my journey here on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future (and preserving the past) works better as a team.

  • Cloudflare Outage: What Went Wrong And What It Means For Modern Cloud Architectures



    Cloudflare Outage: What Went Wrong And What It Means For Modern Cloud Architectures


    When one config file sneezes and half the internet catches a cold, you know you’ve had a day. Yesterday’s Cloudflare outage was exactly that: a very modern reminder that our digital world hangs together on a surprisingly small number of very critical components – and that even “simple” changes can have global blast radius. 🌍💥

    Below I’ll walk you through what happened, why it matters for large IT landscapes, and what we – as architects, engineers and decision-makers – should take away for security, high availability, and well-architected design.


    What actually happened at Cloudflare?


    On November 18, 2025, Cloudflare experienced a major global outage that rippled across a huge part of the internet. Many sites and services either became very slow, started returning HTTP 500 errors, or simply stopped responding for a while. Platforms affected included X, Spotify, Uber, IKEA, news sites, and several AI services like ChatGPT, Copilot and others that themselves run on hyperscale cloud backends.

    The root cause was not a massive DDoS attack, but something that sounds almost mundane:

    A routine configuration change in a service behind Cloudflare’s bot-mitigation and threat-traffic handling triggered a latent bug. That bug caused the underlying service to start crashing, which cascaded through Cloudflare’s network and produced widespread errors. Cloudflare’s CTO explicitly clarified that this was not an attack, but a bug that had slipped through testing and only surfaced under real-world conditions.

    In other words:

    One config change. One hidden bug. Millions of users suddenly staring at error pages.

The incident lasted under two hours before Cloudflare rolled out a fix, but two hours feel like an eternity when up to 20% of the internet’s websites rely on you.


    Why this outage was such a big deal


    Cloudflare sits in the critical path for a huge portion of global traffic: CDN, DNS, DDoS protection, bot mitigation, zero trust access, you name it. Many companies have Cloudflare between their users and their application – even when the actual app runs on a hyperscaler like Microsoft Azure, AWS or Google Cloud.

    That means:

    If Cloudflare has a bad day, thousands of “perfectly healthy” backends look broken.
    SLAs, error budgets and uptime charts for those backends don’t matter if users never reach them.

    From an enterprise perspective, this outage was a textbook illustration of concentration risk:

    You might already run in multiple regions, on highly redundant infrastructure with auto-healing and blue-green deployments. But if your entire edge story goes through a single external provider, that provider just became one of your biggest single points of failure.


    Security bug or reliability bug?
    Spoiler: both.


    Interestingly, the trouble started in Cloudflare’s bot-mitigation / threat-traffic subsystem – the very part meant to protect customers from malicious traffic.

    That highlights a paradox we often see in large environments:

    Every security feature is also part of your critical path.
    Every mitigation layer is also potential failure surface.

    So we have to think about these dimensions together, not as separate tracks:

    Security, Reliability, Performance, Operations

    For Cloudflare, a configuration change in a security-adjacent component led to a reliability crisis. For us as architects, that’s a reminder to treat:

• Security controls as high-availability components
• Threat-detection systems as production-critical services
• Policy engines as carefully as we treat core APIs

    Security that takes your systems down isn’t security – it is just a different kind of denial-of-service.


    Cloudflare, hyperscalers and the “stack of trust”


    One misconception I still encounter in customer conversations:

    “We are on Azure / AWS / Google Cloud, so we are covered for this kind of thing.”

Nope.

    Most modern architectures actually sit on a layered “stack of trust”:

• At the bottom, hyperscalers like Microsoft Azure, AWS, and Google Cloud provide compute, storage, networking, and managed services.
• On top, providers like Cloudflare deliver edge security, CDN, and performance optimization.
• Then come your own platforms: Kubernetes clusters, PaaS components, data platforms.
• At the top, your business apps and APIs.

    Yesterday’s outage showed that a failure at the edge layer can make all the robust design at the cloud layer effectively invisible to users for a period of time. The cloud may be fine. Your Kubernetes cluster may be humming. But users are still locked out.

    For hyperscalers, this is a double-edged sword:

• On the one hand, outages like this strengthen the argument for first-party services (Azure Front Door, AWS CloudFront, Google Cloud Armor, etc.) and tighter integration across the stack.
• On the other hand, customers will increasingly demand multi-provider strategies at the edge, not just in compute.

    This isn’t “Cloudflare vs hyperscalers” – it’s about understanding your full dependency tree and designing for graceful degradation.


    What this should trigger in large IT environments


    If you run a sizable environment – especially on Microsoft Azure or another hyperscaler – this outage is the perfect excuse to sit down with your architects, SREs and security leads and ask some uncomfortable questions.

    For example:

    Do we have a “plan B” for DNS, routing and WAF in a crisis?

    Do we know exactly which critical user journeys depend on Cloudflare or a similar edge provider?
    If that provider has a 90-minute outage, what actually happens to our business, not just our dashboards?
    Do users see a friendly fallback page, or just raw 500s?

    From a Well-Architected Framework perspective (Azure Well-Architected, AWS Well-Architected, Google Cloud architecture frameworks all share similar pillars), this incident hits several areas at once:

    Reliability: external dependencies as failure domains; chaos testing across providers.
    Security: ensuring security changes and threat-mitigation configs are deployed with guardrails and can be rolled back quickly.
    Operational excellence: clear runbooks for widespread upstream incidents; communication to business stakeholders.

    If your resilience story stops at “we run in two regions”, you are missing a big piece of the picture.


    Designing for failure at the edge


    So what can we actually do differently?

    A few patterns are becoming more and more important in cloud-first architectures:

    Multi-edge or multi-CDN setups
    Some organizations already use two edge networks in an active-passive or active-active design. That is not trivial – DNS, certificates, WAF rules, caching and routing must stay in sync – but for truly critical services it can be worth the complexity.

    Pro-tip: start small. Put one well-defined API or product line behind a dual-edge setup and learn from that experiment before you scale it out.

    Graceful degradation and “known good paths”
    Accept that, once in a while, some upstream will fail. The question is: can you degrade gracefully? For example:

• Show a cached version of content instead of a hard error.
• Offer a simplified, low-dependency status page that bypasses complex edge logic.
• Keep “must-have” services reachable via a simpler, less smart path (even if performance is worse).
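The “show cached content instead of a hard error” idea can be sketched as a stale-on-error fallback. This is a minimal illustration with the fetch function and cache injected as parameters so the pattern is visible without a real service worker; in production you would wire in `fetch` and the browser’s Cache Storage API instead:

```javascript
// Stale-on-error fallback (illustrative sketch, all names hypothetical):
// serve the fresh response when the edge is healthy, the last known good
// response when it isn't, and a minimal fallback page as a last resort.
async function fetchWithFallback(url, fetcher, cache) {
  try {
    const response = await fetcher(url);
    if (!response.ok) throw new Error(`upstream returned ${response.status}`);
    cache.set(url, response); // remember the last known good response
    return response;
  } catch (err) {
    const cached = cache.get(url);
    if (cached) return cached; // degrade gracefully to stale content
    return { ok: false, status: 503, body: "fallback page" }; // last resort
  }
}
```

The design choice worth noting: the cache is written on the happy path and only read on failure, so users never see stale data while the upstream is healthy.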

    Configuration discipline and blast-radius control
    Yesterday was “just” a config rollout gone wrong. That sounds small – until it isn’t.

    Some things we should all be doing religiously:

• Bake critical config into the same pipelines, testing, and approvals as code.
• Use staged rollouts and canaries for security and routing changes, not just for application code.
• Limit the blast radius: if a rule set crashes a service, it should take out a shard or region, not the whole globe.
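The blast-radius idea can be sketched as a deterministic rollout gate: a new config version only applies to shards whose stable hash falls inside the current rollout percentage, so a bad rule set hits 1% or 5% of shards before it can hit the globe. This is an illustrative sketch of the general canary pattern, not Cloudflare’s actual mechanism:

```javascript
// Map a shard ID deterministically into a bucket 0..99. Using a stable
// hash (not Math.random) means a given shard lands in the same bucket
// on every evaluation, so rings only ever widen as the percentage grows.
function stableHash(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // simple deterministic hash
  }
  return h % 100;
}

// A shard receives the new config only if its bucket is inside the ring.
function isInRollout(shardId, rolloutPercent) {
  return stableHash(shardId) < rolloutPercent;
}

// Usage sketch: widen the ring only after health checks pass per stage.
// for (const pct of [1, 5, 25, 100]) {
//   deployTo(shards.filter((id) => isInRollout(id, pct))); // hypothetical
//   awaitHealthChecks();                                   // hypothetical
// }
```

Because the gate is monotonic (a shard inside the 5% ring is, by construction, also inside the 25% ring), a rollback at any stage only ever has to undo what the earlier, smaller stages touched.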

    This is where the Well-Architected mindset stops being a slide deck and becomes a survival skill.


    What this means for you, me, and our cloud future


    For most end users, yesterday was “the internet is broken again” day. For us in IT, it should be another uncomfortable but valuable reminder:

    We live in a world of deeply interconnected platforms. Our users don’t care whether the issue sat in Cloudflare’s bot engine, an Azure region, or a misconfigured Kubernetes ingress. They care that their service was down.

    So our job is not just to pick powerful platforms, but to:

    • Understand the full dependency chain end-to-end
    • Design for security and reliability as a single, shared concern
    • Continuously test what happens when one of those critical pillars fails

    The next outage will come – from some provider, somewhere in your stack. The question is not whether, but how ready you are to ride it out.

    Stay clever. Stay resilient. Stay well-architected.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how global outages, Cloudflare, and modern cloud architectures intersect? Follow my journey here on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • From Lab Bench to Launch Day: What Really Makes Research Spin-Offs Succeed?



    From Lab Bench to Launch Day: What Really Makes Research Spin-Offs Succeed?


    If you’ve ever watched a prototype jump from a university lab into the real world and thought “wow, that escalated quickly,” you’re in good company. Back in my student days at the Christian-Albrechts-Universität zu Kiel I dug deep into spin-off ventures from public research. Today, with a few more battle scars from enterprise IT and cloud programs, the topic feels even more relevant: how do we turn publicly funded knowledge into real companies, real jobs, and real impact?

    Short answer: it’s not luck. It’s a repeatable mix of people, capability, and ecosystem—tuned for speed. Let’s unpack the playbook.


    What Counts as a Research Spin-Off


    A spin-off from public research is a company founded to commercialize knowledge, IP, or prototypes that originated inside universities or public research institutes. Think: novel materials, biotech processes, AI algorithms, robotics, med-tech devices—often “deep tech” with a non-trivial path to market.

    Why it matters:

    • It’s the fastest tech-transfer lane from public investment to private value creation (jobs, exports, tax revenue).
    • Small high-tech firms historically show outsized growth versus incumbents when they hit product-market fit.
    • With the right scaffolding (funding, IP rules, cloud, partners), spin-offs become regional innovation flywheels.

    In plain terms: spin-offs are how curiosity becomes commerce.


    The Strategy Lens: Resources and Capabilities Beat Hype


    In my paper from 2009 I leaned on two classics:

    • Resource-Based View (RBV): sustainable advantage stems from assets that are valuable, rare, hard to imitate, and well organized.
    • Dynamic Capabilities: it’s not just what you have, it’s how fast you sense opportunities, seize them, and reconfigure your business as the market moves.

    For spin-offs, that translates to: hire great people, wrap them in an operating model that learns quickly, and build partnerships that compound your strengths. Hype helps you trend on launch day; capabilities keep you alive in year two.


    Four Drivers You Can Actually Control


    Lots of factors influence success (timing, regulation, luck). Focus on what’s in your hands.

    1) Human Capital: Teams Ship, Papers Don’t

    Spin-offs live or die on the founding team’s skills and chemistry. You need scientific depth and market depth—plus the grit to iterate through uncertainty. The winning pattern I continue to see:

    • A technical founder who can explain the “why now” in crisp business English.
    • A commercially minded co-founder who can price, package, and sell to the first ten customers.
    • An early operator who quietly fixes everything from supplier agreements to compliance checklists.

    Hiring tip: prioritize “learners with throughput.” In a spin-off, speed compounds.

    2) Entrepreneurial Orientation (EO): Decide Fast, Learn Faster

    EO is the cultural fuel—proactiveness, calculated risk-taking, and a bias for experimentation. The best teams frame hypotheses (about customers, pricing, channels), run short cycles, and make small bets that unlock bigger bets. It’s science, just pointed at business.

    3) Network Capability: Your Partners Are Part of Your Product

    University tech-transfer offices, clinical or industry validators, pilot customers, cloud vendors, and manufacturing partners—if you can coordinate that network, you shorten your path to revenue. Strong partners lend credibility when you don’t yet have logos of your own.

    Practical move: map your partner graph early. Know who gives access (to data, users, facilities), who gives trust (brands, regulators), and who gives scale (channels, cloud, factories).

    4) Embeddedness: Build Inside the Right Ecosystem

    Location still matters. Being embedded in a region with labs, funding, testbeds, and anchor customers reduces friction. Tap alumni networks, local industry clusters, and government programs; align your milestones to the grants and procurement cycles that actually exist.


    Funding, IP, and the “First-Customer” Problem


    Most research spin-offs don’t fail because the science is wrong. They fail in the transfer from prototype to product:

    • Funding: Bridge the “valley of death” with staged finance (grants → seed → strategic pilots).
    • IP: Structure licenses cleanly—clarity on fields of use, sublicensing, equity vs. royalty mix—so you can fundraise without legal fog.
    • First Customers: Replace theoretical markets with a concrete pilot that proves a real business outcome (savings, compliance, speed).

    Reality check: your first product is not the paper. It’s the smallest packaged solution a customer will pay for, plus services that make it work.


    Cloud as a Force Multiplier (Hello, Azure 👋)


    Compared to the environment I studied back then, spin-offs now have a superpower: hyperscale cloud.

    • Build faster: managed databases, AI models, DevOps pipelines. There is no need to reinvent the plumbing.
    • Prove compliance: identity, encryption, logging, and policy enforcement are table stakes you can adopt, not rebuild.
    • Scale with grace: from a lab pilot to a national rollout without rewriting your stack.

    If you’re in regulated industries or government-adjacent domains, sovereign cloud options (e.g., EU data boundaries, external key management, partner-operated national clouds) can remove blockers early. The result: you spend your euros on product, not undifferentiated infrastructure.


    A Simple Execution Blueprint


    Not a silver bullet, just a battle-tested sequence that works:

    1. Team up intentionally: complement the science with go-to-market muscle from day one.
    2. Package the first offer: turn the research into a narrowly defined, billable outcome.
    3. Land a reference pilot: choose a lighthouse customer who will speak publicly when you deliver.
    4. Instrument everything: metrics for usage, reliability, and unit economics; learn fast or pivot early.
    5. Lean on the cloud: ship secure, observable, automatable services without slowing R&D.
    6. Grow the network: partners for credibility, capacity, and channels—renew them as you scale.

    Bottom Line


    Great science starts the story. Great execution finishes it. When human capital, entrepreneurial culture, partner networks, and the right ecosystem click together—amplified by a secure cloud foundation—research spin-offs stop being fragile and start becoming flywheels. That’s good for founders, good for regions, and honestly, good for all of us who want to see ideas ship.

    Stay clever. Stay entrepreneurial. Stay connected.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how research spin-offs, Azure, and go-to-market execution intersect? Follow my journey on Mr. Microsoft’s thoughts—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • Thirty-Five Years of Windows: From 3.0 Beginnings to Copilot Days



    Thirty-Five Years of Windows:
    From 3.0 Beginnings to Copilot Days


I was eight years old in 1990 when Windows 3.0 landed with showtime flair. My own journey had started a little earlier on a Commodore Plus/4, typing BASIC line by line from a computer magazine, saving it to tape, and learning the quiet thrill of making a machine do exactly what I wanted. But the day a 486 desktop arrived at my home in the mid-90s and I typed win into the DOS console was a different game.

Suddenly computing wasn’t just consoles, coding, and curiosity. It was a daily operating system for my life: school projects, games, hardware tinkering, the first modem squeals. And then, from the early 2000s onward, the full rush of the internet – first with a dual-channel ISDN connection at 128 kbit/s, later the first DSL line at 768 kbit/s. Since those mid-90s days, Windows has been my instrument, professionally and privately, evolving alongside my career from retail clerk to product manager to cloud architect and now to “Mr. Microsoft.”


    From Program Manager to the Start Button:
    Windows Finds Its Voice


    Windows 3.0 and 3.1 gave structure: Program Manager, File Manager, TrueType fonts, a UI you could actually live in. They made PCs feel less like terminals and more like creative studios. Then Windows 95 put the Start button in our hands and Plug and Play on our desks. Consumer excitement with real utility. Windows 98 kept the web close; Windows ME taught us caution; Windows 2000 and NT proved Windows could be enterprise-serious.

    Windows XP unified home and office with a long, dependable run. Vista stretched the model (not without pain), but it raised the bar for security. Windows 7 refined the everyday—fast, familiar, stable. Windows 8 bet early on touch and modern apps. It wasn’t everyone’s favorite, but it pointed at where devices were heading. Windows 10 turned Windows into a service with evergreen updates. Windows 11 polished the craft—calmer visuals, stronger baselines, modern silicon features. And it set the stage for what matters now: identity, cloud, and AI.

    Windows 3.1 floppy disks and handbook

    The Workbench Grows Up:
    Identity, Devices, and the Cloud-First Shift


Once upon a time, a Windows rollout meant imaging rooms, weekend patch marathons, and hunting drivers on CDs. Today, my daily toolbox looks very different: Microsoft Entra ID as the identity backbone, Intune for zero-touch provisioning, Conditional Access and Defender for endpoint posture, and Windows Update for Business rings that move at the pace of risk. Files aren’t “on a share” anymore. They live where people work – OneDrive, SharePoint, Teams – protected by sensitivity labels, DLP, and encryption at rest and in transit.

    That shift changed how we design. We don’t just deploy machines; we operate a living fabric of identity, devices, and services. Governance isn’t a checklist—it’s design. If you lead an enterprise today, that’s the mindset: make the secure, compliant path the easy, default path.


    From Win32 to the Web to AI Agents:
    The Developer Story Keeps Widening


I cut my teeth on BASIC and learned the Windows APIs that came after: Win32, then .NET, then a world where the browser, REST, and Graph made “Windows development” also mean “cloud development.” Visual Studio and VS Code, GitHub, Dev Box, Azure DevOps and Pipelines – our workbench is no longer a PC; it’s a platform constellation. Today, that constellation includes Copilot and Azure AI, where natural language becomes the glue between intent and implementation. The lesson from Windows 3.x still applies: when abstractions get good, new behaviors become normal. Don’t get me wrong, I am not a developer and never was. I am an architect and trusted advisor to my clients. But I play along with all of it, all the time.


    Upgrades, Games, and the Human Factor


    I still remember swapping RAM sticks, adding a Sound Blaster, sliding in a Voodoo card, and the simple pride of a self-built rig booting first try. Windows grew with that tinkerer spirit—DirectX evolved, game mode reduced jitter, GPU drivers became less drama. Meanwhile laptops turned into all-day companions with great keyboards, color-accurate panels, and TPM-backed security that just works. The best tools eventually disappear into the background so you can focus on work… or a late-night session of Age of Empires.


    What the Cloud and AI Era Means for Us Pros and for Everyone Else


    For IT pros, the cloud turned projects into practices. Join a device to Entra ID, enforce Conditional Access, provision via Intune, wrap data in Purview sensitivity labels, monitor with Defender and Sentinel, and govern with policy as code. Compliance, sovereignty, and resilience aren’t side quests—they’re built into the pipeline.

    For everyone else, the change is just as real. Collaboration is co-authoring in Word as you talk in Teams. Photos and files follow you; sign-in is a tap on your phone instead of a password you’ll forget. And now, AI is moving from novelty to utility. Copilot in Windows and Microsoft 365 shortens the distance between an idea and its first draft. Between a meeting and its action points, between raw data and a narrative worth sharing. The PC is still the most private, personal runtime you own. Now with an assistant that respects identity, policy, and the boundaries you set.


    Why Thirty-Five Years Still Matter


    Windows endured because of a cultural promise: carry the past forward while nudging it into the future. Compatibility built trust; innovation kept momentum. That’s the blueprint I use with clients across Azure, Microsoft 365, Dynamics, and the wider ecosystem. Modernize without breaking muscle memory; ship value without leaking risk; measure outcomes, not noise.

    I started on a Commodore Plus/4, grew up through DOS and Windows 3.x, and built a career on the waves that followed. The tools changed; the curiosity didn’t. If anything, the cloud-and-AI era gives us more leverage than ever. Provided we design with governance, lead with identity, and keep the user in the center of the frame.

    Windows 3 boot screen

    Your Turn


    Where did Windows first “click” for you—3.1, 95, XP, 7, 8, 10 or 11? How are cloud and Copilot changing the way you work, and what would you modernize next if you could start on Monday? I’d love to hear your story. Drop a comment or write me a message here.

    Stay clever. Stay responsible. Stay scalable.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious about Windows, Azure, and how AI is reshaping real work—without breaking compliance? Follow my journey on zabu.cloud—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • Microsoft AI Tour Frankfurt: How Agentic AI Is Transforming Application Modernization



    Microsoft AI Tour Frankfurt: How Agentic AI Is Transforming Application Modernization


    Yesterday’s Microsoft AI Tour in Frankfurt was a powerful reminder of what happens when technology, strategy, and real-world solutions meet on the same stage.
    No theory. No buzzword bingo. Just practical AI in motion.

    We were there as a sponsor with Sogeti – Part of Capgemini, showcasing what AI really looks like when it moves beyond the hype: accelerating application modernization at scale, reducing technical debt, and enabling companies to become truly AI-ready.

    Our booth carried exactly that message:

    “This is what AI really looks like.”

    Not abstract. Not future talk. Real workloads. Real code. Real business value.


    THANK YOU, MICROSOFT – AND EVERYONE WHO MADE THIS POSSIBLE


    Huge appreciation to the Microsoft team for the invitation and the platform to share our work.
    Special thanks to Sandra Ahlgrimm and Julia Kordick for the outstanding partner orchestration on-site.

    And of course – a massive shout-out to our own team:

    • Manuel Kaiser & Kristina Peteln – for a high-impact lightning talk on Business Application Transformation – Reinvented by Agentic AI. Sharp message, strong demo, zero fluff.
    • GitHub Team – for the great exchange around Copilot, Secure DevOps, and AI-assisted engineering.
    • Our Alliances & Sogeti colleagues – for planning, logistics, and the “OneCapgemini” execution behind the scenes:
      Jessica Bois, Christopher Friedrich, Berry van der Stroom – and everyone who helped make booth 504 the place for deep modernization talks.

    I personally had dozens of impactful discussions: CIOs, architects, platform owners, and engineering leads – all asking the same core question:

    “How do we modernize our applications fast enough to benefit from AI instead of being disrupted by it?”

    That question leads us straight into the real topic of the decade.


    WHY AGENTIC-AI-BASED APPLICATION MODERNIZATION MATTERS NOW


    Modernization used to be a technical initiative.
    Today, it’s a survival strategy.

    Legacy systems aren’t just slow or expensive. They block AI adoption. They block scalability. They block talent. They block innovation. And their operations are often expensive and clunky.

    Agentic AI changes the game:

    • 🚀 Modernization at industrial speed
      Automated code analysis, pattern detection, refactoring, and migration – executed by AI agents, not human brute force.
    • 🔁 Continuous modernization, not one-time migration
      Systems evolve in sync with business, not every 7–10 years in a crisis.
    • 🔐 Security & compliance by default
      Legacy risk disappears when workloads move to modern, governed, observable platforms.
    • 🧠 AI-native architecture becomes standard
      Event-driven systems, microservices, Copilot-ready engineering environments, cloud-optimized cost models.

    Or in simpler words:

    Modernization is no longer about “upgrading tech.”
    It’s about enabling the enterprise to think, act, and scale in an AI-driven world.

    And that’s exactly why we built GenSuite – our AI-accelerated modernization engine that analyzes, transforms, and migrates entire application landscapes with automated agents at its core.

    This isn’t PowerPoint. We’re doing it today – and the interest at the booth confirmed:
    This topic just moved from IT-department level to board-level priority.


    WHAT HAPPENS NEXT


    We’ll feed all learnings, conversations, and signals from Frankfurt into our upcoming modernization playbooks, Copilot adoption frameworks, and agentic-AI reference architectures.

    If you’re asking yourself any of these questions…

    • “How do we modernize 5,000+ apps without a 5-year budget?”
    • “How do we make our landscape (Agentic-) AI-ready?”
    • “How do we remove legacy blockers and enable AI everywhere?”

    …then let’s talk.

    The companies that master AI-driven modernization now won’t just reduce cost.
    They’ll set the speed of their entire market.

    Stay clever. Stay responsible. Stay scalable.
    Your Mr. Microsoft,
    Uwe Zabel


    Want to explore what Agentic AI-powered modernization can do for your application landscape?
    Follow my journey on zabu.cloud—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • High Availability of Web Applications in Microsoft Azure

    High Availability of Web Applications in Microsoft Azure




    Building and operating a global web service on Microsoft Azure is a bit like running an airport that never sleeps. Flights—your user requests—arrive from every time zone, every minute, every day. The challenge? Keep every gate open, every runway clear, and every passenger happy, no matter what happens behind the scenes.

    High availability isn’t a nice-to-have. It’s the baseline. In the cloud, downtime equals lost trust, lost transactions, and lost opportunity. This article dives deep into how Azure helps you design for resilience, scalability, and performance at a global scale.


    Understanding High Availability in Azure


    At its core, high availability (HA) means ensuring your application remains accessible even when individual components fail. Azure’s global infrastructure, spanning more than 60 regions, gives you the raw capability to design systems that can survive hardware failures, regional outages, and maintenance windows without your users even noticing.

    In my book SAP auf Hyperscaler Clouds (Chapter 3), I discuss this principle in detail: how architectural redundancy and smart routing form the real backbone of digital resilience. While SAP landscapes are a textbook example of mission-critical systems, the same mindset applies to any web application that serves a distributed user base.

    To achieve true high availability in Azure, you need to think across three layers:

    1. Application-level redundancy – multiple instances of your app running in parallel.
    2. Regional distribution – deploying across Azure regions to mitigate datacenter-level risks.
    3. Global routing optimization – intelligently directing users to the best-performing endpoint.

    That’s where Azure’s native services like Load Balancer and Traffic Manager come into play.


    Azure Load Balancer: Keeping the Flow Smooth


    Imagine your backend servers as airport gates. The Azure Load Balancer acts as the tower controller—it decides which gate each incoming flight should use, balancing arrivals to prevent congestion.

    Technically speaking, Azure Load Balancer distributes inbound network traffic across multiple healthy backend instances, ensuring no single server becomes a bottleneck. It monitors instance health through probes and automatically routes traffic away from unresponsive nodes.

    This setup not only improves performance but also enables zero-downtime maintenance. You can update, patch, or replace backend systems while keeping your service online.

    For multi-tier applications like a web front end, an API layer, and a database tier, the Load Balancer can be deployed at each layer to distribute workloads effectively. The result: users experience consistent responsiveness even as traffic spikes or infrastructure evolves.
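    To make the probe-and-route idea concrete, here is a toy round-robin balancer in Python. It is a conceptual sketch of the mechanism, not Azure’s implementation, and the backend names are hypothetical:

    ```python
    from itertools import cycle

    class ToyLoadBalancer:
        """Toy model of probe-based load balancing (not Azure's implementation)."""

        def __init__(self, backends):
            self.health = {b: True for b in backends}   # latest probe results
            self._ring = cycle(backends)                # round-robin order

        def probe(self, backend, healthy):
            """Record the latest health-probe result for a backend."""
            self.health[backend] = healthy

        def route(self):
            """Return the next healthy backend, skipping unresponsive nodes."""
            for _ in range(len(self.health)):
                b = next(self._ring)
                if self.health[b]:
                    return b
            raise RuntimeError("no healthy backends available")

    lb = ToyLoadBalancer(["web-1", "web-2", "web-3"])
    lb.probe("web-2", healthy=False)        # a probe marks web-2 unresponsive
    picks = [lb.route() for _ in range(4)]  # traffic flows around web-2
    ```

    The same pattern is what makes zero-downtime maintenance possible: mark a node unhealthy, drain it, patch it, and let the probes bring it back into rotation.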

    Pro tip: Combine the Load Balancer with Availability Sets or Availability Zones to further harden your environment. Azure automatically spreads virtual machines across fault and update domains to protect against hardware or maintenance events.

    Load balancer in a three-tier application, source: https://docs.microsoft.com

    Azure Traffic Manager: Bringing the World Closer


    While the Load Balancer optimizes traffic within a region, Azure Traffic Manager optimizes traffic across regions.

    Think of it as a global air traffic control system, directing users to the nearest, fastest, or healthiest “airport” (your regional deployment). Traffic Manager uses DNS-based routing and supports various policies, such as:

    • Performance routing – sends users to the closest endpoint with the lowest latency.
    • Priority routing – defines a primary region and fails over to secondary ones in case of outage.
    • Geographic routing – directs traffic based on user location to meet data sovereignty or compliance needs.
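    As a rough illustration of the priority policy, here is a toy selection function in Python. The endpoint names, priority values, and health flags are made up, and real Traffic Manager resolves all of this at the DNS layer rather than in application code:

    ```python
    def priority_route(endpoints):
        """Toy priority routing: return the healthy endpoint with the best
        (lowest) priority value; conceptually what the priority policy does."""
        healthy = [e for e in endpoints if e["healthy"]]
        if not healthy:
            raise RuntimeError("all endpoints are down")
        return min(healthy, key=lambda e: e["priority"])["name"]

    # Hypothetical regional deployments: the primary is currently unhealthy.
    endpoints = [
        {"name": "westeurope",  "priority": 1, "healthy": False},
        {"name": "northeurope", "priority": 2, "healthy": True},
        {"name": "eastus",      "priority": 3, "healthy": True},
    ]
    target = priority_route(endpoints)  # fails over to the next priority
    ```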

    By deploying your web application in multiple Azure regions—say, West Europe, North Europe, and East US—you ensure global coverage. Traffic Manager ensures users in Frankfurt hit West Europe while users in Chicago go to East US.

    This approach dramatically reduces latency and provides geo-redundancy—two critical ingredients for delivering premium digital experiences worldwide.

    Source: https://docs.microsoft.com

    Achieving “Five Nines”: 99.999% Availability


    Many enterprises set their sights on the holy grail of uptime: 99.999% availability, also known as “five nines.” It translates to just about 5.26 minutes of downtime per year. Sounds ambitious? It is. But with Azure’s building blocks, it’s realistic.
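    The arithmetic behind the “nines” is worth making explicit. A small Python helper (using a 365-day year) turns an availability percentage into a yearly downtime budget:

    ```python
    def downtime_per_year(availability_pct, minutes_per_year=365 * 24 * 60):
        """Minutes of allowed downtime per year for a given availability."""
        return minutes_per_year * (1 - availability_pct / 100)

    for pct in (99.0, 99.9, 99.99, 99.999):
        print(f"{pct}% -> {downtime_per_year(pct):.2f} min/year of downtime")
    # five nines (99.999%) leaves roughly 5.26 minutes per year
    ```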

    Here’s what it takes:

    1. Deploy across multiple Azure regions for regional redundancy.
    2. Use Azure Load Balancer within each region for local high availability.
    3. Layer Azure Traffic Manager on top to globally route users and fail over between regions.
    4. Automate failover and health checks to eliminate human reaction time.
    5. Integrate monitoring and alerting through Azure Monitor and Application Insights.

    By combining these services, you architect a self-healing system where failure in one region doesn’t mean downtime—it just triggers intelligent rerouting.
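    The reason multi-region rerouting pays off can be shown with basic probability. Assuming regions fail independently (a simplification; real-world outages can be correlated), the combined service is down only when every region is down at once:

    ```python
    def combined_availability(region_availability, regions):
        """Availability of N independently failing regions with failover:
        an outage requires all regions to be down simultaneously."""
        downtime_prob = (1 - region_availability) ** regions
        return 1 - downtime_prob

    single = combined_availability(0.999, 1)  # one region at "three nines"
    paired = combined_availability(0.999, 2)  # two regions behind a global router
    ```

    Under this idealized model, two regions at 99.9% each combine to roughly 99.9999%, which is why global routing plus regional redundancy is the standard recipe for five nines.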

    In practice, I’ve seen this pattern successfully used not only for web frontends but also for SAP systems, API gateways, and data services that require enterprise-grade reliability.


    Best Practices for Azure High Availability


    A few operational lessons stand out:

    • Plan for failure, not for perfection. Assume that components will fail—and design around that.
    • Distribute workloads regionally using Azure’s paired-region model. Each region has a built-in partner for disaster recovery scenarios.
    • Use managed services like Azure Front Door or Azure App Service Environment when possible—they come with built-in HA and global routing.
    • Monitor continuously. Visibility equals resilience. Configure Application Insights and Azure Monitor to detect anomalies before they hit the user experience.
    • Test your failover strategy. Simulate outages to validate whether your setup truly delivers continuous availability.

    Conclusion: Reliability Is the New UX


    In the cloud, users rarely remember when something worked flawlessly, but they never forget when it didn’t. High availability isn’t just about uptime metrics; it’s about trust.

    Azure gives you the architectural canvas, but it’s your strategy, the way you weave together Load Balancer, Traffic Manager, monitoring, and redundancy, that defines your success.

    For those who want to go deeper, I unpack these concepts extensively in Chapter 3 of my book “SAP auf Hyperscaler Clouds”, where enterprise-grade reliability meets practical cloud design.

    Because in the end, availability isn’t an afterthought. It’s the architecture of confidence.

    Stay clever. Stay responsible. Stay scalable.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how Microsoft Azure keeps your apps available—anytime, anywhere?
    Follow my journey on zabu.cloud—where cloud, AI, and business strategy converge.
    Or ping me directly—because building the future works better as a team.

  • Finding Balance: Between Home Office and Human Connection ☕💼

    Finding Balance: Between Home Office and Human Connection ☕💼




    This week reminded me of something simple — and yet easy to forget.

    Human connection doesn’t happen through screens. It happens in hallways, over coffee, and sometimes… at a hotel breakfast buffet. That’s where I saw a small sign that said:

    “Be happy for no reason.”

    It stuck with me. Because in German, we only have one word — “Glück” — for both being happy and being lucky. In English, there are two.

    And that difference made me think: maybe happiness isn’t about luck at all. Maybe it’s about presence.


    The Magic of Meeting in Person ✨


    Earlier this week, I spent a day with my team in Hannover — and just a few days later, another in Erfurt. Two different cities, two different teams, one shared experience: connection. Many of the people I met, I had only seen in Teams meetings before. But meeting face-to-face changes everything.

    You learn what motivates them, what challenges them, and what makes them laugh. Those are the small, invisible threads that build real teams — the kind of trust and understanding that can’t be scheduled into a 30-minute video call.

    Erfurt itself was a highlight. The city’s old town, especially the Krämerbrücke, is a living piece of history — a handcrafted masterpiece of culture and tradition. Even our Capgemini office there feels symbolic: a beautiful old building full of people working on cutting-edge cloud solutions.

    It’s a reminder that innovation and heritage aren’t opposites. They coexist — just like people do, when they meet and create together.


    The Other Side of the Story 🏡💻


    But let’s be honest: the home office changed everything. It gave us time back — for families, for quiet focus, for life. It allowed flexibility that most of us had only dreamed of before 2020.

    Working from home means joining a call right after breakfast with your kids. It means being there when the delivery arrives or when school finishes early. It’s not just about convenience — it’s about presence at home.

    So yes, being back in the office feels energizing. But the silence of the home office has its own value too.


    The Real Challenge: Balance ⚖️


    There’s no perfect formula. Too much remote work, and we risk becoming isolated bubbles of productivity. Too much office time, and we lose the focus, calm, and family life that remote work brought us. Finding balance isn’t about company policy or attendance percentages.

    It’s about awareness — knowing what fuels you, your team, and your relationships. For me, weeks like this one show the best of both worlds: Deep work at home, deep connection in person.

    And maybe that’s what that breakfast sign was really about. Happiness doesn’t depend on luck — or location. It comes from being intentional, wherever you are.

    Stay clever. Stay responsible. Stay scalable.
    Your Mr. Microsoft,
    Uwe Zabel


    🚀 Curious how cloud, culture, and connection shape the future of work? Follow my journey on zabu.cloud — where cloud, AI, and business strategy converge. Or ping me directly — because building the future works better as a team.