[{"content":"I\u0026rsquo;ve been to plenty of vendor events and briefings over the years but never an AWS Summit, so heading to London on Wednesday for the first time I genuinely don\u0026rsquo;t know what to expect. I\u0026rsquo;m planning to get there when doors open at 8am partly to beat the keynote crowds and partly because I\u0026rsquo;ve already had to make some hard choices about how the day runs.\nPicking the Sessions I had four sessions I wanted to attend: the keynote on agentic AI, a workshop on rapid prototyping with Kiro, a zero trust for AI security session, and a fast-track VMware migration workshop. Some run 2–3 hours, the zero trust session clashes directly with Kiro, and doing all of them would mean spending the entire day in rooms with no time to actually see any of it, so I\u0026rsquo;ve cut two. I\u0026rsquo;m skipping the keynote, which at my first AWS Summit feels like it should be a bigger deal than it is, and dropping the AI security session because it runs at the same time as Kiro and that\u0026rsquo;s not a close call. The keynote will be streamed and I\u0026rsquo;ll catch it later. The VMware workshop and Kiro are what I\u0026rsquo;m actually there for.\nWhy Kiro Last year and earlier this year I attended some remote AWS sessions, which is part of what pushed me toward the AWS AI Practitioner cert, a natural next step from work I was already doing that gave me a clearer picture of where AWS was heading with AI tooling. Kiro came up in those sessions when it was still in preview, but I wasn\u0026rsquo;t working in a way where it was immediately relevant so I left it there.\nAs my day-to-day work has shifted further into agentic AI, the spec-driven approach that Kiro is built around has started to make a lot more sense. 
When you\u0026rsquo;re working with AI in real workflows rather than in isolation, defining the intent upfront and letting the tooling work from that is a different proposition to prompting your way to something and hoping the output is consistent. I\u0026rsquo;ve been doing enough of this now to understand why that distinction matters, which is why Kiro has gone from a session I\u0026rsquo;d probably skip to the one I\u0026rsquo;m least willing to miss.\nThe VMware Workshop Most of what I\u0026rsquo;ve seen on VMware migration to AWS over the last 18 months has been partner briefings and marketing decks. The direction is clear enough but the actual mechanics of how you do it at scale tend to disappear behind the messaging. A proper workshop where you get into the technical detail of what the migration actually involves is a different proposition, and I want to see what fast-track means when you get past the headline.\nEverything Else If the schedule gives me a window I\u0026rsquo;d like to get to the Sports Zone, which is where AWS runs its F1 partnership demos alongside NFL and NBA analytics. The F1 and AWS data partnership has been running long enough that there\u0026rsquo;s usually something genuinely interesting to look at rather than just branding, and apparently there\u0026rsquo;s a sweepstakes on the day for a Silverstone package to the British Grand Prix. I\u0026rsquo;m not going to pretend that isn\u0026rsquo;t a factor. Beyond that there\u0026rsquo;s an industry zone with live demos across financial services and retail, a startup zone, and a sustainability area with a Natural History Museum partnership, though whether I actually make it past the Sports Zone given my schedule is an open question.\nWhat I\u0026rsquo;ll Be Watching For What I\u0026rsquo;ll be paying attention to across the day is whether the application layer appears anywhere in the conversation. 
The Summit narrative is very much infrastructure and AI capability, agentic systems, sovereign cloud, all of it moving fast. In my day-to-day work I spend a lot of time with the applications running on top of that infrastructure, and they\u0026rsquo;re frequently the part that doesn\u0026rsquo;t move when everything else does. You can have a well-architected AWS environment and still be running applications that were never designed for any of this.\nI\u0026rsquo;ve also received several emails since I registered telling me to register and secure my spot, which I did weeks ago, so there\u0026rsquo;s a reasonable chance I\u0026rsquo;ll arrive at ExCeL at 8am and find out the system has no idea who I am.\n","permalink":"https://kashifnazir.com/blog/aws-summit-london-2026-pre-summit/","summary":"\u003cp\u003eI\u0026rsquo;ve been to plenty of vendor events and briefings over the years but never an AWS Summit, so heading to London on Wednesday for the first time I genuinely don\u0026rsquo;t know what to expect. I\u0026rsquo;m planning to get there when doors open at 8am partly to beat the keynote crowds and partly because I\u0026rsquo;ve already had to make some hard choices about how the day runs.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"dlr-docklands-station.jpg\"\n         alt=\"DLR train arriving at a Docklands station in London\"/\u003e \n\u003c/figure\u003e\n\n\u003ch2 id=\"picking-the-sessions\"\u003ePicking the Sessions\u003c/h2\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"session-decision-flowchart.svg\"\n         alt=\"Flowchart showing four session options with Kiro and VMware selected, Keynote and Zero Trust deprioritised\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eI had four sessions I wanted to attend: the keynote on agentic AI, a workshop on rapid prototyping with Kiro, a zero trust for AI security session, and a fast-track VMware migration workshop. 
Some run 2–3 hours, the zero trust session clashes directly with Kiro, and doing all of them would mean spending the entire day in rooms with no time to actually see any of it, so I\u0026rsquo;ve cut two. I\u0026rsquo;m skipping the keynote, which at my first AWS Summit feels like it should be a bigger deal than it is, and dropping the AI security session because it runs at the same time as Kiro and that\u0026rsquo;s not a close call. The keynote will be streamed and I\u0026rsquo;ll catch it later. The VMware workshop and Kiro are what I\u0026rsquo;m actually there for.\u003c/p\u003e","title":"AWS Summit London 2026 — Pre-Summit"},{"content":"Hardware gets kept because it runs what it\u0026rsquo;s always run, and the refresh conversation gets deferred because nothing appears broken. The ceiling only shows up when someone tries to do something new on top of it. I had that exact experience last Saturday trying to containerise the apps on my home server.\nThe Plan I\u0026rsquo;ve been wanting to use containers more outside of just reading about them, so running them at home on something real was the obvious next step. Set up a git repo for version control, VS Code as the editor, and OpenAI\u0026rsquo;s Codex to handle the agentic coding tasks directly on the machine, so I had everything in place to move quickly on something I hadn\u0026rsquo;t done before. The plan was to start with one small component and see how it handled it before doing anything else.\nGetting WSL Running Getting everything set up on the machine took a little longer than expected, with a BIOS update needed before WSL would play ball. WSL, Windows Subsystem for Linux, provides the Linux layer that Docker needs to run on Windows. 
Docker is what actually runs the containers, each one a self-contained unit that packages an application and everything it needs to run in isolation from everything else on the machine.\nThe CPU Hit My machine is a 2013-era i5-4670K, four cores, four logical processors, and while it runs everything currently on it without complaint, that Linux layer isn\u0026rsquo;t free. Once the first container came up the CPU immediately took a significant hit, and one container for one small component was already too much. The plan to containerise the rest of the apps wasn\u0026rsquo;t going anywhere on this hardware, so I swapped that component out for a Python script and left it there. There are other ways to squeeze more out of a machine this age but nothing that changes the fundamental problem, so the containerisation plan waits until the hardware does.\nThe Bigger Picture The machine is also still on Windows 10 because it can\u0026rsquo;t meet Windows 11\u0026rsquo;s hardware requirements, which at this point says everything about the underlying spec that the CPU hit already implied. Microsoft ended support for Windows 10 in October 2025, but consumers in the EEA got a free one-year security update extension after European consumer groups pushed back under the Digital Markets Act, while elsewhere it costs $30.\nThe enterprise version of this plays out most visibly during OS migrations. Moving from Windows 7 to Windows 10, the RAM requirement was the thing that caught organisations out, not because the machines were old, but because the specific hardware needed to upgrade them was no longer available. I\u0026rsquo;ve seen high-spec laptops written off because the single-sided RAM they needed to get from 4GB to the Windows 10 minimum simply wasn\u0026rsquo;t being made anymore, and with that part no longer in production there was no way around it. 
That\u0026rsquo;s a harder conversation than \u0026lsquo;the hardware is old\u0026rsquo; because nothing about it felt inevitable until it suddenly was.\nWhat\u0026rsquo;s Next The ESU window closing later this year is what forces the decision on my end, and when it does the plan is to replace the machine and build properly from scratch with containers and likely a Kubernetes cluster from the start. Fingers crossed component prices are somewhere reasonable by then.\n","permalink":"https://kashifnazir.com/blog/why-i-couldnt-containerise-my-home-server/","summary":"\u003cp\u003eHardware gets kept because it runs what it\u0026rsquo;s always run, and the refresh conversation gets deferred because nothing appears broken. The ceiling only shows up when someone tries to do something new on top of it. I had that exact experience last Saturday trying to containerise the apps on my home server.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"containers-hero.jpg\"\n         alt=\"Colourful shipping containers stacked at a port\"/\u003e \n\u003c/figure\u003e\n\n\u003ch2 id=\"the-plan\"\u003eThe Plan\u003c/h2\u003e\n\u003cp\u003eI\u0026rsquo;ve been wanting to use containers more outside of just reading about them, so running them at home on something real was the obvious next step. Set up a git repo for version control, VS Code as the editor, and OpenAI\u0026rsquo;s Codex to handle the agentic coding tasks directly on the machine, so I had everything in place to move quickly on something I hadn\u0026rsquo;t done before. The plan was to start with one small component and see how it handled it before doing anything else.\u003c/p\u003e","title":"Why I Couldn't Containerise My Home Server"},{"content":"I came across AI Warehouse while studying for the AWS AI Practitioner cert through the Stephane Maarek course. 
I lost a few hours on their YouTube channel watching reinforcement learning agents figure out games from scratch, and when I found they had a downloadable Windows simulator I wanted to try it myself. Their Red Light, Green Light scenario, the one from Squid Game, lets you run different shaped agents through the course and adjust the training parameters to see what changes.\nThe basic idea with reinforcement learning is that agents are not told how to do something. They are given a goal, a set of possible actions, and feedback on whether what they did worked. Everything else they figure out through repetition. In the Red Light, Green Light scenario, the goal is to cross the finish line without moving during a red light, and the agents start with no knowledge of what that means.\n500 Runs Of Sprinting Into A Wall Early runs: the agents sprint forward with no awareness of the lights. For the first few hundred runs I left the brain configuration at 20m and let the agents go. The simulator lets you set this anywhere from 100k up to 100m, and it controls how much experience the agent can draw on when deciding what to do.\nThe early behaviour was exactly what you would expect: every agent sprinted flat out, ignored the red light completely, and got eliminated. By run 30 a few were lasting slightly longer through pure luck, but the strategy was identical. Run and hope.\nThe person-shaped agent never learns to walk properly and instead inches forward by bashing itself along the floor. Walking turns out to be genuinely hard to learn through reinforcement learning, so the person-shaped agent never figured it out. Instead it developed a technique of bashing itself against the floor, sort of crawling by repeatedly smacking into the ground and inching forward. 
By run 2050 it was getting further using this method and bashing harder, which suggested the agent was refining the approach even though it looked absurd. The four-legged agent had an easier time with locomotion but was surviving red lights mostly by accident, moving slowly enough between lights that it scraped through rather than making a conscious decision to stop.\nThe four-legged agent survives red lights mostly by accident rather than by making a deliberate stop. After 500 runs with the 20m brain config, a handful of agents could occasionally survive one red light but none had completed the course on camera. The app has a counter that tracks completions, and I could see the yellow horse standing at the end of the course, so at least one had made it while I was not recording.\nOne Parameter Change, Immediate Results At 500 runs, changing the brain config changes the behaviour immediately: the agents start waiting during red lights. At run 500 I changed the brain configuration from 20m to 100m. The behaviour changed on the next run. Not gradually over 50 more runs. Immediately. Every agent went from sprinting and hoping to waiting during red lights and moving during green.\nThe 20m and 100m values do not control how many training runs the agent does or how long it trains for. They control the scale of experience the agent draws on when making each decision. With 20m, the agents were essentially short-sighted, reacting to whatever was directly in front of them with a small pool of experience to reference. With 100m, they had access to a much larger pool of accumulated patterns from their training. 
The effect is that the agent shifts from reactive behaviour, \u0026ldquo;see green light, run\u0026rdquo;, to conservative, pattern-based behaviour: \u0026ldquo;I know from experience that a red light is coming, so I should be ready to stop.\u0026rdquo;\nThat distinction between reactive and strategic is exactly the difference between sprinting into elimination and waiting for the right moment to move, and it happened in a single parameter change rather than requiring hundreds more runs.\nWith the larger brain config, the blue four-legged agent completes the course. Within a few more runs the four-legged blue agent completed the full course, waiting through red lights and moving when it could. The purple spider managed it too, and its approach was to wait patiently through every light and then leap onto the finish line at the end.\nThe purple spider waits through the lights and then launches itself across the finish. All the agents were performing dramatically better by this point. The pink one looked close to succeeding but would need more runs. The combination of accumulated training and a brain config large enough to actually use that training was what made it work. Runs alone were not enough with a small brain config because the agent could not draw on enough experience to behave strategically. A large brain config alone would not have helped either because the agent needs something to have learned from.\nCapacity Before Experience Is Useless I expected a bigger brain config to mean slower progress, like expanding a container that takes longer to fill. Instead, the experience was already there from 500 runs of failing; the agents just could not draw on enough of it to behave differently until the capacity increased. Reinforcement learning and large language model training are different mechanisms, but the same tension keeps showing up: you need enough capacity to make learned experience usable. With LLMs, that is why initial training requires so much compute. 
The patterns have to be earned at scale first, and once they exist you can distill them into something smaller, but that initial capacity is what makes the learning worth anything.\nIf you want to try the simulator yourself, AI Warehouse has it available for Windows. It is a good way to spend an afternoon if you are curious about what reinforcement learning looks like when you can actually watch it happen.\n","permalink":"https://kashifnazir.com/blog/using-ai-reinforcement-learning/","summary":"\u003cp\u003eI came across \u003ca href=\"https://www.youtube.com/@aiwarehouse\"\u003eAI Warehouse\u003c/a\u003e while studying for the AWS AI Practitioner cert through the \u003ca href=\"https://www.udemy.com/course/aws-ai-practitioner-certified/\"\u003eStephane Maarek course\u003c/a\u003e. I lost a few hours on their YouTube channel watching reinforcement learning agents figure out games from scratch, and when I found they had a downloadable Windows simulator I wanted to try it myself. Their \u003ca href=\"https://www.youtube.com/watch?v=M8eSyh4YlbI\"\u003eRed Light, Green Light scenario\u003c/a\u003e, the one from Squid Game, lets you run different shaped agents through the course and adjust the training parameters to see what changes.\u003c/p\u003e","title":"Watching AI Learn to Play Red Light, Green Light"},{"content":"Most migrations don\u0026rsquo;t start because someone planned one, they start because half the organisation is already using something unsanctioned and you\u0026rsquo;re formalising it before it turns into a compliance problem. I\u0026rsquo;ve seen that across different types of tech, and the failure mode is always the same: the pilot gets all the investment and the rollout gets whatever\u0026rsquo;s left, which is usually not much.\nThe hard part of a migration is usually the stretch between a successful pilot and a real rollout.\nThe expectation gap Before any pilot starts, people have already decided what the new thing will do for them. 
With an OS migration, it\u0026rsquo;s \u0026ldquo;the new version will fix everything.\u0026rdquo; With an AI tool, it\u0026rsquo;s \u0026ldquo;this will transform how we work.\u0026rdquo; Neither of those is true, and when reality doesn\u0026rsquo;t match the expectation, people don\u0026rsquo;t explore further.\nWith a Copilot rollout for a small business, the team expected the AI features to be the big win, but in practice the thing that actually changed their working day was Teams transcription. Being able to pull what was said in a meeting into a usable starting point for documentation was huge, especially for technical people who don\u0026rsquo;t like writing things up. The AI piece was useful but not transformative in the way they\u0026rsquo;d imagined, and if nobody had managed that expectation upfront the whole thing would have been written off as a disappointment, when actually the value was real, just not where they expected it.\nI\u0026rsquo;ve watched people move from Windows 7 to Windows 10 and then get frustrated that Adobe Reader still can\u0026rsquo;t do the thing they wanted it to do. New operating system, same third-party limitations, but people blame the migration because that\u0026rsquo;s the change they noticed.\nPicking your pilot group If you pick a department because it\u0026rsquo;s convenient, or because the manager volunteered, your pilot data is fiction. 
You need people who\u0026rsquo;ll actually push through friction and give you feedback you can act on, and the difference between someone who\u0026rsquo;ll explore a new tool and someone who\u0026rsquo;ll file a ticket at the first unexpected behaviour is the difference between a pilot that tells you something useful and one that tells you nothing.\nThis applies whether you\u0026rsquo;re testing an OS image or an AI assistant, and if your pilot group isn\u0026rsquo;t representative of how the wider organisation works, you\u0026rsquo;re validating the wrong thing.\nA useful pilot group is representative, curious, and willing to push through friction long enough to tell you what actually broke.\nThe training problem People don\u0026rsquo;t read documentation until they\u0026rsquo;re stuck, and by then they\u0026rsquo;re frustrated and looking for confirmation that the new thing is worse than the old thing. Training sessions alone don\u0026rsquo;t work either because you can run a great session on a Tuesday and by Thursday people have forgotten half of it because they haven\u0026rsquo;t had to use it under real conditions yet.\nWhat actually works is availability, having someone around who can answer questions in the moment while the person is trying to do their actual job with the new tool. That\u0026rsquo;s where champions come in, not as advocates or cheerleaders, but as a live support layer that formal IT support can\u0026rsquo;t provide at scale. 
Someone sitting three desks away who already knows the new system and can say \u0026ldquo;yeah, that button moved, it\u0026rsquo;s under settings now\u0026rdquo; is worth more than a 40-page deployment guide.\nYou can demo an AI tool beautifully in a training session, but the moment someone tries to use it on their own data with their own workflow they hit friction that nobody covered, and having someone nearby who\u0026rsquo;s already past that friction is the difference between the tool getting adopted and getting abandoned.\nThis is what most real training looks like: someone nearby explaining the new thing in the moment it stops being theoretical.\nThe bedding-in period I worked on a migration where some sites were jumping from XP straight to Windows 10, and after the deployment I went up to a remote site once a week for a bedding-in period. It was just being there, answering questions, showing the team that they hadn\u0026rsquo;t been handed a new system and then abandoned. Most of what I did in those visits was small stuff: someone couldn\u0026rsquo;t find a setting, someone\u0026rsquo;s printer wasn\u0026rsquo;t mapped, someone wanted to know why their desktop looked different. None of it was technically complex but all of it mattered to the people asking.\nFor another company, I travelled across the US for a Windows 7 to Windows 10 migration. I was there for the image build and application packaging, making sure things worked, but the real lesson was watching the US site lead do her job. She knew every remote office personally. When we visited sites across different states, she wasn\u0026rsquo;t just rolling out software. She had relationships with the people in those offices. They trusted her, and that trust meant the migration landed in a way it wouldn\u0026rsquo;t have if it had been purely contractor-led.\nLarge rollouts are never only about the image, the package, or the tool. 
They\u0026rsquo;re about whether trust travels with the change.\nContractors bring scale, technical capability, and bodies, but the people users already know and trust bring something contractors can\u0026rsquo;t, which is the confidence that someone who understands their work is watching out for them. The sites with a known, trusted presence during the transition had fewer issues on every large migration I\u0026rsquo;ve worked on, not because the tech was better but because people felt supported enough to actually learn the new system rather than fighting it.\nWhen you move from a pilot of 50 people who had dedicated support to a rollout of 5,000 who don\u0026rsquo;t, the tool doesn\u0026rsquo;t get worse but the support disappears, and without it people quietly stop using the thing.\nThe post-pilot void The transition from \u0026ldquo;50 people using it with support\u0026rdquo; to \u0026ldquo;the whole organisation using it without\u0026rdquo; just doesn\u0026rsquo;t get designed. The pilot had a project manager, a champion, a feedback loop, and executive attention, and the rollout has a deployment date and an FAQ document.\nIf you don\u0026rsquo;t plan the rollout properly you end up back where you started, with people finding their own solutions, adopting their own tools, and creating exactly the kind of shadow IT problem that triggered the migration in the first place.\nScale changes the support model even when the tool stays the same. That\u0026rsquo;s the gap most pilots never test.\nI\u0026rsquo;ve done this across different decades, different countries, and different types of technology, and the human problems have been the same every time.\nPilots prove the technology can work. 
Rollouts prove whether the organisation can absorb it.\n","permalink":"https://kashifnazir.com/blog/pilot-to-production-gap-nobody-plans-for/","summary":"\u003cp\u003eMost migrations don\u0026rsquo;t start because someone planned one, they start because half the organisation is already using something unsanctioned and you\u0026rsquo;re formalising it before it turns into a compliance problem. I\u0026rsquo;ve seen that across different types of tech, and the failure mode is always the same: the pilot gets all the investment and the rollout gets whatever\u0026rsquo;s left, which is usually not much.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"rollout-at-scale-office.png\"\n         alt=\"A large open-plan office filled with rows of desks and computer screens\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003eThe hard part of a migration is usually the stretch between a successful pilot and a real rollout.\u003c/p\u003e","title":"Pilot to Production: The Gap Nobody Plans For"},{"content":"I\u0026rsquo;ve held AWS certifications since 2019, starting with Cloud Practitioner and then working through Solutions Architect Associate, SysOps, Developer Associate, and Solutions Architect Professional within about a year (It was Covid year after all). I recertified the SA Pro in 2023 and passed the AI Practitioner in February this year. The SA Pro is due again, exam\u0026rsquo;s booked for end of June, so I\u0026rsquo;m back studying.\nMy study method has been the same every time, which is watch a Stephane Maarek course on Udemy, make notes in OneNote as I go, grind practice exams. The notes are dense and compressed, topics separated by slashes, exam questions dropped in wherever they\u0026rsquo;re relevant. The formatting is all over the place because it\u0026rsquo;s written for speed of recall, not for anyone else to read. 
I\u0026rsquo;ve used this approach for every cert I\u0026rsquo;ve passed and never had a reason to change it.\nFor the AI Practitioner I wanted to mix learning and practical, so I built two custom GPTs to test whether AI could improve a process I already had, feeding them the same source material: Stephane Maarek\u0026rsquo;s course slides, the AWS exam guide, and my own notes from going through the course. One was supposed to help me produce study notes, the other was a coach for concepts and practice questions.\nThe note GPT The note GPT was supposed to produce clean, consistent study notes I could paste straight into OneNote, with each service summarised in a line or two, \u0026ldquo;exam signal\u0026rdquo; callouts flagging what to look for, and \u0026ldquo;exam lock-in\u0026rdquo; sections for things that are easy to forget under pressure. The first few outputs looked decent but the GPT kept drifting on formatting, so I started adding rules to the system prompt to fix those things, and then those rules created their own edge cases, so I added more rules on top of those.\nThe system prompt ended up with placement-first insertion logic, deduplication checks, a mandatory response structure, OneNote formatting compliance rules, and a dual-layer audit that was supposed to catch formatting violations before they reached me, and it read like a specification document for a software system. Every time I sat down to study I\u0026rsquo;d work through practice questions or a new section of the course, find something that needed adding to my notes, and end up back in the GPT configuration sorting out whatever formatting issue had crept in since last time, so the studying and the prompt engineering were competing for the same time.\nsystem-prompt.md — Note Creation GPT NOTES INTAKE \u0026amp; SOURCE OF TRUTH (ABSOLUTE)\nAsk once only. Ask immediately. 
Treat notes as read-only PLACEMENT-FIRST RULE (HARD)\nBefore drafting content, determine silently:\nWhere the student expects to find this Which existing chooser it sharpens Minimum lines required to remove ambiguity Preference order: One-line chooser → Inline contrast → Micro-patch (1–3 lines) → Full insert (last resort)\nPRE-PATCH GATE (MANDATORY)\nDoes this introduce a new decision signal? Does this sharpen an existing chooser? Would a student miss this exam question without it? If all answers are \u0026ldquo;no\u0026rdquo; → No change. SECTION SCAN \u0026amp; DEDUPLICATION (STRICT)\nAlready present → No change Partially present → Patch missing decision trigger only Missing → Minimal insert Conflicting → Correct and replace MANDATORY RESPONSE STRUCTURE (NON-NEGOTIABLE)\nChange type: Insert / Patch / Rewrite / No change Placement: Section name (verbatim) Insert in notes: Insert-ready content only INSERT BLOCK ENFORCEMENT (CRITICAL)\nThe entire \u0026ldquo;Insert in notes\u0026rdquo; block must independently pass full OneNote formatting compliance.\nONENOTE RENDERING SANITY CHECK (DUAL-LAYER HARD FAIL)\nAudit 1 — Global Structure Audit 2 — Insert Block Only If ANY rule fails: Discard output. Rewrite from scratch. Re-audit. Repeat until 100% compliant. Formatting violations are treated as incorrect answers. BOLDING RULES (STRICT — EXPANDED)\nAlways bold: Section headings, Sub-headings, Rule labels, Service names, Contrast labels, Chooser bullets If a chooser arrow appears, the entire chooser must be bold. The system prompt for the note GPT. This started as a few paragraphs. The notes it produced looked good and the formatting was consistent, but that consistency actually made them harder to study from. My own notes have uneven emphasis because I naturally write more about things I found difficult and barely anything about things that clicked first time. 
The GPT treated every service the same way, so scanning for the stuff I actually needed to revise meant reading through everything at the same pace.\nAWS AI Practitioner — GPT Notes Exam Mental Model\nAWS exam bias:\nPrefer managed services Prefer simplest viable solution Prefer lowest operational overhead Bedrock for GenAI / SageMaker for custom ML Rules-based logic if ML adds no value Exam lock-in: Rules beat ML when outcomes must be exact\n🔵 AMAZON BEDROCK\nAmazon Bedrock — Managed access to foundation models for GenAI inference. No infrastructure management; not for custom training from scratch. Exam signal: \u0026ldquo;build GenAI app\u0026rdquo;, \u0026ldquo;use foundation models\u0026rdquo;, \u0026ldquo;no servers\u0026rdquo;\nBedrock Guardrails — Enforces content filters, PII masking, grounding checks. Blocks hate, violence, sexual content, misconduct, prompt attacks. Exam signal: \u0026ldquo;mask PII\u0026rdquo;, \u0026ldquo;block unsafe output\u0026rdquo;, \u0026ldquo;regulated industry\u0026rdquo;\n🟢 AMAZON SAGEMAKER\nAmazon SageMaker — Full ML lifecycle: build, train, deploy, monitor. Used for custom ML, fine-tuning, and training from scratch. Exam signal: \u0026ldquo;custom model\u0026rdquo;, \u0026ldquo;train your own model\u0026rdquo;\nSageMaker Canvas — No-code ML for analysts. Build and train models without programming. Exam signal: \u0026ldquo;business analyst\u0026rdquo;, \u0026ldquo;no coding\u0026rdquo;\n🟠 AI APPLICATION SERVICES\nAmazon Comprehend — Sentiment analysis, entity recognition, key phrase extraction. Text input only. Exam signal: \u0026ldquo;analyze customer reviews\u0026rdquo;, \u0026ldquo;extract entities from text\u0026rdquo;\nAmazon Rekognition — Image and video analysis + moderation. Detects objects, faces, text, unsafe content. Exam signal: \u0026ldquo;image moderation\u0026rdquo;, \u0026ldquo;detect objects in images\u0026rdquo;\nThe GPT\u0026#39;s output. Consistent structure, but everything looks the same. 
I used the notes and passed the exam, but studying from them felt like I was adapting to the tool\u0026rsquo;s format rather than having notes that matched how I actually think. By the time I sat the exam I\u0026rsquo;d already decided I was going back to my own method for the next cert.\nOneNote — SA Pro / Cloud Practitioner AWS Regions and AZ Regions - Region is made of AZ\u0026rsquo;s (usually 3 with min 3 and max 6). Why do you choose a region: Compliance, Proximity to Customers, Available service within a region, Pricing AZ - Each availability zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity / Used AZ ID to uniquely identify the AZ across two AWS accounts\nAWS IAM / Global service/ Controls AWS resources only / non-explicit deny for new accounts / User is a permanent named / group is a collection of users / Role is the authentication method / permissions are applied to user, group or role, set in policy docs (JSON) / console passwords, access keys (access key identifier and secret access key) and server certificates / access keys should be rotated for IAM users / IAM:PassRole needed to assign a role to an AWS resource (EC2)\nPlacement groups\nclustered (group in single AZ / low latency) High Performance Computing (HPC) Spread (single instance on distinct hardware and so isolated. Means hardware failure will only affect one instance / MAX 7 instances per group per AZ / Can span across AZ) Partition (multiple instances together on distinct hardware) (hadoop, cassandra, kafka) / (up to 7 partitions per AZ but can contain 100\u0026rsquo;s of instances in a single partition) An engineering team wants to examine the feasibility of the user data feature of Amazon EC2 for an upcoming project. 
Which of the following are true about the EC2 user data configuration?\nBy default, scripts entered as user data are executed with root user privileges\nBy default, user data runs only during the boot cycle when you first launch an instance\nMy own notes from earlier certs. The coach GPT The other GPT had a simpler brief, with a system prompt about a page long that had rules about adapting to my confidence level and staying within the exam domains but didn\u0026rsquo;t need formatting constraints because it wasn\u0026rsquo;t producing anything I had to paste into another tool.\nsystem-prompt.md — Coach GPT\nRole\nYou are a focused study tutor for the AWS Certified AI Practitioner (AIF-C01) exam. Your sole purpose is to teach concepts, reinforce understanding, and generate original exam-style practice questions.\nCore Teaching Goals\nTeach AWS AI, ML, and Generative AI concepts clearly\nPrioritise understanding over memorisation\nExplain: what a service is, when to use it, why over alternatives\nDefault Teaching Response Pattern\nDirect answer or definition\nShort, structured explanation\nExam relevance\nOptional follow-up question or task\nAdapt depth dynamically: simplify if unsure, go deeper when confidence is clear\nBehaviour Constraints\nStay focused on teaching and questions only\nDo not compile or edit notes\nDo not rewrite content into study sheets\nDo not provide motivational coaching\nThe entire coach GPT system prompt. That\u0026#39;s it. The note GPT\u0026#39;s is still scrolling. I used it most in the evenings when something from the practice questions wasn\u0026rsquo;t making sense. I\u0026rsquo;d ask it about the topic, get it to explain it, and then have it quiz me on that area until I felt solid on it. The full practice exams were for proper studying; this was more for quickly checking things that weren\u0026rsquo;t clicking and making sure I actually understood them rather than just recognising the right answer. 
The system prompt never needed updating because the GPT\u0026rsquo;s job was narrow enough that the original instructions covered it.\nWhat I\u0026rsquo;m doing for the Pro recert I\u0026rsquo;m back to my own notes for the SA Pro, Stephane Maarek\u0026rsquo;s course and OneNote in the same compressed format I\u0026rsquo;ve always used. I am thinking about trying Claude as a study partner this time though, more in the coaching role than note creation. I haven\u0026rsquo;t used it for studying yet but I\u0026rsquo;ve been using it for the website build and I\u0026rsquo;m curious whether it handles that kind of back-and-forth differently to ChatGPT. I\u0026rsquo;ll write that up separately once I\u0026rsquo;ve actually done it and have something to compare.\nI\u0026rsquo;m also looking at whether moving from OneNote to something markdown-based would make it easier for AI tools to work with my notes directly. Given what happened with the note GPT I\u0026rsquo;m cautious about that. I might use AI for skeleton notes and fill in the detail myself, or I might just keep it as a Q\u0026amp;A partner and leave my notes alone entirely. I\u0026rsquo;ll figure that out as I go and try to focus on studying.\nThe SA Pro exam is end of June 2026. I\u0026rsquo;ll write up the Claude experiment afterwards.\n","permalink":"https://kashifnazir.com/blog/ai-for-aws-certs/","summary":"\u003cp\u003eI\u0026rsquo;ve held AWS certifications since 2019, starting with Cloud Practitioner and then working through Solutions Architect Associate, SysOps, Developer Associate, and Solutions Architect Professional within about a year (It was Covid year after all). I recertified the SA Pro in 2023 and passed the AI Practitioner in February this year. 
The SA Pro is due again, exam\u0026rsquo;s booked for end of June, so I\u0026rsquo;m back studying.\u003c/p\u003e\n\u003cp\u003eMy study method has been the same every time, which is watch a \u003ca href=\"https://www.udemy.com/user/stephane-maarek/\"\u003eStephane Maarek\u003c/a\u003e course on Udemy, make notes in OneNote as I go, grind practice exams. The notes are dense and compressed, topics separated by slashes, exam questions dropped in wherever they\u0026rsquo;re relevant. The formatting is all over the place because it\u0026rsquo;s written for speed of recall, not for anyone else to read. I\u0026rsquo;ve used this approach for every cert I\u0026rsquo;ve passed and never had a reason to change it.\u003c/p\u003e","title":"AI for AWS Certs"},{"content":"I\u0026rsquo;m recertifying my AWS Solutions Architect Professional cert for the second time right now, so when I decided to build a personal site the temptation to go straight to Route53, CloudFront, S3, and Terraform was real. I asked ChatGPT, Claude, and Gemini what they thought before starting, and while they disagreed on a few things they all said to use GitHub Pages and not overthink the hosting. Gemini included a cost comparison that had an EKS cluster as one of the options at £150-200/month, which is overkill for basically any website, but it helped make the point that GitHub Pages with a custom domain was the obvious starting point. I could have spent weeks on infrastructure before writing a single post, or I could just start writing.\nAll three agreed: start with GitHub Pages, document the ambitious plan, build it later.\nWhat I didn\u0026rsquo;t use WordPress, Squarespace, and Substack would have been quicker, but I wanted to understand the build myself even if it ended up being simple. Self-hosting on AWS from day one would have meant weeks of Terraform and cert automation before I\u0026rsquo;d got anything live. 
I also looked at some impressive portfolio sites and briefly wanted something like that, but the custom assets alone (illustrations, animations, 3D elements) would have taken longer than the infrastructure. I had Codex generate about ten homepage variants with different palettes and fonts to compare side by side, but it was Claude\u0026rsquo;s approach of asking me questions about what I actually liked that helped more. It was pretty clear early on that the site I\u0026rsquo;d get done in two weeks of evenings and weekends wouldn\u0026rsquo;t be the finished version, and I was fine with that.\nLusion\u0026#39;s portfolio — the kind of site I briefly wanted. Custom 3D elements, illustrations, and animations that would have taken longer than the infrastructure. Chat-first websites While looking into all of this I noticed a few companies have replaced their traditional website with a chat interface. Satisfi Labs rebuilt their site around what they call a \u0026ldquo;chatsite,\u0026rdquo; a two-column layout with nav on the left and AI chat on the right, with a toggle between chat and traditional views. Nearly 60% of Google searches now end without a click, and there\u0026rsquo;s a growing argument that page hierarchies are becoming less relevant when people show up with a question and expect an answer.\nsatisfilabs.com Satisfi Labs replaced their traditional site with a chat interface. The \u0026#39;Classic Site\u0026#39; toggle at the bottom left is the fallback. For a personal site the question is whether someone wanting to know what I think about multi-cloud architecture would rather browse posts or just ask. Right now it\u0026rsquo;s browse, because that\u0026rsquo;s what I\u0026rsquo;ve built. When I rebuild in a year I\u0026rsquo;ll be thinking about whether a conversational layer alongside the written content makes sense.\nDesigning for AI agents There\u0026rsquo;s a weirder version of that question: will a human even visit the site? 
AI agents already crawl the web to answer questions on behalf of users. If someone asks AI about something I\u0026rsquo;ve written, it fetches my site and summarises it, and that might be as far as it goes.\nIt reminded me of the early web, when people stuffed pages with white text on white backgrounds so search engine crawlers could read keywords that were invisible to humans. Google eventually got smart enough to penalise all of it and building for humans became the right approach. The modern crawlers are language models, and they don\u0026rsquo;t need hidden keywords, they need clean, structured, machine-readable content.\nThere\u0026rsquo;s already a proposed standard for this called llms.txt, a markdown file at your site root that gives AI models a structured summary of your content. Jeremy Howard proposed it in September 2024 and companies like Anthropic, Cursor, Vercel, and Stripe use it. The major LLM providers haven\u0026rsquo;t implemented automatic discovery of it yet, so the practical impact right now is debatable. But the file takes minutes to write and costs nothing, so I added one while researching this article.\nkashifnazir.com/llms.txt ```markdown # Kashif Nazir \u003e Senior Technical Architect writing about cloud architecture, \u003e platform modernisation, and learning in public. ## Blog Posts - [Building This Site](/blog/building-this-site/): Two weeks, two AI agents, zero web development experience. - [Why GitHub Pages Over AWS](/blog/why-github-pages-over-aws/): Deferring complexity and why it's the right first move. ... ## About - [About](/about/): Background, experience, and what this site is for. ``` The full file is at kashifnazir.com/llms.txt — a curated summary of the site for AI models. Hugo already does most of the work here without any extra effort. 
It generates static HTML with minimal JavaScript, and since a lot of AI crawlers don\u0026rsquo;t execute JavaScript at all, content behind client-side rendering is just invisible to them. A static site is actually easier for AI to read than something built on a heavier framework. There are a few other things worth setting up like JSON-LD structured data and making sure robots.txt allows AI user agents, but none of it takes long. It\u0026rsquo;s basically the opposite of the old white-text-keywords approach. Instead of hiding stuff for machines while making the page look good for humans, you\u0026rsquo;re making content clean enough that both can use it, which is a much easier problem.\nThe AWS stack I\u0026rsquo;m going to build Once I\u0026rsquo;ve been publishing for a while and have content worth migrating, the plan is to move from GitHub Pages to a self-hosted setup: Route53 for DNS (moving from Namecheap for programmatic control via Terraform), S3 for hosting the Hugo build output, and CloudFront in front for HTTPS with ACM, edge caching, and HTTP/2.\nThe planned AWS stack — deploy pipeline on the left (git push to main → GitHub Actions Hugo build → aws s3 sync to the hosting bucket → CloudFront invalidation), request path on the right (kashifnazir.com → Route 53 → CloudFront with ACM HTTPS, edge caching, and HTTP/2 → S3 origin), and infrastructure management underneath (Terraform with state in S3 + DynamoDB locking, CloudWatch alarms and metrics, AWS Budgets alerts targeting \u0026lt;£5/month, ACM certificate provisioning and auto-renewal). All provisioned via Terraform. The whole stack gets defined in Terraform with state in S3 and DynamoDB locking, so someone could reproduce it from scratch with a single terraform apply. 
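A minimal sketch of what that state setup could look like as a Terraform backend block — the bucket and table names here (example-tf-state, example-tf-locks) are hypothetical placeholders, not the real resources:

```hcl
terraform {
  backend "s3" {
    # Hypothetical S3 bucket holding the Terraform state file
    bucket         = "example-tf-state"
    key            = "kashifnazir-site/terraform.tfstate"
    region         = "eu-west-2"
    # Hypothetical DynamoDB table used for state locking
    dynamodb_table = "example-tf-locks"
    encrypt        = true
  }
}
```

With a backend like this in place, a fresh clone of the repo can run terraform init followed by terraform apply and pick up the shared state and lock table, which is what makes the reproduce-from-scratch claim work.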
CI/CD stays on GitHub Actions: push to main, Hugo builds, output syncs to S3, CloudFront invalidates. Should still deploy in under 60 seconds. Monitoring is CloudWatch alarms and AWS Budgets alerts, and the whole thing should cost under £5/month. I haven\u0026rsquo;t built any of this yet, and writing it out now is partly to think through the decisions before I\u0026rsquo;m actually doing it, and partly so there\u0026rsquo;s something to compare against when I do.\nWhat comes after If AI agents become the main way people find my content, the visual design matters less than the data model. Clean HTML, structured data, llms.txt, maybe an API endpoint that lets agents query content directly. If the chatsite idea keeps growing, maybe the rebuild includes a conversational layer alongside the written articles, not replacing them but giving people another way to find what\u0026rsquo;s in them. Or maybe it stays as static HTML on AWS instead of GitHub Pages and nothing else changes. I\u0026rsquo;ll make that call with a year of watching this stuff behind me.\nThis site runs on Hugo + PaperMod, deployed to GitHub Pages. The AWS migration plan above is the Year 1 capstone.\n","permalink":"https://kashifnazir.com/blog/architecture-unbuilt/","summary":"\u003cp\u003eI\u0026rsquo;m recertifying my AWS Solutions Architect Professional cert for the second time right now, so when I decided to build a personal site the temptation to go straight to Route53, CloudFront, S3, and Terraform was real. I asked ChatGPT, Claude, and Gemini what they thought before starting, and while they disagreed on a few things they all said to use GitHub Pages and not overthink the hosting. Gemini included a cost comparison that had an EKS cluster as one of the options at £150-200/month, which is overkill for basically any website, but it helped make the point that GitHub Pages with a custom domain was the obvious starting point. 
I could have spent weeks on infrastructure before writing a single post, or I could just start writing.\u003c/p\u003e","title":"The Architecture Unbuilt"},{"content":"I\u0026rsquo;m not a web developer I\u0026rsquo;m a senior technical architect and my day job is application compatibility, migration, and platform modernisation, figuring out why software breaks when you move it between platforms and fixing it. The tools I reach for are Sysinternals, WinDbg, and Process Hacker, not CSS and JavaScript. Building a website from scratch wasn\u0026rsquo;t exactly in my wheelhouse.\nI\u0026rsquo;ve wanted a personal site for years but kept putting it off because I didn\u0026rsquo;t want something that looked like it was built in 2003 on GeoCities (though I do miss the flame borders). I also didn\u0026rsquo;t want to just use Squarespace or WordPress because I wanted to understand the build. When I moved into a strategy role last year where thought leadership is actually part of the job, the timing finally made sense. AI tools had collapsed the barrier too. What would have taken me weeks of learning web development took about two weeks of evenings and weekends.\nBefore writing any code I gave the same brief to ChatGPT, Claude, and Gemini: build a professional site to document a learning journey in AI, AWS, Kubernetes, IaC, Python, and Linux. I wanted to see where they agreed and whether any of them would talk me out of a bad idea. All three independently said GitHub Pages. I have an AWS Solutions Architect Professional cert and the temptation to build a proper stack with Route53, CloudFront, S3, and Terraform was hard to resist, but they were right. 
Self-hosting a site that serves text is over-engineering for the sake of it, and I\u0026rsquo;d spend all my time maintaining infrastructure instead of actually writing.\nClaude suggested something I liked though: don\u0026rsquo;t build the AWS stack now, but write about what you would build and why you\u0026rsquo;re not building it yet, then do it for real later as a capstone project. Use the free thing now, document the ambitious thing, upgrade when you\u0026rsquo;re ready.\nSo the stack is Hugo, PaperMod theme, GitHub Pages, Namecheap for the domain, ProtonMail for email. The whole thing runs at under £4 a month.\nFirst pass — functional but generic.\nDark theme. Both looked like every other AI portfolio.\nRunning two AIs side by side Rather than picking one AI and going with it, I ran two: Anthropic\u0026rsquo;s Claude and OpenAI\u0026rsquo;s Codex. Same brief, same inputs, different branches in the same repo. I used Git worktrees to keep both active simultaneously, one worktree per branch, one AI per worktree.\nI mostly wanted to see how they approached the same problem differently, and worktrees meant I could switch between them without losing context. It also turned into an unexpectedly good way to learn Git. Managing two AI-driven branches at the same time taught me more about branching, merging, and rolling back than any tutorial ever did.\nClaude was more opinionated about design decisions and better at self-correcting when something wasn\u0026rsquo;t working. Codex had a tendency to get stuck in loops. I wanted both versions to have a feature where a data packet follows you down the homepage as you scroll, like a signal travelling through a circuit board. Codex hit issues with tracking the scroll position and spent hours going in circles trying different approaches. I eventually pointed it at Claude\u0026rsquo;s working implementation on the other branch just to get it unstuck. 
Once it could see a working approach it adapted and moved on, but without that nudge I don\u0026rsquo;t think it would have got there.\nCodex stuck in a loop trying to track scroll position for the data packet animation. Claude had its own problems though. At one point it hit its token limit mid-session, and when I asked it to continue it came back on the wrong worktree. It had silently switched to the Codex branch and started committing there. It took about six commits before I noticed. The changes weren\u0026rsquo;t catastrophic and I could roll them back, but now I verify the worktree and branch as the first step after every rate limit or session restart.\nThe worktree problem in one picture. Claude switched branches after a rate limit and committed to the wrong one.\nThe generic AI look The early output from both AIs looked like every other AI-generated website on the internet. Same gradients, same hero layout, same generic tech portfolio aesthetic. They looked fine but they also looked like everything else.\nCodex\u0026#39;s early output — polished but generic. Claude\u0026#39;s first pass — same brief, same template energy. I kept trying to push for something more distinctive but the results stayed generic. At one point Claude was fairly blunt about it: if you don\u0026rsquo;t want the site to look like AI made it, you need to start making decisions yourself, because AI will default to what it\u0026rsquo;s seen on the internet, which is everything. So I stopped asking the AI to \u0026ldquo;make it look better\u0026rdquo; and started getting it to ask me questions instead. What colours do I actually like? What design motifs mean something to me? What sites do I admire and why?\nOne thing that worked well was getting Codex to generate ten different versions of the homepage with different colour palettes, fonts, and layouts, each deployed as a page I could visit and compare. 
It\u0026rsquo;s much easier to rule things out when you can see them all at once, and doing that by hand would have required me to actually be a web developer.\nTen colour and layout variants generated by Codex — easier to rule things out when you can see them all at once. Comparing those side by side is how I landed on a warm palette (cream, terracotta, warm darks) instead of the cold tech blue that AI defaults to. It\u0026rsquo;s also how I got to the circuit board spine, a PCB-style trace that threads through the homepage connecting sections like nodes on a board. Not because AI suggested it, but because when I was asked what visual metaphors resonated with me, I kept coming back to circuits and signal paths. That felt like mine, not a template.\nWhat shipped The site is at kashifnazir.com, built with Hugo and PaperMod and deployed to GitHub Pages. The homepage has a circuit spine running through it with scroll-driven animations powered by GSAP, custom typography, and a nav with a monogram that draws itself on load. First-time visitors get a loading screen, and there are editorial blog and project layouts plus a 404 page with a \u0026ldquo;packet lost\u0026rdquo; message. The CSS ended up at about 2,200 lines and the JavaScript handles the circuit animations, scroll triggers, page transitions, and a bunch of interaction details I never would have thought to add myself.\nIt\u0026rsquo;s not going to win any design awards but it looks like something I\u0026rsquo;d actually want to visit. Going from nothing to a site I\u0026rsquo;m happy to put my name on in two weeks of evenings and weekends is a decent result for someone whose job has nothing to do with web development.\nkashifnazir.com The finished homepage — warm palette, circuit spine, editorial layout. Looking back Asking three AIs the same question before I started took about twenty minutes and saved me from over-engineering the hosting. 
I\u0026rsquo;d do that again for any project where I\u0026rsquo;m not sure of the approach.\nThe design only started looking like mine once I stopped giving the AI instructions and started answering its questions. Vague briefs got generic results. Having to actually say what I liked and why forced decisions I wouldn\u0026rsquo;t have made otherwise.\nGit knowledge matters more than I expected when you\u0026rsquo;re working with AI agents. I ended up on the wrong branch, committed to the wrong worktree, and had to roll things back. Knowing how to fix that without panicking made those moments annoying instead of catastrophic.\nThe site launched with two blog posts and no project screenshots. I could have waited another month to have everything polished, but I know myself well enough to know I\u0026rsquo;d have found ten more reasons not to.\nThis site is built with Hugo + PaperMod, deployed on GitHub Pages. The source is at Github.com/thekashifnazir. If you want to see what the Codex version looked like, that\u0026rsquo;s a story for another post.\n","permalink":"https://kashifnazir.com/blog/building-this-site/","summary":"\u003ch2 id=\"im-not-a-web-developer\"\u003eI\u0026rsquo;m not a web developer\u003c/h2\u003e\n\u003cp\u003eI\u0026rsquo;m a senior technical architect and my day job is application compatibility, migration, and platform modernisation, figuring out why software breaks when you move it between platforms and fixing it. The tools I reach for are Sysinternals, WinDbg, and Process Hacker, not CSS and JavaScript. Building a website from scratch wasn\u0026rsquo;t exactly in my wheelhouse.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve wanted a personal site for years but kept putting it off because I didn\u0026rsquo;t want something that looked like it was built in 2003 on GeoCities (though I do miss the flame borders). I also didn\u0026rsquo;t want to just use Squarespace or WordPress because I wanted to understand the build. 
When I moved into a strategy role last year where thought leadership is actually part of the job, the timing finally made sense. AI tools had collapsed the barrier too. What would have taken me weeks of learning web development took about two weeks of evenings and weekends.\u003c/p\u003e","title":"Building This Site"},{"content":"kashifnazir.com is the first public build of my personal site: Hugo, PaperMod, GitHub Pages, and a lot of iterative work with Claude and Codex to get it from generic template to something that actually feels like mine.\nThe project is really two things at once: a live site and a record of how it was built. That includes the design direction, the circuit motif, the decision to launch on GitHub Pages first, and the AWS architecture I deliberately chose not to build yet.\nIf you want the full write-up, start with these two articles:\nBuilding This Site covers the actual build, the two-AI workflow, and how the visual direction came together. The Architecture Unbuilt covers the hosting decision, why GitHub Pages won for launch, and the AWS stack planned for a later migration. This page stays here as the project anchor for the site itself. Future projects will sit alongside it as the portfolio grows.\n","permalink":"https://kashifnazir.com/projects/site-build-log/","summary":"\u003cp\u003e\u003ccode\u003ekashifnazir.com\u003c/code\u003e is the first public build of my personal site: Hugo, PaperMod, GitHub Pages, and a lot of iterative work with Claude and Codex to get it from generic template to something that actually feels like mine.\u003c/p\u003e\n\u003cp\u003eThe project is really two things at once: a live site and a record of how it was built. 
That includes the design direction, the circuit motif, the decision to launch on GitHub Pages first, and the AWS architecture I deliberately chose not to build yet.\u003c/p\u003e","title":"Building kashifnazir.com"},{"content":"What I do I sit between strategy and engineering, shaping architecture decisions, evaluating emerging tech, and still getting my hands dirty when the problem is interesting enough.\nI got here the long way round over about a decade through customer service, helpdesk, second-line support, and EUC before getting deeper into application packaging and cloud migration. I moved into managing teams and delivery, and now I\u0026rsquo;m more focused on R\u0026amp;D and technical strategy across multiple product lines, including where AI fits into what we build. I still work closely with pre-sales on solution design, because that\u0026rsquo;s where the interesting problems start.\nThis site is where I write about what I\u0026rsquo;m learning and building across cloud, infrastructure, and AI, plus whatever else I find interesting.\nCurrent focus AI evaluation and adoption AWS architecture and migration planning Kubernetes orchestration and platform workflows Infrastructure as Code with Terraform Python for automation and tooling Linux systems and networking Beyond the terminal I\u0026rsquo;ve lived abroad twice, with a year in the USA and a year in Brussels, and travelled properly well beyond that. When I visit a new city food and culture are the two things I\u0026rsquo;ll always make time for.\nI cook a lot and it\u0026rsquo;s usually something from my gran or mom\u0026rsquo;s recipes, whatever cuisine I\u0026rsquo;ve most recently fallen in love with, or just whatever needs using up.\nI\u0026rsquo;ve been training martial arts since my twenties, starting with Muay Thai, and it keeps me grounded. I picked up grappling while I was in Brussels and I\u0026rsquo;m nearly two years into BJJ now, still working on the basics. 
It uses the same part of my brain as troubleshooting: you\u0026rsquo;re stuck, nothing obvious is working, and you have to stay calm and work through options. Except the consequences are more immediate\u0026hellip;\nWhy this site I\u0026rsquo;ve wanted a proper website for years but the time investment never made sense. A move into a strategy role and AI tools collapsing the barrier changed that. After a decade of solving interesting problems without writing any of it down, it\u0026rsquo;s time to start.\n","permalink":"https://kashifnazir.com/about/","summary":"\u003ch1 id=\"what-i-do\"\u003eWhat I do\u003c/h1\u003e\n\u003cp\u003eI sit between strategy and engineering, shaping architecture decisions, evaluating emerging tech, and still getting my hands dirty when the problem is interesting enough.\u003c/p\u003e\n\u003cp\u003eI got here the long way round over about a decade through customer service, helpdesk, second-line support, and EUC before getting deeper into application packaging and cloud migration. I moved into managing teams and delivery, and now I\u0026rsquo;m more focused on R\u0026amp;D and technical strategy across multiple product lines, including where AI fits into what we build. I still work closely with pre-sales on solution design, because that\u0026rsquo;s where the interesting problems start.\u003c/p\u003e","title":"About"}]