This talk reflects on the lessons I’ve learnt from spending the last 17 years (and counting) providing infrastructure to the engineering communities at ARM Ltd.
ARM’s engineering teams span a wide variety of disciplines to produce, enable and support its products. This, in turn, places varied demands on the internal infrastructure that supports them: from large HPC clusters that have been used in much the same way for 20+ years, through weird and wacky custom pieces of hardware, to the modern infrastructure required for efficient software development.
The talk will discuss some of the challenges of providing and evolving the internal infrastructure needed for ARM to function, and reflect on changes resulting from more recent enablers such as cloud computing and home working.
I started my career as an EDA software engineer, writing RTL and gate-level simulators, then joined ARM in 1999 to focus on engineering productivity. I’m currently responsible for the overall architecture of our engineering platform, as well as leading exploration into future innovations and evolutions such as cloud-based engineering, big data workflows, and infrastructure as code. I have a particular interest in all forms of workflow orchestration (how we allocate and run tasks on our compute resources), and in improving our capabilities in data capture, processing and visualisation (storytelling).