<p><strong>About KatRisk</strong></p><p>KatRisk is a leading provider of catastrophe risk modeling solutions, dedicated to helping businesses and organizations understand, manage, and mitigate the risks associated with natural disasters. Our innovative technology and expertise enable clients to make informed decisions, optimize their risk management strategies, and safeguard their assets and operations against potential catastrophes.</p><p><strong>Role Overview</strong></p><p>SpatialKat is known for its performance and reliability. We are looking to build on that foundation and transform this world-class modeling engine into a scalable, cloud-native platform while preserving its industry-leading performance.</p><p>We’re seeking a hands-on Engineering Leader with deep C++ expertise to own the reliability, performance, and evolution of our catastrophe risk assessment platform. You’ll lead and mentor engineers while contributing code, partnering closely with Data Engineering, Science, and Product to deliver high-impact features and modernize parts of our stack.</p><p>This platform ingests historical and meteorological data, runs distributed peril-specific catastrophe risk simulations, and supports map-based visualization and advanced data queries used by enterprise customers.</p><p>You’ll lead a small but highly skilled team, work cross-functionally with Product, Science, and Data Engineering, and drive a strategic transformation: from a high-performance monolith to a scalable, cloud-native system.</p><p><strong>Responsibilities</strong></p><ul><li>Own &amp; evolve the core C++ modeling system: Maintain, refactor, and enhance an established codebase that drives large-scale peril-specific loss simulations with complex financial modeling.</li><li>Implement new catastrophe risk and financial models as they are designed.</li><li>Modernize the stack: Refactor legacy components toward a modular, containerized architecture with improved deployment automation.</li><li>Data-driven
simulation accuracy: Work with historical and meteorological datasets to ensure scientifically sound, reproducible results in partnership with our Science team.</li><li>Performance &amp; reliability at scale: Profile and optimize an I/O-intensive architecture (distributed processes, partial in-memory reads, LZ4 binary outputs of ~500 MB/core) to meet strict enterprise SLAs, both in deployments on client infrastructure and as a cloud-based SaaS solution.</li><li>Guide integration: Integrate APIs and visualization components to support new user experiences.</li><li>Distributed compute orchestration: Enhance the server-side job scheduler, proxies, and API daemon that coordinate asynchronous batch processing for API clients (including the web front end).</li><li>Modernize the web layer: Lead the transition from R Shiny components to a more traditional web stack (IIS/Apache, JavaScript, HTML, CSS) while keeping the system loosely coupled.</li><li>Database stewardship: Guide data modeling and performance tuning on our SQL Server backend; ensure data quality, lineage, and operational resilience.</li><li>Engineering leadership: Set technical direction, establish coding standards and CI/CD practices, mentor engineers, and drive pragmatic execution across a cross-disciplinary team.</li><li>Security &amp; reliability: Strengthen authentication (pluggable approach), observability, and incident response; champion testing and automation throughout the stack.</li><li>Customer impact: Collaborate with Product and customer-facing teams to translate enterprise needs into roadmaps, features, client-specific consulting projects, and measurable outcomes.</li></ul><p><strong>Tech Stack</strong></p><ul><li>Core: C++</li><li>Scripting/Modeling: R (including legacy R Shiny components), Batch/Shell scripting</li><li>Web: JavaScript, HTML, CSS; Nginx, IIS/Apache (migration path)</li><li>Data: Microsoft SQL Server, Git repositories</li><li>Systems: Windows, Linux, AWS, and Azure
environments</li><li>Distributed processing: Job scheduler, API daemon (controller), per-engine proxies, multi-process distribution via system calls</li><li>Formats/Compression: LZ4 binary outputs for high-throughput I/O</li><li>Visualization: Map-based geospatial views and query tooling</li></ul><p><strong>Requirements</strong></p><p>Required</p><ul><li>Strong proficiency in C++ for scientific/engineering or high-performance systems.</li><li>Experience working with large datasets and performance-sensitive pipelines (I/O-intensive workflows, compression, concurrency).</li><li>Solid debugging, profiling, and optimization skills across Linux/Windows environments.</li><li>Demonstrated ability to lead or mentor a small engineering team.</li><li>Ability to collaborate effectively with Data Engineering, Science, and Product teams; clear written and verbal communication.</li></ul><p>Preferred</p><ul><li>Experience with risk or simulation systems.</li><li>Familiarity with cloud computing, containerization, or distributed systems.</li><li>Experience modernizing or migrating legacy systems.</li><li>Familiarity with geospatial data/visualization and map-based UIs.</li><li>Experience with SQL Server performance tuning and data modeling.</li><li>Exposure to R and/or migrating R Shiny workloads to modern web stacks.</li><li>Distributed systems experience (job scheduling, batch processing, multi-process orchestration, API-driven controllers) in Windows and Linux environments, across both cloud and client-side infrastructure.</li><li>Experience building enterprise-grade systems with strong authentication, observability, and SLAs.</li><li>Knowledge of catastrophe modeling concepts and understanding of insurance-based financial structures.</li></ul>