Take a look behind the scenes with our infrastructure team!

Since this is all about infrastructure, there’ll be a bit of tech-jargon to deal with, but we’ll do our best to keep things simple and straightforward. Let’s get started!
What is infrastructure maintenance and why is it important?
Infrastructure maintenance is the (we reckon) unfairly maligned cousin of Game updates. It refers to the upkeep, patching, optimisation, and scaling of the technical foundations that power the worlds of Gielinor!
Infrastructure maintenance includes activities like:
- Keeping the world spinning: Hardware and software maintenance lets us replace aging kit and patch vulnerabilities to keep the gremlins out, or migrate services to more modern, scalable setups.
- Protecting your items: Tuning our databases to make sure items like your Max cape are stored resiliently and can be retrieved quickly.
- Smooth and efficient clicks: Maintenance on the network makes sure your frantic Inferno run mouse clicks reach us as smoothly and efficiently as possible.
- Issue detection and resolution: Improvements to monitoring and automation ensure we can detect and resolve issues before players even notice them.
- Feeding the hamsters: Otherwise they have a tendency to go for our eyes!
There’s a Dan Simmons quote about entropy which we shan’t repeat here – but the gist of it is that all systems eventually decay, no matter how well built they are. Just as roads need roadworks, our infrastructure needs ongoing maintenance to stay functional. Our work keeps our games stable, fast, and secure, and ideally it does all that without you lot even noticing.
How does infrastructure maintenance benefit players?
We’d like to share some specific instances when infrastructure maintenance has benefited you lot in recent years.
We’re sure you remember the Login Lockout situation. It’s been very important to all of us that these types of situations are prevented as far as possible, which is exactly why one of the key areas we’ve been focusing on is improving our backup and continuous data protection capabilities, to help mitigate the risk of anything like that happening again.
Here are a few more examples:
- Reducing the need for offline maintenance: Yes, our game worlds used to run on physical machines. Moving them onto virtual machines lets us live-migrate them between hosts, which lessens the need for the game to come offline.
- Top tick performance: Unlike the parasitic bugs of the world, ticks are extremely important here, so we’ve upgraded our server hardware to be optimised for accurate ticks.
- Improved stability: Game Worlds have a real-time-like execution model, so we’ve switched to a pauseless garbage collector — yes, that’s a real thing — via tuned JVMs (Java Virtual Machines), which helps smooth out tick performance (there’s a small illustrative sketch just after this list).
- Increased reliability with Jagex Accounts: Although Jagex Accounts are still a fairly new system, we’ve already made improvements by deploying them onto modern, scalable Amazon Web Services infrastructure: EKS (Elastic Kubernetes Service) and RDS (Relational Database Service). This lets us deploy, manage, and scale containerised applications without the complexity of managing the underlying infrastructure ourselves, which helps massively!
- Mitigating downtime: Recently we have been migrating from an aging DC (Data Centre) to a new Tier 3 DC, which gives us additional reliability and helps mitigate the risk of power incidents.
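To make that garbage collector point a little more concrete, here’s a minimal sketch of how you might check which collector a JVM is actually running. The flags in the comments are standard, publicly documented OpenJDK options (ZGC and Shenandoah are well-known low-pause collectors); our real configuration isn’t something we publish, so treat this purely as an illustration rather than our actual setup.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Illustration only: prints which garbage collectors the current JVM is using,
// so you can confirm a low-pause collector is actually active.
// Low-pause collectors are enabled with standard OpenJDK flags such as:
//   java -XX:+UseZGC ...          (ZGC)
//   java -XX:+UseShenandoahGC ... (Shenandoah)
// This is NOT our real tuning - just publicly documented JVM options.
public class GcCheck {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("Collector: %s, collections: %d, accumulated time: %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```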
We’re always thinking about the future of our infrastructure, and learning wherever we can. For example, the team working on the now-defunct Project Zanaris did some fantastic work that will help us modernise and migrate more systems into AWS, reducing the need for offline maintenance.
As we said earlier, ticks are king here, so we’re looking to change our virtualisation platform for Game Worlds, moving from a co-scheduling CPU scheduler (ESXi) to an asynchronous one (Linux CFS/PreemptRT tuning under KVM). There’s a lot of technical wording there, but the key takeaway is that this will help reduce missed game ticks under high load.
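If you’re curious what a ‘missed tick’ actually looks like, here’s a hypothetical little sketch – not our engine code, and the names are made up – that runs a 600ms tick loop (the familiar RuneScape tick length) and reports how late each tick fires. On a contended host that drift creeps up, and that’s exactly what the scheduler and garbage collector work above is trying to keep near zero.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical illustration, not engine code: runs a 600 ms "game tick" loop
// and reports how late each tick fires. CPU contention, scheduler hiccups and
// GC pauses all show up here as growing drift, which players feel as lag.
public class TickDrift {
    private static final long TICK_MS = 600; // RuneScape's game tick length

    public static void main(String[] args) {
        final long start = System.nanoTime();
        ScheduledExecutorService ticker = Executors.newSingleThreadScheduledExecutor();
        ticker.scheduleAtFixedRate(new Runnable() {
            private long tickNumber = 0;

            @Override
            public void run() {
                long expectedMs = tickNumber * TICK_MS;
                long actualMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("tick %d drift: %d ms%n", tickNumber, actualMs - expectedMs);
                tickNumber++;
            }
        }, 0, TICK_MS, TimeUnit.MILLISECONDS); // runs until you stop it (Ctrl+C)
    }
}
```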
Common Complaints
Downtime is great for wrapping up chores and actually getting work done, but we know you’d rather be back to second-screening your skills or shaving a second off your ToB time. While our work is important for the long-term health of the game, we want to be open with you about some of the most common pain points we’ve seen when the game comes down.
Why does infrastructure maintenance require downtime?
You might be surprised to know that it generally doesn’t. Most of our infrastructure maintenance activities are achieved without any downtime at all.
However, there are certain activities – more than we would like – which cannot be achieved while keeping our games online. In part, this is the nature of working on fantastic games with a 24-year history; some software components have been with us for a long time and are not as horizontally scalable or dynamic as we would like. In some cases, this precludes approaches like blue-green deployments (seamless switching between two versions) or rolling upgrades (updating parts of a system live, piece by piece).
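For the curious, here’s a toy sketch of what ‘blue-green’ means for a stateless service: two versions run side by side, and traffic is switched between them atomically so nobody notices the swap. Everything here is invented for illustration; the whole point of the paragraph above is that some of our older, stateful components can’t be swapped like this, which is when downtime comes in.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy blue-green switch for a stateless service. All names are invented for
// illustration; real deployments also drain in-flight requests from the old
// version before retiring it, which this sketch skips.
public class BlueGreenSwitch {
    interface Backend {
        String handle(String request);
    }

    private final AtomicReference<Backend> active = new AtomicReference<>();

    public BlueGreenSwitch(Backend initial) {
        active.set(initial);
    }

    // Every new request goes to whichever version is currently active.
    public String route(String request) {
        return active.get().handle(request);
    }

    // Atomically cut traffic over to the new version - no downtime for a
    // stateless service, but not an option for components holding live state.
    public void cutOver(Backend next) {
        active.set(next);
    }

    public static void main(String[] args) {
        Backend blue = req -> "blue handled " + req;
        Backend green = req -> "green handled " + req;

        BlueGreenSwitch router = new BlueGreenSwitch(blue);
        System.out.println(router.route("login")); // served by blue
        router.cutOver(green);                     // seamless switch
        System.out.println(router.route("login")); // served by green
    }
}
```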
This doesn’t mean we’re complacent about downtime. We’re always trying to ensure that our updates and changes impact the fewest people for the shortest amount of time.
Why does it take sooooo long? #letmein
Look, we all want Nan out of that cage as quickly as possible.
When we schedule downtime for infrastructure maintenance rather than doing the work online behind the scenes, it’s usually because we’re working on the parts of the infrastructure involved with state persistence. The two main areas here are billing and entitlements and, most importantly for you, player saves.
When Jagex performs significant works on these systems, some of which store data in Jagex’s own proprietary formats, our overriding focus is not on speed but accuracy. Most of the maintenance window is taken up validating that player saves and the game state are correct before opening the games back up to players.
Nobody likes a rollback, and part of the reason downtime takes so long is that we’re doing our best to make sure rollbacks are never required due to infrastructure work.
Regardless, for each piece of key maintenance we perform, especially when it involves downtime, the team always comes together for a retrospective review to see what we can improve on next time!
Why is infrastructure maintenance downtime typically global? Why aren’t maintenance activities scheduled regionally?
OSRS and RuneScape are fairly unusual in that our players are not region-locked. A consequence of this is that game state and other backend services are global constructs. When we perform disruptive maintenance on those global backend services, global downtime is required.
A lot of the region-specific maintenance we do is already transparent to players. We aim to do more things on a per-region basis where possible.
Our maintenance is normally timed between 08:00 UTC and 14:00 UTC, which matches our working hours here in the UK. We know it’s frustrating for those of you in far-flung time zones, but these hours mean we have access to the whole team to help when emergencies arise.
We have looked at other potential time windows so we can occasionally shift the impact between demographics, but this is an area that still needs further discussion.
That’s a wrap! As always, you can keep up-to-date on upcoming maintenance windows through our socials, Player Support articles or the Game Status page.
We hope this gives you some better insight into our continued efforts to improve your experience and why it’s sometimes necessary to put the world on snooze. If you enjoyed this style of newspost, then let us know! The team are really keen to share more potato peelings in future “Spudworks” posts.
We appreciate everyone for taking the time to read. May RNG be ever on your side!

Mods Kraken, Vxp, Bash, Maniac, Vallcore, Haydon, Cky, Qwert, Drax, M0iqp, Ibex, Adad, Roman … and 🐹
The Infrastructure J-Mods