over 2 years ago - rocket2guns
RESOLVED AT 5:34AM.

SOME DIFFICULTY RESTARTING A SERVICE AFTER MAINTENANCE, BUT FIXED. APOLOGIES FOR THE INCONVENIENCE.


Scheduled Maintenance is **estimated** to begin at 4:45AM NZST (EDITED) (roughly 2 hours from now), and we **estimate** it will last a minimum of 40 minutes.

We are preparing everything we need so that the planned outage can happen and run quickly. Please note this is the first time the service will be taken down, and we will be implementing important changes based on the existing load and previous outages. The service runs on top-of-the-line hardware, the best available, and we have barely been stressing its raw processing power, but we need to make some changes to ensure timely responses. This is a very complicated update, involving our backend (the database and its companion services, such as the gateway and monitoring) as well as the client itself. It also includes content updates. We will do our best to complete the maintenance as quickly as possible, but as this is the first time we will be doing it, it is possible it could take longer.

How do we prepare for this?
We have been testing the deployment of these changes on an exact replica of all our backend systems: the database as well as the variety of service instances that "orbit" the database. Think of it like astronauts doing dry runs on Earth before going into orbit to repair a satellite. While the dry runs are helpful, it's very stressful to do this work on a live service, even more so when it is the first time and we have touched nearly every core service with these changes.

What happens during it?
First we work through the pre-shutdown checklists we have written down. Then we take the companion systems that orbit the database down, which will cause connections to be lost. We then wait for those services to finish what they were doing, and the database stops servicing requests. It's now at rest. Until now it has been servicing around 800 concurrent requests at any single moment in time - it's been quite busy.
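
For the technically curious, the drain sequence looks roughly like the sketch below. The service names and helper functions are made up for illustration; they are not our real tooling.

```python
import time

# Illustrative names only - stand-ins for the companion services that "orbit" the database.
ORBITING_SERVICES = ["gateway", "monitoring", "matchmaker"]

def drain_and_quiesce(stop_service, inflight_requests):
    """Stop the orbiting services, then wait for the database to go quiet."""
    for name in ORBITING_SERVICES:
        stop_service(name)  # this is the point where player connections are lost

    # Wait for the in-flight work (roughly 800 concurrent requests at peak) to finish.
    remaining = inflight_requests()
    while remaining > 0:
        print(f"waiting on {remaining} in-flight requests...")
        time.sleep(5)
        remaining = inflight_requests()

    print("database is at rest - safe to back up")
```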

Our final step is to verify that everything is silent. We take a special kind of backup at this point, then begin the delicate work of going through our update instructions. We have rehearsed this, but ultimately there's nothing quite like actually going and doing it. Breath will be held. Our process involves little verification steps, and we've also practiced what to do at each step if that step fails.
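
In spirit, those little verification steps work something like this sketch. The step names and functions are invented for the example; the real runbook is much longer.

```python
def run_update(steps):
    """Apply update steps one at a time; steps is a list of (name, apply_fn, verify_fn)."""
    completed = []
    for name, apply_fn, verify_fn in steps:
        apply_fn()           # make the change
        if not verify_fn():  # check it landed as expected
            print(f"step '{name}' failed verification - stopping here, rollback plan kicks in")
            return completed, False
        completed.append(name)
        print(f"step '{name}' applied and verified")
    return completed, True
```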

Once the updates are applied, the orbiting systems will be turned back on, and we will have preloaded our clients with the new version of the game. This means we will be able to be the first back into the game and make a final verification that the production environment, orbiting systems, database, and client are all in sync and working.

What happens if final verification fails?
The orbiting services will be disconnected immediately. This would kill any connections, although it is very unlikely anyone would have the update in the short time it takes us to check. We would then restore the backup we made and roll back all the orbiting services and the game client version. This would obviously not be a fun time for anyone.
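
Again purely as an illustration, the rollback path is roughly the sketch below; every function name here is a placeholder rather than our actual tooling.

```python
def roll_back(stop_service, restore_backup, deploy_version, services, previous_version):
    """Disconnect everything, restore the pre-maintenance snapshot, and redeploy the old versions."""
    for name in services:
        stop_service(name)  # kills any remaining connections straight away

    restore_backup()  # the snapshot taken while the database was at rest

    for name in services:
        deploy_version(name, previous_version)
    deploy_version("game-client", previous_version)  # players go back to the old build
```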

How will we know what's happening?
We will try to keep you posted, but I will be doing many of the changes myself - so this time I'm going to be the engineer who needs some space. We will have some of our staff available to give updates as we can. Updates will be given on Steam and on Discord.
over 2 years ago - Heightmare
Originally posted by AdD♛K♛ng♛C♛sper♛ ۞: Hey guys,

Good to support some Kiwis!
I have a question though.

Can we get some KIWI unique animals placed into the map? I dont see why any lore would prevent this and it would give the game some KIWI elements.

Please do :D

Who knows, maybe you'll see something native to NZ appearing in the near future ;)