MatthewKilp
Owner

IGN: MatthewKilp
By MatthewKilp » 3 months ago

As you may be aware, at around 16:30 GMT today our server went offline. I was online at the time and thought little of it, putting the timeout error down to my internet (which isn't the most reliable thing). Seven minutes later I was alerted that the server was offline and restarted the BungeeCord proxy. This allowed players to join again, however I was then made aware that certain features - all of them database-driven - were not working. I then accessed the database and was presented with this: https://i.imgur.com/bXJG0hG.png

By this stage it had become clear that our database server had been compromised in some form or another. I took the decision to shut down our database entirely to prevent any remote backups of our data from being modified, and, once the server had been re-installed from scratch to ensure any security loopholes that may have been overlooked were no longer present, restored the backups from this morning. This is the main reason the outage dragged on as long as it did: installing the required software took some time, as did importing the several gigabytes of database files back onto the server.

To make it clear: no personal data was involved in this breach. All emails and website account passwords are stored on our web server, and all passwords are hashed with a strong algorithm, so that in the unlikely event they were exposed, they would take hundreds of years to crack (provided they were strong passwords). The Bitcoin ransom message makes it clear that the attackers claim to have a copy of the data, although it is unlikely they are going through it. The data compromised is limited to chat/command/message logs, punishment logs, playtime and other general stats. Again, I want to make it clear that no passwords other than mine have been compromised, and unless you shared passwords or personal information in chat, you have nothing to worry about.
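For anyone curious what "hashed with a strong algorithm" means in practice, here is a minimal sketch of salted, slow password hashing. It uses PBKDF2 from Python's standard library purely as an illustration; the algorithm, iteration count and function names are assumptions for the example, not a description of our actual setup.

```python
import hashlib
import hmac
import os

# Illustrative only: PBKDF2-HMAC-SHA256 with a per-account random salt.
ITERATIONS = 600_000  # assumed work factor; higher = slower to brute-force

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store both salt and digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, expected)
```

The point is that even if the hashes leaked, each guess costs the attacker hundreds of thousands of hash operations, and the unique salt means precomputed tables are useless.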

As of 18:00 GMT, our servers were back up and fully reachable, taking the total downtime to just under 1 hour 30 minutes.

Going forward, we have learnt from this incident and are taking action to prevent anything like it happening again. First, we are reviewing all aspects of our server security and tightening our security measures; we have already changed every password that could have been used to gain access to the database server involved in today's incident. Second, we plan to increase the frequency of our database backups from once to twice daily, so that a future recovery results in less data loss (particularly as more data may be stored in our databases over time). Finally, we plan to add a pull-style backup server to our infrastructure to keep a secondary, offsite copy of our data that is isolated from our network, so that even if one of our servers is compromised, our backups cannot be removed or tampered with. A rough sketch of the idea is below.
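To illustrate what "pull-style" means, here is a small sketch assuming a MySQL-style database and SSH access; the hostname, account and paths are placeholders, not our real infrastructure. The isolated backup box initiates the connection and pulls the dump, so the game and database servers hold no credentials for the backup machine and cannot reach its copies.

```python
#!/usr/bin/env python3
"""Runs on the isolated backup box: connects OUT to the database host,
pulls a compressed dump over SSH, and stores it locally. The database
host has no credentials for this machine, so a compromised server
cannot delete or alter the backups. Hostnames/paths are placeholders."""
import datetime
import pathlib
import subprocess

DB_HOST = "backup-reader@db.example.net"   # hypothetical read-only account
BACKUP_DIR = pathlib.Path("/srv/backups/mysql")

def pull_backup() -> pathlib.Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d_%H%M")
    target = BACKUP_DIR / f"all-databases_{stamp}.sql.gz"
    # mysqldump runs on the remote host (credentials live there, e.g. ~/.my.cnf);
    # the compressed dump streams back over the SSH connection.
    cmd = ["ssh", DB_HOST,
           "mysqldump --single-transaction --all-databases | gzip -c"]
    with target.open("wb") as out:
        subprocess.run(cmd, stdout=out, check=True)
    return target

if __name__ == "__main__":
    print(f"Backup written to {pull_backup()}")
```

A job like this would simply run on a schedule on the backup box; because nothing on the main network can log in to it, an attacker who gains control of a game or database server still cannot touch the offsite copies.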

If you have any concerns about this, or want more information, please contact me, and I'll answer as many of your questions as I can.
