Update 1: The server and all services are back online. Thank you for your patience.
We are performing emergency maintenance on this server, which will make most or all services on it inaccessible.
Affecting Server - Darwin
UPDATE 2: We have completed our checks and all issues are resolved. We will now close this incident. Thank you again for your patience.
UPDATE 1: We have resolved the main issues with the PHP installs by reinstalling the appropriate versions. We are now running our final checks and, with that, lowering the priority from Critical to Medium; this reflects impact status only, and our team is still actively working through the checks. Thank you for your patience.
We found some of the PHP versions on the Darwin DirectAdmin server were corrupted and we are working to resolve this as soon as possible.
If you see a 403 error on your PHP sites, it will be related to this issue - we are sorry for the inconvenience caused.
Affecting Server - Greyhound
UPDATE 2: Tests have been completed and all services are running as expected. This maintenance will now be set as completed. Thank you.
UPDATE 1: The upgrade has been completed and we are now running a series of spot tests to ensure all services are stable.
We will be upgrading the Lucee server Greyhound on the 1st of October at 6PM (UK/London time) from version 5.4.3.16 to 5.4.6.9.
Downtime is expected to be minimal, as this patch version requires only a standard Lucee restart.
NOTE: This is the last version of Lucee before we move to version 6 which will be a major upgrade. This is being planned and tested internally and once we have an update this will be shared with all Lucee customers.
Affecting Server - Blackwell
UPDATE: A solution has been put in place, but we are moving forward with the migration from our older nodes to our new servers.
We are currently looking into a disk issue on the Blackwell node that is causing some services such as mail to fail. We are actively working on this.
Affecting System - Dell Chassis
UPDATE 4: Our upgraded PSUs have been running for the past 4 days without any incidents. We will continue to monitor closely but are marking this issue as resolved. Thank you, everyone, for your patience while we worked through this issue.
UPDATE 3: New PSUs have been installed in our Dell blade chassis, which now uses 3000W Titanium-grade PSUs with a new redundancy configuration. We will be performing further minor updates over the next 48 hours and heavily monitoring the situation on this chassis.
UPDATE 2: We have identified a possible cause of the issues detected on our Dell chassis and have put a solution in place, which we are closely monitoring.
UPDATE 1: While monitoring over the past 24 hours we have once again detected an issue on one of our Dell blade chassis, which our DC engineers are working on now.
We are currently investigating an issue with one of our Dell chassis and the nodes within it, which appears to be a network-level issue. We will update this status report as soon as we can.
UPDATE: Migration has been successfully completed and the Newton server will be shut down.
Due to the decommissioning of our Singapore hosting location, we will be performing a standard cPanel migration of all accounts hosted on the node 'Newton' on Wednesday the 24th of April 2024 at 9PM UTC to new servers in the US (West Coast, Los Angeles).
Domain/DNS Actions Required by Customers:
Expected Downtime: Minimal due to cPanel direct transfer between servers
Reason for move:
Our 2024 plans involve moving away from our Singapore hosting location and consolidating onto our brand new US West Coast service provider, which has great connections to Asia and Oceania. Customers will benefit from faster servers using a range of high-performance AMD & Intel CPUs and the latest software, as well as a mix of Enterprise SSD and NVMe hard drives (RAID protected). Across our entire standard hosting range we will also be providing additional included backups within JetBackup, as well as an upgrade to version 5 of that software.
If you prefer to move to our UK servers, please contact a member of our sales team who will be able to arrange that for you.
Recommendations for Customers:
With any hardware change or process, we strongly recommend taking a full backup of your account before the 24th of April in case of any data issues during the migration. We will do everything possible to ensure a stable, clean transfer, but as with everything in tech, the unexpected can happen. We will keep the old servers for a period after the migration so we can revert or re-transfer anything that is required, but please do make your own copy of your key files (databases especially); a sketch of one way to do this follows below.
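For those comfortable with a command line, the sketch below shows one way to take a database copy. It assumes your plan includes shell/SSH access and that mysqldump is available; the database name and user are placeholders for your own details.

```python
# backup_db.py - minimal sketch: dump one MySQL database to a local .sql file.
# Assumptions: mysqldump is installed and on PATH; DB_NAME and DB_USER are
# placeholders you must replace with your own cPanel database details.
import subprocess
from datetime import date

DB_NAME = "example_db"    # hypothetical database name
DB_USER = "example_user"  # hypothetical database user

outfile = f"{DB_NAME}-{date.today().isoformat()}.sql"
with open(outfile, "w") as f:
    # "-p" with no value makes mysqldump prompt for the password interactively
    subprocess.run(["mysqldump", "-u", DB_USER, "-p", DB_NAME],
                   stdout=f, check=True)
print(f"Backup written to {outfile}")
```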
Time of Migration:
24th of April 2024
9PM UTC (Coordinated Universal Time)
5AM UTC+8 (Singapore time, 25th of April)
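If you are in another timezone, the listed local time can be reproduced for your own zone; a minimal sketch, assuming Python 3.9+ with the standard zoneinfo module:

```python
# tz_check.py - convert the migration start (24 April 2024, 21:00 UTC)
# into other timezones to confirm your own local start time.
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

start_utc = datetime(2024, 4, 24, 21, 0, tzinfo=ZoneInfo("UTC"))
for zone in ("Asia/Singapore", "Europe/London"):
    print(zone, start_utc.astimezone(ZoneInfo(zone)))
# Asia/Singapore prints 2024-04-25 05:00:00+08:00 - i.e. 5AM on the 25th,
# matching the Singapore time listed above.
```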
Affecting System - Coventry Rack Move & Upgrades
UPDATE 4: We are happy to confirm all servers are back online and services are running normally. We will closely monitor all systems during the night and resolve any issues that may arise. This status will now be set to "Resolved". If you do face any issues please contact our support team, who will be happy to help. Thank you for your patience, and we hope you enjoy the upgraded network & power services.
UPDATE 3: The network is now being updated for the new racks and services should start coming back online. We will update this page once we have verified everything.
UPDATE 2: Servers are now being loaded into the new racks. Once racked and power checks are completed, network updates will be started to map our subnets etc to the new facility.
UPDATE 1: We have started to shut down servers and engineers at the data centre are beginning to move our servers into their new racks.
DETAILS:
On Thursday the 7th of March at 11PM (UTC) we will be performing a rack change at our UK data centre to a new facility based at the same location. This will mean servers will be disconnected, moved and re-inserted into our new racks.
Actions Required by Customers: No actions required, but please see below 'Recommendations for Customers'.
Expected Downtime: ~50 minutes (likely 30-40 minutes)
This move will also affect the internal mail services that power incoming mail to our help desk. You will still receive help desk ticket updates via email, but please reply to tickets in your client portal rather than by email.
Reason for move:
Our move will provide a number of critical upgrades and benefits to our services which will include:
Recommendations for Customers:
As with any change to systems or hardware, we always recommend that customers take a backup of their critical data before the date of the move and store it on their local machines (or on services outside of Host Media's network, e.g. Dropbox, OneDrive) in case of any failure. This should be done as part of your normal backup processes for locally stored backups.
Time of Migration:
7th of March 2024
11PM UTC (Coordinated Universal Time)
5PM CST (UTC-6, Central Standard Time)
Affecting Server - Richmond
We will be performing a standard cPanel migration of all accounts hosted on the US-based node 'Richmond' on Wednesday the 22nd of February 2024 at 7AM UTC to new servers.
Domain/DNS Actions Required by Customers:
Expected Downtime: Minimal due to cPanel direct transfer between servers
Reason for move:
Our 2024 plans include improving our US-based services, starting with moving our current US nodes to our brand new data centre/server supplier. Our new shared and reseller hosting servers will be hosted primarily on the West Coast in Los Angeles; with great connections to all of Asia and, of course, all of the Americas, it is the perfect location for our new range of hosting. Customers will also benefit from faster servers using a range of high-performance AMD & Intel CPUs and the latest software, as well as a mix of Enterprise SSD and NVMe hard drives (RAID protected). Across our entire standard hosting range we will also be providing additional included backups within JetBackup, as well as an upgrade to version 5 of that software.
Recommendations for Customers:
With any hardware change or process, we strongly recommend taking a full backup of your account before the 22nd of February in case of any data issues during the migration. We will do everything possible to ensure a stable, clean transfer, but as with everything in tech, the unexpected can happen. We will keep the old servers for a period after the migration so we can revert or re-transfer anything that is required, but please do make your own copy of your key files (databases especially).
Time of Migration:
22nd of February 2024
7AM UTC (Coordinated Universal Time)
1AM CST (UTC-6, Central Standard Time)
Affecting Server - Darwin
UPDATE 5: We are monitoring the services; at present everything looks normal, and we will be working on disk upgrades in the coming weeks as plans are drawn up.
UPDATE 4: Services are now all back online and we are running post-checks. During these we found one issue that hadn't come up before, related to one of the disks, which we will look at replacing in the hardware RAID asap once all services are settled. Thank you for your patience and understanding - we are very sorry for the downtime caused.
UPDATE 3: The FSCK scan is running, with disk data being optimised at the same time. This can take some time to run, so we can't provide an exact ETA, but hopefully less than 1 hour.
UPDATE 2: The server's parts all check out OK; after the initial tests the disks are now being fully scanned and partition data corrected with FSCK.
UPDATE 1: Because the server has had issues previously, we are running further checks on this box before the full disk scan to ensure we are not patching it only for it to fail again in the future. We are sorry for this long unexpected downtime, but we are working on getting everything back online asap.
We are working on an issue with the Darwin server after updates were made to the Kernel of the server. These checks include a full disk scan which is taking some time.
Details of issue:
A CloudLinux update was performed on the DirectAdmin server Darwin, as suggested by the CloudLinux support team following a check/scan of the server, to improve features and services on the box. Once the updates were made, a soft reboot was required to ensure all services were updated. Expected downtime was less than 5 minutes, planned for 11PM to avoid peak times.
Once rebooted, the server was unable to boot back online properly and presented kernel issues.
UPDATE: Reboot completed and all services running normally.
We will be performing a standard reboot of the cPanel node 'Newton' (Singapore) to apply some minor fixes and updates. We expect downtime to be minimal (~10min).
Affecting Server - Darwin
UPDATE 6: We are closing this status log: we left it open for a number of days to ensure people saw the details, and we are no longer receiving new reports of issues. If you do face any issues please open a tech support ticket.
UPDATE 5: We have some reports of database users missing, which is causing some websites to show a database connection error. If you are facing issues with your database, please check your JetBackup for possible restore points or contact our support team for assistance.
UPDATE 4: All services have been stable for some time, if you are facing any database connection issues please get in touch with our tech support team via the help desk.
UPDATE 3: We are investigating some SQL issues following the disk fix and working to have them resolved asap.
UPDATE 2: The disk clean and FSCK was completed and the server is now back online with all services started. If you have any issues please contact a member of the technical team.
UPDATE 1: We are running a disk clean which can take some time but once completed we will confirm the next steps.
ISSUE: We are currently investigating an issue with the Darwin node and unexpected downtime that has been logged. We will update here once we understand the cause.
Affecting Server - Greyhound
UPDATE: Memory update and Lucee configuration completed.
We will be performing a memory upgrade on the Greyhound server at 6AM (UK/London time) on Thursday the 10th of August. Downtime is expected to be less than 30min while these updates go through. We are sorry for the short notice of this upgrade, but it will help resolve issues found on the server that have caused some memory-heavy services to be unstable.
Affecting Server - Blackwell
UPDATE 2
We are seeing retry attempts on emails that had not come through, but it appears some will not be deliverable. We are continuing our investigation into why this issue happened; it was resolved once we did a full restore of the Exim configuration to cPanel defaults.
UPDATE 1
Mail should now be arriving in all accounts' inboxes. We have reverted some changes that caused mail to stop flowing into customers' mailboxes, and we are continuing our investigation to see what caused this and why it affected incoming mail.
ISSUE
We are aware of an email delivery issue for incoming emails to the cPanel server Blackwell; our teams are investigating and working to resolve it asap.
Sorry for the inconvenience caused and we hope to have all services back to normal soon.
Affecting Server - Blackwell
We will be performing a memory upgrade and software update on the node "Blackwell" on the 31st of March 2023 at 6AM BST (UK/London). These upgrades will correct a detected issue with the memory allocation on the server.
We expect downtime to be less than 30min.
Affecting System - UK Data Centre
UPDATE 13: We are awaiting a full report on the outage from our data centre; once we have it and have analysed the findings, we will post a status update with our own investigation into the events around the ~4 hours of intermittent power outages. This status will be marked as closed while we continue with BAU support of our customers. Thank you for everyone's patience during this time.
UPDATE 12: Restores of the available data have been completed, and new remotely stored database backup tasks are now running alongside the full snapshots. We continue to look into the cause of the backup issues in order to produce a report.
UPDATE 11 - BACKUP UPDATE: At present the JetBackup engineers do not understand why databases are missing from our DirectAdmin backups; they are awaiting the next backup job to analyse this further. Internally we are considering options for our DirectAdmin hosting going forward if such critical capabilities remain missing or faulty.
UPDATE 10 - BACKUP UPDATE: Due to failures in JetBackup, some backups are missing certain data (such as databases) or are completely unavailable even though a record of them exists. We have had JetBackup support looking at this and working out why. We are very sorry for the inconvenience caused, but we are asking those with missing data to provide their own backups for our technical teams to restore.
UPDATE 9 - DISASTER RECOVERY PLAN: We are progressing with the full disaster recovery plan, with the following stages which will be updated as we progress:
Important Notes - Please read:
UPDATE 8: Unfortunately it appears the disk on the Darwin node became corrupted due to the constant power disruptions, so we are putting our full recovery plans in place. This will take some time and we will be working with customers to get them back online asap. We will post further updates here once this has started. We are very sorry for the inconvenience this has certainly caused. A post-incident debrief with the data centre and management will take place once we understand the full details.
UPDATE 7: Progress with the DirectAdmin node labelled Darwin is underway and we are attempting to repair the damage caused to the storage drives from the power outage. At the moment we don't have an ETA on the repair but will keep this incident report updated.
UPDATE 6: Most servers are confirmed online; the Darwin node appears to have disk issues which need to be repaired after the power was cut suddenly. We are working on this as quickly as possible, but disk scans and corrections can take some time.
UPDATE 5: Servers appear to still be having issues, becoming active and then going down again. The data centre is actively working on this and we are monitoring our services closely to get everything restored asap.
UPDATE 4: We are seeing most servers online, but a couple, including the Darwin DirectAdmin server, remain down. An engineer at the data centre is checking this now.
UPDATE 3: Power has been restored but we are still closely monitoring the situation and awaiting all servers to come back online.
UPDATE 2: Some racks have started to come back online, but unfortunately ours are not yet among them. The engineers are investigating the issue and we hope for updates soon.
UPDATE 1: You can track the data centre updates via https://status.ukservers.com/
We are currently working with our UK data centre on a power issue that is affecting the entire data hall. We will post updates as we get them.
Affecting Server - Blackwell
UPDATE 2: After a couple of days, all services have been seen running normally. This issue will be marked as resolved.
UPDATE 1: We have corrected the issue with the routing of mail to MailChannels on the Blackwell server and are resending all mail in the queue. We will keep monitoring to ensure the queue is worked through over the next few hours.
We are currently investigating a mail routing issue on the node Blackwell which is causing mail to remain stuck in the server's mail queue. We are actively working on this, and once it is resolved all queued mail will be retried. We are sorry for the inconvenience caused.
UPDATE 8
The Eden server has been shut down. If you have any issues or questions please do get in touch with the team. Thank you.
UPDATE 7
We will be shutting down the Eden server on Monday the 24th at 10am; please ensure you have updated your domain's DNS or A/MX records before this time.
UPDATE 6
Crossbox has been updated and is working. We are monitoring the service, but if you face any issues please do get in touch.
UPDATE 5
We have completed the migration and all accounts are now on the new server. Please login using the URL: https://darwin.dnshostnetwork.com:2222/ with the same logins, or access the client portal for SSO login to DirectAdmin.
New Server IP:
178.159.5.244
New DNS:
ns1.darwin.dnshostnetwork.com
ns2.darwin.dnshostnetwork.com
If you have any issues please open a technical support ticket to get the quickest updates and solutions.
NOTE: We are seeing an issue with Crossbox on our Darwin server and the Crossbox tech support team are investigating.
UPDATE 4
We have now migrated all sites and are running final tests while manually migrating any failed accounts. We hope to email all clients shortly with updates. Thank you again for your patience, and once again sorry that the migration has taken longer than planned.
UPDATE 3
We are in the final stages of the migration, syncing metadata and some user data. We expect this to be completed within 3 hours. Once it is complete we will ask all customers to check their DirectAdmin logins on the new server, along with their files/data, to ensure there are no problems, and then to update DNS/A:Records via their domain providers. We will keep the Eden server online for a few days to allow time for everyone to check sites and make DNS changes. We do recommend taking a backup of your data from the old server just in case (we will also have backups stored on our offsite backup storage).
UPDATE 2
We have completed 80% of the transfer from Eden to Darwin server, once this has been fully completed we will update all customers. Thank you for your patience.
UPDATE 1
We are continuing the restoration of accounts, progress has been a little slower than first thought but we are working on this with the highest priority. All services on the Eden server continue to be online and websites are loading so no downtime for any service.
We are currently migrating all accounts from the DirectAdmin server Eden to the Darwin node. Once completed we will email all customers reminding them to update any DNS/A:records.
If you have any questions please do get in touch with the team.
Affecting Server - Blackwell
In the early hours of today we found that one of the Power Supply Units on the Blackwell server had become faulty, and our engineers removed the PSU as it was causing minor power issues. Once this was corrected with minimal downtime, we noticed that over time CloudLinux slowly started to fail, though not on all websites. Our alerting failed to notify us of this as the features we test (ping/HTTP/cPanel etc.) were all green.
We resolved the issue with the support of cPanel to update our running software and after a number of tweaks all services are back online.
If you have any issues please do contact our support team.
We are sorry for the inconvenience caused by this downtime, we will continue to monitor this server for any issues.
Affecting System - UK Data Centre
UPDATE: All connections and BGP sessions have now been stable for 12 hours. We will close this issue now, as the data centre will continue to monitor and is currently happy with the resolution.
UPDATE: Moving priority of ticket to 'Medium' while we await updates from the data centre.
UPDATE: Services have now become available, downtime <10min was recorded by our monitoring systems. We are awaiting an investigation report from the data centre.
We are currently investigating network issues at our UK data centre. Please stand by for further updates.
UPDATE 2: Migration has been completed and post-migration issues resolved on the few accounts that reported them. If you find any issues please contact a member of the support team who will be more than happy to assist.
UPDATE 1: The migration continues, with a large number of accounts already moved. You may see some DNS propagation effects while we switch DNS settings; these can appear as a cPanel default screen. We hope to have everything completed soon.
On Monday the 24th of August at 02:00AM (UK, London timezone) we will be migrating all accounts from the server listed as Brunel (IP 5.101.142.88) to the node named Blackwell (IP: 5.101.173.45).
If you are using our DNS/nameservers you will not need to make any changes.
For reference our nameservers are:
If you are using A:Records (CloudFlare/custom DNS services) you will need to update your domain's records to point to the new server's IP, 5.101.173.45, on the day of the migration; a small verification sketch follows below.
Please note that during the day it won't be possible to log in to your new cPanel server via the client portal. You will be able to access and log in using the direct cPanel URL of the new server: https://5.101.173.45:2083/
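Once you have updated your A:Records, you can confirm propagation from your own machine; a minimal sketch, where the domain name is a placeholder for your own:

```python
# dns_check.py - minimal sketch: confirm a domain now resolves to the new
# Blackwell IP after the A:Record change. "example.com" is a placeholder.
import socket

NEW_IP = "5.101.173.45"
domain = "example.com"  # replace with your own domain

resolved = socket.gethostbyname(domain)
print(f"{domain} -> {resolved}")
print("New IP live" if resolved == NEW_IP else "Still propagating (old record cached)")
```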
If you have any questions please contact a member of the sales/accounts team by clicking here.
Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK
UPDATE 2: All accounts are confirmed on the new server. We were delayed by a few accounts that required manual migration, but these have been completed.
UPDATE 1: The majority of accounts have been migrated and now running on our new server. If you face any issues please do contact a member of the support team.
Scheduled migration and upgrade of our S03 server to newer hardware with a new name.
If you use our DNS/nameservers you will not need to make any changes, but if you use A:Records please ensure you change your domain's settings to point to the new IP 178.159.5.243.
Downtime will be minimal but due to the number of accounts to migrate, it will be a process over a number of hours.
Important Notes:
Affecting Server - Greyhound
UPDATE 1: All accounts have been migrated; please check your domain's DNS settings to ensure you are now using the new IP. If you find any issues please contact the team.
Scheduled migration and upgrade of our Beagle server to newer hardware.
If you use our DNS/nameservers you will not need to make any changes, but if you use A:Records please ensure you change your domain's settings to point to the new IP 178.159.5.243.
Downtime will be minimal but due to the number of accounts to migrate, it will be a process over a number of hours.
Important Notes:
Affecting System - DC Network Rack 1
POST-ISSUE REPORT: The issue was due to a misconfiguration in the rack's firewall layer during an update to increase the IP ranges on our servers. This was corrected by the onsite engineers who manage the firewall systems.
UPDATE 2: Network access has been restored and servers are now loading. We will update with the cause once we have checked all hardware.
UPDATE 1: Engineers are onsite looking into the network issues and hope to have an update shortly.
ISSUE:
We are currently investigating network issues affecting one of our racks at our Coventry DC. Our alerting systems were triggered and a P1 critical incident was raised.
Affecting System - Stewart Node
UPDATE: Server move was completed without any issues.
We will be upgrading our Coventry racks for planned improvements. On the 28th at 5:30AM (UK/London time) we will be moving the physical server node named Stewart to our new racks at the same data centre.
We expect downtime to be minimal and engineers will be ensuring the smoothest transition possible.
Detailed Timings:
Affecting System - Blackwell & Victoria Nodes
We will be upgrading our Coventry racks for planned improvements. On the 29th at 5:30AM (UK/London time) we will be moving the physical server nodes named Blackwell & Victoria to our new racks at the same data centre.
We expect downtime to be minimal and engineers will be ensuring the smoothest transition possible.
Detailed Timings:
Affecting System - Churchill & Nelson Nodes
UPDATE: All services are running normally.
We will be upgrading our Coventry racks for planned improvements. On the 27th at 6AM (UK/London time) we will be moving the physical server nodes named Churchill and Nelson to our new racks at the same data centre.
We expect downtime to be minimal and engineers will be ensuring the smoothest transition possible.
Detailed Timings:
Affecting System - Brunel Node
UPDATE: Server move completed without any issues.
We will be upgrading our Coventry racks for planned improvements. On the 26th at 6AM (UK/London time) we will be moving the physical server named Brunel to our new racks at the same data centre.
We expect downtime to be minimal and engineers will be ensuring the smoothest transition possible.
We will be migrating all S07 Plesk accounts to a new Windows Plesk server, code-named 'Austen' (after Jane Austen, the English novelist).
We have scheduled the migration for the 2nd of November at 2am UK local time.
You will need to adjust your domain's DNS/A:Records to one of the following options on the above date:
Nameservers:
ns1.austen.dnshostnetwork.com
ns2.austen.dnshostnetwork.com
A:Record IP:
78.110.165.202
Affecting Server - Greyhound
We have performed a number of security updates on our Beagle Lucee server which required a number of reboots. All services are now running normally and we are monitoring the services.
Affecting Server - [S09] Linux cPanel ~ New Jersey US
We have scheduled a migration of the S09 server to our new pure-SSD servers hosted in our new provider's data centre. Please see below for details of the planned migration:
Migration Scheduled:
London, UK Time: 31/08/2020 10:00 AM
Eastern, US Time: 31/08/2020 5:00 AM
If you use A:Records to point your domain name to our servers you will need to update them to the IP: 51.81.109.178
If you use our DNS/nameservers you will not need to do anything.
All your data will be migrated by our team on the day, if you have any questions please speak with one of our sales team who will be able to provide more information about the process.
We will be performing a quick reboot of the Eden server to apply updates and general improvements to this service. Downtime is expected to be minimal as it will be a standard reboot.
Affecting System - Power Supply
At 3pm on the 18th of July the Coventry, UK data centre saw power issues which caused services to fail. If you are having any issues with your service please contact a member of the team. Our team will be monitoring and running checks on all our services to ensure they are running OK.
Below is the status history provided by the data centre.
Affecting Server - Churchill
UPDATE: Migration completed and if you have any issues or require any support please contact our support team. Thank you.
We will be migrating all cPanel accounts on the Churchill server to our brand new servers on the 28th of June at 23:00 (BST), to be ready for the Monday morning. The new server's label is Blackwell (named after the British physician Elizabeth Blackwell) and its IP is 5.101.173.45.
The new server is from our latest range of hardware and is powered by a 40-core Dell enterprise server. We are rolling out more of these powerful servers at our UK location.
If you use our DNS/nameservers you will not need to change anything. If you use A:Records (CloudFlare for example) please make sure to update your IP address to use: 5.101.173.45
If you would prefer to be migrated sooner please just contact a member of the sales team who will book this in for you.
Affecting Server - [S21] Linux cPanel ~ Singapore
Migration has been completed and we have turned off the HTTP services on S21. If you face any issues please contact our support team. Thank you for your patience during this migration.
We are migrating all cPanel accounts from our S21 server to our new range of SSD-based servers hosted in Singapore. Once the migration has been completed all customers will be notified. Due to the issues found on S21 the migration will be slower than expected, but downtime will be minimal as services are still running on S21.
New server IP: 139.99.122.95
Affecting Server - Turing
Update: Reboot complete, total downtime 2min.
We are performing a full reboot of the Turing server to apply the latest Kernel and cPanel updates. Downtime <5min.
Affecting Server - Blackwell
Update: Reboot complete, total downtime 3min.
We are performing a full reboot of the Blackwell server to apply the latest Kernel and cPanel updates. Downtime <5min.
Affecting Server - Hawking
On the 7th of May at 6am UK time we will be migrating all accounts from the Hawking server to our new ColdFusion 11 server named Turing.
If you use A:Records you will need to change them to point to the IP: 5.101.142.85. If you use our nameservers then nothing will need to change.
One action will need to be taken by you: recreating your ColdFusion data sources in the CFManager in cPanel. Currently our migration tools do not allow data source names to be transferred, but our developers are working on this to make future migrations easier.
You will find our new servers are much higher in specification and also include features such as FusionReactor protection to provide the best stability possible.
On Monday the 11th of May at 6am, UK, London time we will be migrating all accounts from the legacy S08 WordPress server in London to our new nodes in Coventry. The new servers provide greater power, speed and features which are in line with our WordPress feature matrix: https://www.hostmedia.co.uk/wordpress-hosting/feature-matrix/
If you use A:Records to point your domain name to our servers you will need to update them to the IP: 5.101.173.45
If you use our DNS/nameservers you will not need to do anything.
All your data will be migrated by our team on the day, if you have any questions please speak with one of our sales team who will be able to provide more information about the process.
UPDATE: Reboot was successful and all operations are normal.
We need to run a reboot of the London server cluster named 'Darwin', which operates a number of instances, to correct a disk issue that has appeared. The downtime should be minimal.
Thank you for your understanding.
Affecting Server - [S01] Linux cPanel ~ Coventry UK
UPDATE: Migration completed. Thank you for your patience.
We will be migrating all accounts from the server listed as S01 to a new server code named: Brunel
Scheduled Date/Time: 08/02/2019 02:00 (Timezone: London, UK)
If you use A:Records to point your domain to our servers you will need to update them to point to: 5.101.142.88
Affecting Server - Churchill
UPDATE 3: All services have been running without incident for over 24 hours now. This issue will now be closed.
UPDATE 2: We have tweaked a number of settings and reapplied the LiteSpeed web server. We will monitor to ensure all services continue to run as expected. Thank you.
UPDATE 1: We believe the problem is related to the LiteSpeed web server; we are currently running all systems on the standard Apache web server, which the server can easily handle, while we investigate further. All sites have been running normally for the past 3.45 hours. You can find further details here: https://status.hostmedia.co.uk/784190645
We are investigating issues reported by our monitoring systems on the instance Churchill which is going up and down. We will update further as soon as possible.
Affecting System - Coventry DC Networks
UPDATE: The network appears to be back to normal and we are awaiting further details from the upstream provider whose issue caused some customers to drop connections.
We are currently looking into an issue with one of our upstream providers that could be affecting some routing.
UPDATE: Migration has been completed successfully.
We will be migrating the accounts from the server listed as S02 to our new servers in our Coventry data centre.
If you use A:Records please make sure to update them between the listed times to use this IP: 5.101.142.88
Affecting Server - Hawking
UPDATE 1: After making some JVM changes the service appears to have become stable, but we will continue to closely monitor it over the next couple of days.
We are investigating an issue with the ColdFusion services on our Hawking instance (S04) that is causing the service to suddenly stop.
Affecting Server - [S14] Linux cPanel ~ London UK
UPDATE: The migration completed without any issues and all accounts are now on the node Brunel. Please make sure you have updated your A:Records if you use them. We will be shutting down the old server shortly.
We will be migrating all accounts from the server listed as S14 to a new server code named: Brunel
Scheduled Date/Time: 08/12/2019 20:00 (Timezone: London, UK)
If you use A:Records to point your domain to our servers you will need to update them to point to: 5.101.142.88
Affecting System - DH1 Coventry Issue – 16th Nov
16/11/2019 – 21:30 – We are currently experiencing an issue with services at our Coventry site, further updates will follow shortly.
21:45 – Our onsite engineers have found BGP sessions to be flapping between our core routers in Coventry and London, further updates will follow shortly.
22:01 – Our onsite engineers have identified an attack against our core routing infrastructure at this site and are working to mitigate this.
22:31 – Our engineers have been unable to mitigate the attack against our routing infrastructure and we are still working on the issue. Service has been restored for some customers however the network is currently still unstable.
23:52 – Our engineers are going to bring forward the replacement of our routing equipment at our Coventry site which was scheduled for later this month under a planned maintenance window as we believe the new equipment should be better placed to deal with the attack. We hope to have service fully restored to all customers by 04:00 at the latest.
17/11/2019 – 01:22 – The new routing equipment has been racked and the configuration is being loaded onto it; customers should expect further service disruption in the next thirty minutes when we move them to the new routing equipment.
02:32 – Service should now be restored to the majority of customers at our Coventry site and the new routing equipment is successfully mitigating the attack on our equipment.
04:25 – The remaining customers should now be back online at our Coventry data centre; customers are requested to open a support ticket if their service remains offline.
Affecting System - Lucee UK Servers
UPDATE: Since our changes, the Lucee service appears to be back to normal stability. We will continue to monitor the service closely. Thank you for your patience.
We will be performing some adjustments to our UK Lucee servers to correct a number of reported issues around the stability of the Lucee service.
Affecting System - UK Coventry Data Centre Network Issues
Network issues were reported at our Coventry, UK based data centre which the data centre team worked on to resolve. We are awaiting a full report from them to update our customers with.
We are sorry for the downtime seen by our customers and we will be working with the data centre to see what actions can be put in place to prevent this from happening again.
Affecting System - Node: Nelson, DC: Coventry UK
UPDATE 11: VMs have been restored, if you face any issues please open or update your support tickets so our team can investigate. Thank you so much to all our affected customers for their patience and understanding.
UPDATE 10: The restores are processing well, due to the amount of data this can take some time but we are working on this as quickly as we can.
UPDATE 9: We have been able to get a XEN server online and are now starting to restore accounts.
UPDATE 8: We are continuing to work on the issue and hope to have our new XEN server online shortly; an issue within the XEN setup caused our tests to fail.
UPDATE 7: Final tests on our 3rd server setup are almost complete.
UPDATE 6: Due to some kernel issues we have booted a 3rd server up as an alternative which is on a different network and will require new IPs to be allocated. We will update clients once we have more details.
UPDATE 5: We are continuing our setup of our alternative XEN Server. We will post our next update as soon as possible.
UPDATE 4: We have our alternative XEN Server partitioned and the final setup stage processing now. Once done restores of data will begin.
UPDATE 3: Due to XFS corruption beyond repair, we will be restoring backups onto a secondary node as soon as possible to get all customers' services back online.
UPDATE 2: We are continuing to run the XFS repair on the server; it is taking a little longer than expected, and we have our DC remote hands checking it.
UPDATE: We are running a full XFS repair on the drives, as something appears to have become corrupted on the disk, causing the server not to boot properly into the OS.
We are currently investigating an issue with one of our new nodes at our Coventry DC (Node: Nelson). We are working on this with the highest priority.
Affecting System - Coventry DC
FINAL UPDATE: The issue has been fully resolved.
Issue Details:
The initial issue was due to the power circuit being tripped out, the DC team worked to move our racks to the backup circuits to ensure power was restored quickly to the affected servers. After 15 minutes the main power supply was routed back to our racks.
We started to check and bring back online all servers that were offline. While doing this we found the node Churchill didn't respond to our main controller's commands. After investigating, it was found to be booting from the flash memory on the server instead of from the main hard drive controller. We reconfigured the BIOS and restarted the machine, which brought the node back, and once tested we brought the instances back online.
We will be performing an update on the BIOS to ensure the correct hard drive controller is loaded in case of any future failures in power. This update will be happening at 9PM UK, London time today (4th of June) and a network status item will be available for reference.
UPDATE 3: After resolving a linking issue to our racks and correcting a possible long-term issue, our team is focusing on resolving the issue with our Churchill node.
UPDATE 2: DC engineers are continuing to work on the issue with our racks, as further issues were found. We hope to have this resolved shortly.
UPDATE 1: All servers apart from the Churchill node have come back online. We are working on the issue.
We are currently resolving an issue with our racks at the Coventry DC. Further updates to come.
Affecting System - Coventry Data Centre
During a routine review by our electrician, we have identified a fault with the power distribution that supplies our racks at the Coventry data centre. There is a core distribution unit which needs to be replaced to ensure a stable service. This will require all power to our racks being removed for about 60 seconds whilst the fault is fixed.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE: Migration processed well and all accounts are now on the new server. We are now backing up all accounts on the old server before shutting it down.
We will be migrating customers from the S10 server to newer servers. All affected customers will be updated via email; any customers using the Global Reseller Panel will have the details updated in their reseller control panel. Downtime will be minimal as the migration will be handled by the cPanel transfer.
New server IP: 81.92.218.156
Affecting System - Virtualisation
UPDATE 1: We have resolved the issues and all services are back to normal status. Thank you for your patience.
We are investigating an issue with our US based Xen servers which dropped network services. We are working to resolve this as soon as possible.
Affecting System - RAM Fault
UPDATE: All systems came back online shortly after the initial status update. If you find you are having any issues please do contact a member of the support team.
We detected a memory fault due to a faulty RAM card. This is being replaced now and services should be back online shortly.
Affecting Server - Churchill
We will be updating the server's BIOS to avoid boot-up issues loading the incorrect OS after unexpected downtime/shutdowns. Downtime will be less than 5min as only a reboot is required to apply the changes.
Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK
UPDATE 1: Services are back online and running normally. Thank you for your patience.
We are running updates on the disk and memory services of our S03 Lucee server. A reboot is processing now to apply these updates. We hope to have services back online within the next 5min. Sorry for the downtime caused.
Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK
Since our adjustments to the Lucee JVM all services appear stable. We will carry on monitoring the server closely and if any further issues occur we will open a new server status.
We have been monitoring the Lucee services on the server and they have been stable during the night. We are continuing to monitor any and all load spikes to resolve any issues. We will update this status further when we know more.
During off-peak hours (UK night time) we are seeing high Lucee load on the server, which appears to be causing the Lucee CFML services to stall. We are monitoring and working on finding a fix.
We are investigating a high load on our S03 server which appears to have been the cause of the server requiring a forced reboot.
Affecting Server - Churchill
Reboot complete and updates applied. Downtime less than 1 minute.
We will be running an update and reboot of the Churchill instance to apply the latest updates. Downtime will be less than 10 minutes.
Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK
We have been dealing with disk issues within the core of the S11 instance. If you are seeing issues please open a support ticket and request a migration to our S03 Lucee server, which is on our new platform. Please note S03 uses dedicated remote SQL servers, so in your Lucee data sources or connection scripts please make sure to use 'remotesql' instead of 'localhost' in your settings; a sketch of the change follows below.
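For anyone connecting from their own scripts, the host name is the only change. A minimal illustrative sketch in Python (PyMySQL and all credentials here are placeholders, and any MySQL client library follows the same pattern; in a Lucee data source the same swap applies in the host field):

```python
# remotesql_example.py - illustrative sketch: on the S03 platform, MySQL runs
# on a dedicated remote server, so scripts must target 'remotesql' instead of
# 'localhost'. PyMySQL and the credentials are placeholders - use your own.
import pymysql

conn = pymysql.connect(
    host="remotesql",         # was "localhost" on the old S11 platform
    user="example_user",      # hypothetical credentials
    password="example_pass",
    database="example_db",
)
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()
```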
Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK
We will be migrating accounts from our S24 server to our latest Lucee S03 server. Downtime will be minimal as we will be performing a direct transfer of accounts.
New server IP: 185.42.223.91
Affecting Server - [S14] Linux cPanel ~ London UK
Minor updates and a quick reboot of S14 to ensure stability of latest updates.
Affecting Server - [S14] Linux cPanel ~ London UK
UPDATE 2: We can confirm all services are running normally, and the server now has CloudLinux running for better general performance and stability.
UPDATE 1: A fault with drive mappings was found, causing unexpected downtime on the server; this is being fixed.
Upgrades being applied: CloudLinux & kernel updates.
Downtime: We will try to keep downtime to a minimum, but it will be intermittent over a few hours.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE 1: Our new servers are going through tests now; we will be migrating customers to the new server in batches and will contact those customers directly throughout the week. If you are still facing issues, please open a ticket with sales to request an earlier migration. S02 services are currently running normally.
We are monitoring our S02 server due to intermittent slowness that has been detected. We already have plans in place to migrate this server to one of our new servers being racked this week. We will continue to monitor and resolve any reported issues.
Affecting System - SolusVM
UPDATE 9: S11 - Our team has recovered as much data as possible from our backups and the faulty S11 server. If you have backups of your SQL available, please send them to our tech team via a support ticket and we will get them uploaded straight away with the highest priority. We are also ensuring other servers are not affected by the same backup faults and issues that caused S11 to fail. As we always recommend, please keep local backups in case of failures such as this. We will be investing heavily in new backup solutions on all shared services in the coming months to prevent such issues from happening again.
UPDATE 8: S11 - Our team continues to bring up the remaining websites, with most back online. If you still have issues and haven't opened a ticket, we highly recommend creating one in case the problem on your site is isolated.
UPDATE 7: S11 - To help speed up account restores, if you have local copies of backups please send them to the main support team so they can get your services back online more quickly.
UPDATE 6: S11 - File restores have been processed, but SQL databases failed to restore correctly. We are looking at alternative restore options now.
UPDATE 5: S11 - As we continue to bring more accounts back online: if you use A:Records instead of our name servers, we strongly recommend changing your domain's DNS to our name servers. That way, when we sync your domain to the new server IP addresses, your domain will already be configured. Our global DNS network name servers are listed below, with a small verification sketch after the list:
dns1.dnshostnetwork.com
dns2.dnshostnetwork.com
dns3.dnshostnetwork.com
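To confirm your domain is on these name servers, you can query its NS records. A minimal sketch, assuming the third-party dnspython package (pip install dnspython); example.com is a placeholder:

```python
# ns_check.py - sketch: confirm a domain's NS records point at the
# dnshostnetwork.com name servers listed above. Requires dnspython
# (pip install dnspython); "example.com" is a placeholder domain.
import dns.resolver

EXPECTED = {f"dns{i}.dnshostnetwork.com." for i in (1, 2, 3)}
answers = dns.resolver.resolve("example.com", "NS")
found = {rr.target.to_text() for rr in answers}
print("NS records:", found)
print("OK" if found <= EXPECTED else "Domain is not yet on our name servers")
```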
UPDATE 4: S11 - Restores of accounts are proceeding from data located on the server and on our remote backup servers. We are having to process these backups manually, one by one. We will provide further updates as they come through.
UPDATE 3: S11 - We are attempting to restore the available backups and overlay them with the latest data from the damaged server. Other systems are being worked on by our 3rd party software support teams to resolve the issues as soon as possible.
UPDATE 2: All services on our Archer node are back online apart from one shared service, S11. We are working on this issue as our top priority; we now have access to the data and are migrating it to a newer server to get services back online as quickly as possible for everyone.
UPDATE 1: During the maintenance a number of our instances became unavailable; our team is working on this now with our 3rd party suppliers.
We are currently running checks and general maintenance on our Archer node; this includes the XEN services and SolusVM integration. You may see some services slow down, but this will be kept to a minimum.
UPDATE 1: Services are back online and we are investigating the cause of the network issue.
We are investigating an issue with our S13 server at the Sydney data centre that has caused it to fail.
Affecting System - XenServer
UPDATE 2: All systems are OK and running normally. Thank you for your patience while we upgrade our services.
UPDATE 1: Services are coming back online and VM instances are running. Downtime averaged 5min to complete the updates and bring instances back online. We will update this ticket once we have completed our checks.
We are running a reboot of the XenServer node Darwin to apply updates and to correct integration issues with Virtualizor. Thank you for your patience.
UPDATE 1: We are still migrating accounts to our new server. Due to the number of sites and data it is taking longer than expected. We hope to have further updates soon.
On Thursday the 28th of June at 10PM we will be migrating all accounts from S17 to S14, which is based on much newer systems. If you use A:Records to point your domain to our servers please update the IP to: 54.36.162.146
Thank you for your understanding while we process this migration.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE: The migration was completed, but during the migration and setup of ColdFusion DSNs a few already-created DSNs were affected and locked out. If you have any issues with your DSNs please contact support, who will recreate them for you. Thank you for your understanding.
On the 25th of June we will be migrating all US S12 ColdFusion 10 customers to our UK CF10 servers. As with our other US-based CFML services, we are moving all accounts to our UK-based data centres. This move will also help with future plans for new ColdFusion services (pending final decisions from management). If you are using A:Records to point to our servers please make sure to update your domain's IP to point to: 185.145.202.175
Once the migration has been completed you will need to set up your ColdFusion DSNs via the CFManager, or by opening a support ticket if you prefer us to handle this for you - please note we will need the database details so we can set them up. Our transfer systems currently do not allow for migration of CF DSNs.
Thank you for your understanding.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
On Thursday the 22nd of June at 10PM we will be migrating all accounts from S23 to S01, which is based on much newer systems. If you use A:Records to point your domain to our servers please update the IP to: 78.157.200.45
Thank you for your understanding while we process this migration.
On the 27th of June at midnight (UK time) we will be migrating all US Lucee accounts to our UK data centre. Our CFML services have been moving to our UK data centres over the past few years, and now the final US-based Lucee server will be moved. If you are using A:Records on your domain please make sure to change them to the new server's IP: 78.110.165.199
Thank you for your understanding. If you have any questions please do contact a member of the team.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE: The migration has been completed and all customers' details are confirmed as updated in their client portal. For any problems or questions please do contact a member of the team. Also, please remember to update your A:Records if you do not use our nameservers. New server IP: 108.61.13.243
To resolve a number of performance issues we will be migrating our last cPanel shared hosting servers in Alexandria, USA to our new US location in New Jersey. If you are using A:Records instead of DNS/nameservers you will need to update the IP to: 108.61.13.243
We are sorry for the short notice on this migration and we hope to have it complete as quickly as possible.
Affecting Server - [S09] Linux cPanel ~ New Jersey US
UPDATE: Updates have been applied and total downtime was less than 2min. Thank you.
Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK
We have corrected the issue, which was due to another instance on the node stealing CPU resources.
We are investigating an issue with a slowness in the Lucee service on S11. The root of the issue appears to be a drain on the CPU resources (known as a CPU Steal).
We will be migrating our final Sydney servers to our new servers on the 3rd of May at 7PM UK, London time.
New server IP: 139.99.163.84
Affecting Server - [S21] Linux cPanel ~ Singapore
We will be migrating our final Hong Kong servers to our new Singapore servers on the 4th of May at 7PM UK, London time.
New server IP: 139.99.17.25
Affecting System - Server Migration
On the 27th of April we will be finishing the final move from our Amsterdam servers to our latest German based servers. This is in line with our aims to focus our offering to the best possible locations for speed and data centre support.
Below are the final two servers, S04 and S18, which will be migrated to server S05 with the IP: 144.76.231.221
S04: 185.181.8.171 => 144.76.231.221
S18: 176.56.239.221 => 144.76.231.221
All data, including emails, website files and databases, will be migrated automatically; there is nothing for you to do unless you are using A:Records on your domains. Please see below for details on IP/DNS changes.
Using A:Records?
If you are using A:Records to point your domain to a server you will need to update this to point to the new server's IP: 144.76.231.221
Using DNS/Nameservers Records?
You will not need to do anything as we will take care of the DNS change on our side.
Thank you for your understanding and we hope you enjoy the new services at our Germany location.
Affecting System - Dallas & London OnApp Cloud Network
New server IPs, cPanel links and statuses are listed below:
S12 (Completed)
69.168.236.13 => 74.84.148.22 (cPanel Login)
S15 (Completed)
69.168.236.96 => 74.84.148.21 (cPanel Login)
S16 (Completed)
69.168.236.49 => 74.84.148.23 (cPanel Login)
S23 (Completed)
69.168.235.191 => 185.42.223.86 (cPanel Login)
Please make sure to update your A:Records to point to the updated server IPs. If you use our DNS servers then no IP change will be required.
No changes to your logins to cPanel or WHM. You can use the links above to access cPanel directly.
Our Dallas 1 and London 2 cloud nodes, which are backed by OnApp, require an emergency migration. This is being handled by the OnApp team, and data will be migrated by their engineers. All Dallas and London based customers may see a small amount of downtime while the migration occurs, and a new IP will be assigned. Please update your DNS to point to our servers; if you need or prefer to use A:Records, we will provide details of the new IPs once the cloud migration has been completed.
We are sorry for the lateness of this notification, we have been working to try and avoid such a migration on the older platforms until we were ready to move to the new systems in New Jersey. Please keep an eye on this page for the latest updates.
Thank you for your understanding.
Affecting System - Darwin XenServer Node
We will be performing updates on our Darwin node at midnight tonight to ensure the latest security patches are applied. Downtime of the instances on the node will be minimal as only a standard reboot is required. We expect no more than 15min of downtime. Thank you for your understanding.
The migration from the US server to the UK server has been completed successfully. New IP: 104.238.186.101
Thank you
Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK
UPDATE 1: We have monitored the changes over the night and all services are running normally. Thank you for your patience during this update.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE 4: All systems appear to be running smoothly and we will be continuing to monitor the server closely. Thank you for your patience.
Affecting System - London DC 1
UPDATE 2: The issue has been resolved and services are now coming back online. The OnApp engineers confirmed the root cause in their systems and applied fixes. We are monitoring the servers while services start to come back online.
Affecting Server - [S21] Linux cPanel ~ Singapore
UPDATE: The server is running normally now. Planned updates to our Asia-based servers are in the works and news will be released in the coming weeks. Thank you, and sorry for the inconvenience caused.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE 8: We have run a reboot to apply some changes we have made to this server. We are monitoring its services. Thank you for your patience.
Affecting Server - [S22] Linux cPanel ~ Texas US
Now that our new US servers are online, we will be migrating all US-based accounts to this new server. On the 17th of February we will be migrating all accounts on S22 to the S09 server.
If you use our nameservers you will not need to make any changes. If you use A:Records you will need to update the IP to: 108.61.13.243
If you have any questions please contact a member of the team.
Affecting System - All Servers
We will be running updates on all servers on our network to correct the CPU Meltdown and Spectre vulnerabilities which have been in the news lately. Once we have patched the servers a reboot will be required, and downtime is expected to be less than 10min per server. We are working with our partners/suppliers to ensure all our server hardware is looked at. The patches are for the operating systems on our servers. If you have a dedicated server or private cloud we will be sending details on the issues soon. You can open a support ticket and one of our support team will be happy to help patch your server.
We recommend everyone run YUM / Windows updates on their servers to ensure you are running the latest versions. Please feel free to contact a member of the team for more information.
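If you would like to see whether your own Linux server reports these mitigations, here is a minimal sketch in Python 3; it assumes a kernel new enough (roughly 4.15+) to expose mitigation status via sysfs:

from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report() -> None:
    # Older kernels do not provide this directory at all
    if not VULN_DIR.is_dir():
        print("Kernel does not expose mitigation status; update and reboot.")
        return
    for entry in sorted(VULN_DIR.iterdir()):
        # Each file holds a one-line status, e.g. "Mitigation: PTI"
        print(f"{entry.name}: {entry.read_text().strip()}")

report()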
Thank you.
UPDATE: We are currently investigating the cause of an unexpected downtime during the migration. All services are running normally and the migration has been completed.
With the latest server improvements being rolled out across our infrastructure we will be migrating all accounts from server S02 to a new server (keeping the same server name S02). This will begin at 8PM on Wednesday the 20th of December (in 2 weeks' time). We will be running a full cPanel migration, which means you will not need to do anything. If you are using our nameservers (DNS) there will be no change, but if you use A:Records then a change will be needed. The new server IP to point your domains to after the 20th will be: 185.145.200.53
Thank you and we hope you will enjoy the improved services.
UPDATE 3: We have services back online. We will need to run a quick reboot, as we have installed software to help prevent this issue over the next couple of weeks ahead of the planned migration.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE 1: A small spike in load caused our monitoring systems to flag an issue which slowed HTTP services. We will be scheduling a migration of accounts from this server to one of our new server setups for improved CPU performance. All services are running normally.
Affecting Server - [S03] Linux cPanel ~ Lucee ~ London UK
Update 1: Reboot completed and all services are coming back online. Downtime <3min. Thank you for your patience.
We needed to reboot the instance to apply updates to disk and CPU settings. Thank you for your understanding.
Affecting System - Archer Coventry Node
UPDATE 1: All services are back to normal running.
Affecting System - UK Data Centre Cloud
UPDATE 2: We have completed the replacement and all services are back online. We are monitoring the server and will update this status page if there are any further updates.
UPDATE 1: Server S02 has been shut down while we carry out the work. We are sorry for the downtime and hope to have services back online shortly.
Please be advised we've noticed issues with Hypervisor 7 (HV7). Either the RAID cable or the RAID controller on this server node is faulty. This will require the node to be physically stopped to perform the repair. We will keep downtime to a minimum.
Updates to follow.
Affecting System - Data Centre Network
UPDATE 2: All servers and services are back online, we are monitoring and checking the cause as no router/network configuration was changed.
UPDATE 2: The engineers at the DC have corrected the IP issue - if you are using A:Records you will need to change to our DNS nameservers to receive the best level of service.
UPDATE 3: We have completed all upgrades and monitored the services during the night. All services appear to be running normally. Thank you for your patience.
Affecting System - Network
We are investigating a network issue to our Dallas DC cloud servers. We are sorry for the unexpected downtime.
Affecting Server - [S25] Linux cPanel ~ CF 11 Server ~ London UK
We will be running an update on the ColdFusion services at 10PM UK/London time. A service restart will be performed which will cause some CFML downtime during the process (<10min). Thank you for your understanding.
Affecting System - Dallas DC - Network
UPDATE 1: The issue was caused by one of the cluster's server nodes failing, which caused a chain reaction through the network. The server was replaced and services brought back online. We are monitoring the server cluster closely to help prevent this from happening again.
Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK
To apply system changes/updates we will be performing a reboot of the S24 server at midnight (UK/London time). Downtime should be less than 10min.
We will be performing a ColdFusion service restart at 4AM UK/London time (10PM Dallas US Time).
We will be performing a ColdFusion service reboot tomorrow (28th) morning at 5AM UK/London time (27th at 11PM Dallas US time) to apply Java updates to the ColdFusion service.
Downtime expected to be less than 5min.
S12 ColdFusion services are being updated with the latest stable version of Java. Service restarts will occur during this update. Thank you for your patience.
UPDATE 7: We have now completed the migration and updates. All services are back online and if you have already re-created your ColdFusion datasources you should see sites working as normal. If you have any questions please feel free to contact a member of the team. Thank you for your patience.
UPDATE 6: We are now applying the very latest ColdFusion updates. We have tried to import all ColdFusion datasources but unfortunately you may need to recreate the DSN via the CFManager.
UPDATE 5: ColdFusion has been configured and we are now applying our Apache updates to ensure smooth running of CF applications on the server. You may see HTTP and CF services go down but you will still be able to login to cPanel and access services there. Thank you for your patience.
UPDATE 4: We are having some issues with one of the connectors in ColdFusion and need to run a reinstall of CF to ensure it applies the correct configuration. We are very sorry for the inconvenience this has caused. Our tests showed everything was working OK but it appears an Apache update caused some issues. We are working to have all ColdFusion based sites back online asap. Please note only the ColdFusion service is affected and all other services are running normally.
UPDATE 3: We are in the final stages of having services back online and running normally. We are sorry for the extended period of time this is taking; on our new systems some configuration changes were needed, as the setup carried over from the old server didn't allow full functionality.
UPDATE 2: While we run the final configurations, Apache and ColdFusion services may not be working as expected or showing as 404 pages.
UPDATE 1: Files have completed their transfer and we are finishing the config of our CFManager to verify and sync DSNs.
On the 23/JUNE/2017 at 23:00 London time (22:00 UTC) we will be migrating all accounts from server S12 to our new Cloud platform. ColdFusion 10 will be continued on this new server and all CF config will be transferred during this process.
We will need to shut down services at this time to allow the transfer to run as fast as possible, so downtime will be during off-peak hours. We hope to have all accounts migrated by the morning on Saturday. A new IP will be assigned (69.168.236.13) and if you are using A:Records you will need to update your domain's DNS to match. If you are using our DNS nameservers no change will be required.
If you have a dedicated IP we will provide a new IP, but note our new platform uses IPv6 for additional IPs and IPv4 for main root IPs.
The new platform will allow us to provide a higher level of service to all customers.
If you have any questions or concerns please contact a member of the management team who will be happy to help.
Thank you for your understanding.
Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK
On Sunday the 18th of June we will be upgrading Lucee from 5.1 to the latest release and patch 5.2. Downtime will be restricted to Lucee services while we restart them and will be minimal.
Affecting System - US Data Center
Our US data center is currently investigating a network issue on our US ColdFusion servers (Washington, Walla Walla). We hope to have services back online asap.
Affecting Server - [S01] Linux cPanel ~ Coventry UK
UPDATE 3: The migration has been completed and if you have any questions or you are having any issues please do contact a member of the support team. Thank you for your patience and understanding during this migration.
UPDATE 2: We have completed nearly all of the migrations but some of the remaining accounts are larger than most accounts. If you have any questions please do contact a member of the support team via the help desk. Thank you for your patience.
UPDATE 1: The migration is still processing and we hope to have all accounts moved as soon as possible. If you have any concerns please contact a member of the team who will be happy to assist you.
We are migrating all accounts to a new instance due to concerns about the stability and security of key features and services on the server. Once all data is migrated, the old IP will be allocated to the new instance.
Thank you for your patience.
UPDATE 9: After a night of all services running normally we are closing this ticket. We are monitoring the server closely to see if any further issues occur. Thank you for your patience and understanding during this email outage.
UPDATE 2: Instance reboot complete and services are back online. We are now checking the server load and logs to find the root of the issue.
Affecting Server - [S18] Linux cPanel ~ Amsterdam NL
UPDATE 5: A quick reboot is in progress to apply CPU changes on the node. Thank you for your patience.
Affecting System - London Cloud Data Centre
UPDATE 2: The issue was related to a router at the data centre that became unaligned with the network and crashed. Replacements and re-configurations have been made at the data centre to help prevent this from occurring again.
Affecting System - Coventry Data Centre
A DDoS attack was detected at our Coventry DC at 00:44 UK/London time. It was quickly resolved with a total of 6min of interrupted network connections. All servers remained up during this time but some users may have seen some drop-offs from their services.
Affecting Server - [S18] Linux cPanel ~ Amsterdam NL
UPDATE 2: We have installed CloudLinux to ensure single accounts can not over-use CPU cores on this instance. We have also increased the swap memory, as this was an item that was flagged during our investigation. Thank you for your patience.
Affecting Server - [S18] Linux cPanel ~ Amsterdam NL
UPDATE: Load has returned to normal and we are looking at CPU resource improvements. Thank you.
Affecting System - Texas DC
REPORT: The Dallas cloud is operated by OnApp, and the data center manages the hardware. The first alert was for some VMs being down, based on disk I/O reports. From the logs it looked like a dying RAID card. We had to go back and forth with the data center, as they initially did not see that the cause was a bad RAID card. Eventually the RAID card was replaced, the SAN was brought back up, and the VMs were turned back on.
UPDATE 17: A full report of the issue will be posted within the next 7 days.
UPDATE 16: At 1AM UK time all services were back online and running normally. We are gathering a report from all parties to provide. Thank you for your patience during this hardware outage.
UPDATE 15: The reboot has shown further issues within OnApp which the team are correcting now. Hardware is being replaced to ensure the stability of the services. Once the new server has been installed we will post a new update.
UPDATE 14: We are going to be doing a standard reboot of a number of instances to ensure everything is fully corrected. Downtime should be less than 5min. Thank you.
UPDATE 13: Services have come back online but we are awaiting the all clear from the engineers.
UPDATE 12: Work is ongoing at the data center and engineers from OnApp are working to resolve the issue asap. Thank you for your continued patience.
UPDATE 11: The issue has been located in the OnApp hypervisor, which engineers at the data center and the OnApp support team are investigating.
UPDATE 10: We are still seeing some issues with the host machines and hope to have this corrected asap.
UPDATE 9: We have completed the final sync and corrections. We are monitoring services to ensure everything is stable. Thank you again for your patience during this matter. We will post a report as soon as possible.
UPDATE 8: We are running some reboots of the host machine to ensure we fully fix the issues that caused the host machine to fail.
UPDATE 7: We are happy to confirm the instances below are back online. We are running some checks/tests on our OnApp Control Panel instance:
UPDATE 6: Now that the host machine is operating normally we are picking up logs and some issues which we can correct as we boot instances online.
UPDATE 5: We can confirm instances within the cloud are coming back online. Server S22 - 69.168.236.42 is back online. Others will be coming back online shortly.
UPDATE 4: The host machine has been started after the hardware failure that was found. The OnApp CP should start up soon and all other services shortly. We will update this status once all services are confirmed back online. Thank you so much for your patience.
UPDATE 3: Engineers are still working on the servers at the Softlayer data center in Texas USA. We hope to have a report and an ETA as soon as possible. Thank you for your continued patience and understanding.
UPDATE 2: A hardware fault has been found and tech engineers are checking the servers. We hope to have further updates with more detail shortly.
UPDATE 1: As per our new support policy for our Cloud platforms, OnApp support was informed of the issue and has detected a possible hardware-to-software fault. They are working with on-site engineers to resolve the issue asap. We are sorry for this unexpected downtime. Thank you for your patience.
We are seeing an issue at our US Texas data center and the on site team is investigating.
Affected servers:
We are doing everything possible to get the affected services online asap.
UPDATE 2: We are currently investigating an issue which required a reboot.
Our reporting software indicated a disk resource issue which we are investigating and correcting on S17 server (London, UK).
Affecting Server - [S21] Linux cPanel ~ Singapore
UPDATE 1: We have restored access to the server and all services are back online. We are now investigating the cause and once found we will put in measures to avoid this from happening again.
UPDATE 1: We have found the cause of the CPU spikes and corrected it, but we are looking into ways to ensure that if the issue does occur on an account in the future it won't cause such problems. We will update this status once we know more.
We are seeing a number of CPU spikes on S23 (London). We are investigating the cause and looking at implementing systems to prevent the server rebooting due to this. We hope to have systems stable asap and we are sorry for the inconvenience caused. We will keep this status open during the night while we monitor and investigate the CPU issues.
Affecting Server - [S24] Linux cPanel ~ Lucee ~ London UK
Due to storage system upgrades we need to perform a quick reboot of the server. Downtime is expected to be less than 10min. We are sorry for the short notice.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We will be upgrading MySQL to version 5.6 between 3AM and 5AM UK/London time on the 30th of January. Downtime during this period will be minimal; we expect it to be <30min.
Thank you for your patience.
Affecting System - Coventry DC
UPDATE 8: All customers have now been emailed a full report of the downtime.
We are currently experiencing issues with power at our Coventry Datacentre. The generators and APCs are running but some customers may experience issues. We are treating this as a matter of urgency to ensure we have it resolved as soon as we physically can. We are sorry for the inconvenience.
Update 1:
Our Coventry datacentre site experienced a power outage from Western Power at around 08:00 this morning. Our generators started and took the load; however, after a few hours the generators at the site developed faults.
Power has now been restored at the site but this is through our generators, Western Power are working to restore the power which should hopefully be completed shortly.
Update 2:
A further power outage has occurred since the initial restoration. Please be assured we are doing all we possibly can to restore all services.
Update 3:
Western Power assure us power will be restored soon to the affected data hall.
Update 4:
Full power has now been restored to the affected datahall. All services will be online momentarily. If you are still experiencing issues then please let us know.
Affecting System - OnApp
We are running a quick reboot of our OnApp controller; during this time there will be no downtime of any customers' services. Thank you for your patience while we action this control panel reboot.
Affecting Server - [S01] Linux cPanel ~ Coventry UK
UPDATE 5: CPU load has reduced to a normal level and we are monitoring services.
Affecting System - Bravo Node
UPDATE 3 - Full Report:
Firstly, thank you for your patience during this migration period. We are in the middle of running the transfer/restore of the final VMs on the new node. It has been a long process, and longer than we had hoped, but all VMs have been migrated or are in the final stages of transfer to our new UK data centre. Those that are fully restored appear to be running well, with the final ones coming online asap. If you have any issues please do contact a member of the support team via a ticket for the fastest response. With all major migrations or issues we like to be very open about any problems that may have occurred during such tasks; please see our report below.
Problems/Issues Occurred
We saw a number of issues which increased the length of the migration process. Firstly, the amount of data transferred was large; in earlier tests network speeds held at the top speed available, but during the 6th hour of the migration the speed started to drop. We had hoped this was a temporary loss of network speed, but at points the connection dropped entirely. Network issues had been reported on the old node server and were one of the reasons for the migration.
Total Downtime
There has been a total of 40 hours of downtime since we started the migration on Sunday morning.
Migration Reasons
There are a number of reasons for this migration, but the main one was ageing hardware in a data centre that was not performing at the level we wanted to provide to our customers. The server was starting to show issues in both memory and hard drives, and to ensure customers' data and services were not damaged in any way we decided to migrate to new hardware in our UK data centre.
Conclusion
With any large scale migration there are lessons to be learnt, and we have an internal review planned to see how we can improve our SolusVM migrations. Even though the migration was necessary, we believe we can provide different options on how migrations are handled, for example transferring backup files over and then deploying them instead of the live data. Given the extent of the downtime caused, we will be providing all migrated customers' VMs with 6 months of free hosting extensions - if you have any questions regarding this please do contact a member of the billing team. These extensions will be applied over the course of the next 24 hours.
From everyone at Host Media we would like to thank all our customers for their patience and understanding during this migration period. This status issue will remain open for any further updates until all VMs have been checked and services have been stable for a good amount of time.
UPDATE 2: We have 3 VMs remaining in the migration and then all services will have been migrated. If you have any questions please do let our team know. Thank you for your patience and understanding.
UPDATE 1: We have transferred most VMs and the affected customers have been updated via support tickets. If you have any questions or issues please do get in touch with a member of the team.
We are now starting to migrate all VMs from the node Bravo to our Betelgeuse server. Please check your client portal for ticket updates which will contain details of your new IP address.
Thank you for your patience while we run this transfer.
UPDATE 1: All services came back online fine after the reboot and CloudLinux has also been installed. If you have any questions please do contact a member of the team.
We will be performing a server software upgrade on our S17 server which requires a server reboot. The reboot will be a standard reboot that will take up to 10min to complete.
Thank you for your understanding.
UPDATE 2: We have found the affected account and put actions in place to help prevent the issue from happening again.
UPDATE 1: Scan complete and drives are running OK.
Affecting Server - [S18] Linux cPanel ~ Amsterdam NL
UPDATE: Services are all running normally. Thank you for your patience.
UPDATE: Services are coming back online and the reboot was successful. Thank you for your patience.
We are performing a reboot to correct a disk error and update software.
We are currently looking into an unexpected downtime on S17. We hope to have all services back online asap.
Affecting Other - Shared hosting platform
Update 1: The server and all services are back online. Thank you for your patience.
We are performing emergency maintenance on this server which will make all/most services on this server inaccessible.
Services affected include HTTPD and Webmail access, MySQL, POP/IMAP, SSH and FTP.
During this outage, access to your websites, email, files, databases, will not be possible. We apologize for this inconvenience and while we do not have an ETA for this procedure, we will continue providing updates as soon as possible.
Affecting Server - [S21] Linux cPanel ~ Singapore
UPDATE 1: Migration has been completed and all services are running on the new cloud instance. Thank you for your patience and understanding.
We will be performing a migration of all accounts from the current S21 Hong Kong server to a new cloud VM. This is due to network issues that have affected mail ports and connection speeds. Downtime will be minimal as we will be doing a direct transfer of accounts. If you are using A:Records please change your domain's IP to point to: 45.126.124.59 - if you are using DNS then nothing needs to be changed.
If you have any questions please do contact us.
Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK
UPDATE 1: Services are running normally but our team are looking into our Lucee platform to see how improvements can be made to avoid memory overload from Lucee. We will ensure to update all affected customers as soon as possible. Thank you for your patience, and if you have any questions please feel free to open a ticket to the management team who will be more than happy to answer infrastructure questions.
UPDATE 2: Our updates have been completed and services are running normally. Thank you for your patience during the reboot.
UPDATE 1: We will be performing a server reboot tonight at 10PM UK/London time. We will be increasing some resources allocated to the cloud VM to ensure performance is maintained at the highest level. Thank you for your patience.
We are investigating an issue with the cloud VM that is causing some downtime for customers' websites. We will be performing upgrades to the VM which may require a single reboot of the VM.
Thank you for your understanding and patience.
Affecting System - London Cloud - Shared/Reseller Services
UPDATE: Services came back online and all systems are running normally. Thank you.
We will be installing CloudLinux on our S15 server to help ensure CPU usage by accounts is kept to an acceptable level. This install requires a standard reboot of the server and we expect downtime to be under 10min. If you have any questions please contact a member of the team.
UPDATE 2: Services are back online and node issues are corrected. Sorry for the downtime.
Affecting System - DNS Hosted Platform
UPDATE: The network issue has been resolved and all services are back online.
Affecting System - Archer Node
UPDATE 6: We have been monitoring the services during the night and all services are now running on the latest kernel. We will continue to monitor the server as normal to ensure the previous issues do not occur again on this node. Thank you for your patience and understanding during this update.
Affecting System - Archer Node
We are investigating an issue on the Archer node. Our DC engineers are checking this now.
UPDATE: After our reboot the services are running normally and we are monitoring the situation.
Affecting System - Archer Node
UPDATE: Services are back online and we are investigating the root cause.
We have detected a system issue with the node hosting the SQL services listed above. Our engineering team applied system updates and scheduled a brief maintenance window to perform a server restart.
Date and Time: Aug-18-2016 10:00 GMT/UTC (Aug-18-2016 03:00 Local Time)
Please note: This event will reboot the server and a small amount of downtime will occur. Your data and configurations will not be affected by the reboot.
A reboot of server S15 is underway to apply new updates. Sorry for the downtime caused, this will be a max of 5min.
UPDATE: Service reboot corrected the issue and services are running normally.
Affecting System - IP routing issues
UPDATE: We have now resolved the issues; the main cause was a kernel error in the version the server requires. Plans are in place to move customers' VMs and hosting accounts to our new Cloud solutions. Customers will be updated in the near future with planned migrations. Thank you for your patience.
Affecting Server - Linux cPanel ~ Legacy Platform
MAINTENANCE START TIME: 7:30 pm EDT 08/03/16
ESTIMATED DURATION: 1 day
STATUS: In Progress
At 8:30PM tonight (03 August, 2016 20:30 CDT) we will be taking the Toronto 6 server offline in order to synchronize data across multiple disks and re-initialize backup services.
Our team has detected an issue that could result in heavy data loss if left unattended. This process could take up to 24 hours to complete, and all hosting services will be unavailable during that time.
The safety and consistency of your data is one of our highest priorities, and this has been determined to be the quickest and safest way to proceed. Our team will be actively managing the process throughout.
We sincerely apologize for the inconvenience, and will have all services restored as soon as possible.
Thank you for your patience.
Affecting System - MEL3 - Cluster Server
UPDATE 1: Services are back online and running normally. Thank you for your patience.
Affecting System - Global cPanel Cluster Network
We are currently working on an issue with our global cluster network. A network issue was identified which affects multiple servers. As such, some of the sites hosted may load slowly or appear inaccessible. Our System Administration team is actively working on this now and we will update this post as more information becomes available.
We sincerely apologize for the inconvenience this issue has caused. We understand service reliability is of the utmost importance. If you have any further questions please let us know and we will do our best to answer them!
Affecting System - Coventry Data Centre Network Routing
UPDATE 2:
Services have been restored and we are investigating the issue with the Kernel version to help prevent this from happening in the future.
UPDATE 1:
Engineers are checking IP routing with a version of the kernel which appears to be the cause of the issues. We hope to have services back online asap. We are sorry for the downtime caused.
ISSUE:
We are currently investigating a major network issue connected to our Archer node. Engineers at the data centre are working to resolve this issue asap.
Affecting System - US Data Centre - Walla Walla
Update: Services have returned to normal after the network issues were resolved.
Engineers at our Walla Walla, US data centre are looking into packet loss issues on the network. All shared/reseller ColdFusion servers are currently affected.
Affecting System - Server Cluster
UPDATE 1: Services are back online and we are investigating what happened in full.
UPDATE 1: Reboot complete and disk size increased successfully.
Affecting Server - OnApp Cloud
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We will be rebooting the Adobe ColdFusion services to resolve timeout issues in Tomcat. We hope to keep downtime to a minimum; it is expected to be less than 5min.
Update:
After investigating the issue it appears the server was overloaded and after a reboot the memory cleared and all services came back online. We are monitoring the server to see any further build up of memory usage.
Issue:
We are currently investigating issues with the Archer node. Our engineers are working on the issue.
Affecting System - DC Coventry Network
UPDATE 3: We found the kernel was causing the main issues which we have now corrected. We are looking into why this happened and how to try to prevent this in the future.
UPDATE 2: The IP bridge for the Archer node is no longer showing up on this server. Our team are working on correcting this to get all services back online asap.
UPDATE 1: We have lowered the traffic coming into the rack and are now working to restore all services.
We are seeing a large amount of traffic hitting our servers. We are looking into the cause and working to resolve this asap. Sorry for any inconvenience caused.
Affecting System - US ColdFusion Data Centre
One of our data centre's upstream network providers performed emergency maintenance which interrupted our service. All servers and sites are currently up at this time and downtime was <2min.
If you have any questions or concerns please feel free to contact us at any time.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE 1: The reboot and memory upgrades have been completed. Services are running well. If you see any issues or have any questions please do let us know. Thank you for your patience.
Affecting System - Network Issue
UPDATE 2: We can fully confirm this was a data centre network related issue and we are working with the DC to find out exactly what happened.
Affecting System - Archer
UPDATE 2: We have resolved the issues detected in our memory update and all services are back online. We are sorry for the inconvenience caused; this downtime was unexpected but necessary to ensure node services run smoothly and that it does not cause larger issues in the future. If you have any questions or comments please contact the sales or management team who will be more than happy to help.
Affecting System - Archer Node
Update 1: The error at the data centre regarding our IP subnet was human error; our management team are working with the DC management to see how to prevent this from happening in the future. We are sorry for the downtime and if you have any comments or questions please feel free to contact a member of the team. Thank you for your understanding.
Update 6: The migration has been completed and all services are running normally. Thank you for your patience during this process and if you have any questions or comments please do let us know.
Update 5: We have started the transfer of the S10 server. We hope to have this completed within the next 4-6 hours. Thank you for your patience.
Update 4: Server S11 has been migrated and is now running on the new hardware.
Update 3: Due to unforeseeable complications with the transfer we had to stop the migration and reschedule the migration of the S10 server for Sunday 9PM. We are very sorry for the inconvenience caused and if you have any questions in the meantime please do let us know. Thank you for your understanding and patience.
Update 2: Due to lower transfer speeds between the servers than expected we have rescheduled the migration of the S11 server until tonight at 9PM. S10 is almost complete and should be back online shortly.
Update 1: We have completed the migration of the dedicated VMs and are now in the middle of migrating the 2 shared/reseller servers S10 and S11. Once this has been completed we will update this status. Thank you for your patience and bearing with us during this migration process.
We will be performing a node migration at our Coventry, UK data centre on Friday the 4th of September and starting from 10PM UK/London time.
Affected services:
Expected downtime: 2-4 hours per service
The new node is one of our top of the line servers and we hope you will see a general performance increase.
Affecting System - DeltaUK2 Node
UPDATE 2: There was a temporary network issue on the racks that host the DELTAUK2 at the Coventry DC which has been resolved.
Affecting System - Network Connections
UPDATE 2: One of the data centre's network providers had experienced some issues. This has been resolved at the DC level and we will continue to monitor the situation.
Affecting System - Server Node
Affecting System - Power Supply
UPDATE: DC has resolved the power issues and services are now fully online. Thank you for your patience and understanding.
Affecting System - Node: Betelgeuse
UPDATE 5: All services are stable and the sync will take a number of hours to complete. We will mark this status update as resolved and provide further updates if required. Thank you for your patience.
On Sunday the 14th of June we will be running security updates on our XEN nodes. Services will be rebooted on Sunday between 1AM and 2AM. We do not expect any major downtime apart from the reboots. The reboots may take a little longer than normal to ensure all security updates are installed correctly.
Thank you for your understanding.
Affecting System - Coventry DC Network
UPDATE 6: The issue was caused by a widespread problem affecting much of the UK's connectivity at the London Internet Exchange (LINX). We have disabled our peering at LINX for now and all services are running normally. We will provide further updates shortly.
Affecting System - UK Coventry Data Center
UPDATE 4: Data Centre Update: The problem was caused by a broadcast storm on our network and as a result a number of rack switches locked up, which we had to reboot.
Affecting System - Network
UPDATE: One of our upstream network providers has been experiencing some issues. We have re-routed the network traffic around them for the time being. All servers and sites are currently up.
Affecting System - Nodes affected: CharlieUK2
UPDATE 3: We have corrected the issues and all VMs are back online. We are monitoring the services and investigating the network issue fully.
Affecting System - Network
UPDATE 1: Services were restored at 23:25.
Affecting System - Network (UK-DE)
UPDATE 2: Networks appear stable but we are monitoring.
On Sunday the 12th at 20:09 UK time Apache made an automatic graceful restart. This caused an Apache log rotation, and our external monitoring services picked up 2-3min of downtime. Other monitoring services showed sites and services running. If you have any questions about this outage please do contact a member of the team.
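For anyone running their own checks, a probe that retries before declaring downtime will ignore a brief graceful restart like this one. A minimal sketch in Python 3 (the URL is a placeholder):

import time
import urllib.request
from urllib.error import HTTPError, URLError

def is_up(url: str, retries: int = 3, delay: float = 60.0) -> bool:
    # Retry before declaring downtime so a short restart or log
    # rotation does not raise a false alarm.
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=10):
                return True
        except HTTPError as err:
            # The server answered; treat only 5xx responses as down
            if err.code < 500:
                return True
        except URLError:
            pass  # connection refused or timed out; retry
        if attempt < retries - 1:
            time.sleep(delay)
    return False

print(is_up("https://example.com/"))  # replace with your site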
Affecting Server - VM/VPS (SolusVM) Group
Report:
The main cause of the issue was due to the XEN security updates that caused a failure in the boot up systems of the Archer node. Our team had to correct the boot up issues and run manual hardware reboots at the data centre. Once the node came back online all VMs loaded up successfully.
What we are planning to help prevent this from happening again:
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 1: VMs are now all back online and we are checking to see what happened to cause all VMs to fail without warning.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 2: The outage for the majority of our servers should have been around 4 minutes. Those in racks 30, 13 and 27 experienced up to ten minutes of downtime. This was due to the routing process restarting on our servers' gateway device; we are looking into the cause of this with Juniper and hope to have another update from them within the next 24 hours.
Affecting Server - [S11] Linux cPanel ~ Lucee ~ London UK
We have resolved the issue and will be publishing a full report shortly.
Affecting System - Coventry DC
UPDATE 1: The network has come back online after corrections by the Coventry DC team. We are investigating what happened and will update you as soon as possible.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 2: Updates have been sent via support tickets for clients with information regarding IP changes due to new DC node being deployed.
UPDATE 1: Migration has been rescheduled for the 5th of Feb.
Due to a decrease in performance on the node: AlphaUK2 we will be migrating all VMs from this node to the node:Â Betelgeuse. No IP changes will be required as we will be migrating the IP subnet over to the node.
Migration scheduled for: 03/Feb/2015 10PM UK/London Time Zone
Thank you.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 5: All data has been migrated and we have been testing the sites and VMs over the night. Load and performance has generally increased. If you have any questions or issues please do get in contact with a member of the support team. Thank you for your patience and support during this migration.
UPDATE 4: We are still migrating data and currently on the largest section of data to migrate. Once this has been completed we will update this status.
UPDATE 3: We are still working on migrating the final services over to the new node. We hope the speed will increase once off-peak time comes. Thank you for your patience.
UPDATE 2: The migration of data is still progressing; due to the faults in the BravoUK2 drive, the transfer process has been slower than expected.
UPDATE 1: Betelgeuse has had its final checks and data is now migrating over from BravoUK2.
Due to faults found in the BravoUK2 server we are performing an emergency migration of all service instances to a new node which has been set up. The complete data transfer may take up to 10-12 hours, after which we will switch over the IP subnets to ensure all clients' services keep the same IPs. No domain or DNS updates will be needed.
New node name:Â Betelgeuse
We are sorry for the short notice of this migration, but to ensure no data or customer services are affected we are pushing forward with this emergency migration.
We will keep this network status updated while we process this migration.
Affecting Server - VM/VPS (SolusVM) Group
Archer Downtime Report
Report Downtime Start-End Date/Time:
30/DEC/2014 13:05 - 31/DEC/2014 03:30
Cause:
The first reports showed issues with one of the 1TB SSD hard drives within the RAID10 configuration. This would not normally cause such issues due to the 7 other drives in the RAID setup. On further investigation we found a second hard drive had become faulty. This caused corruption in some files that controlled many elements of the Xen virtualisation setup which broke the network bridge between the node main domain and the VMs.
Fix:
We were able to restore the configuration files to allow networks to become available once again. No data loss has occurred and the VM instances were running normally during this time, but without a network connection to the outside world. We are continuing to monitor the server and any sign of disruption will be investigated straight away.
Future Prevention:
We are setting up new monitoring tasks on our RAID and hard drives company wide starting with the Archer node to help detect issues like this sooner.
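As an illustration of the kind of check we mean (a sketch only, assuming the smartmontools package is installed and the script runs as root; the device names are examples):

import subprocess

def drive_healthy(device: str) -> bool:
    # smartctl -H prints the drive's overall SMART health assessment
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True,
    )
    return "PASSED" in result.stdout

for dev in ("/dev/sda", "/dev/sdb"):
    print(dev, "OK" if drive_healthy(dev) else "CHECK DRIVE")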
UPDATE 11: As of 3:30am UK time we were able to correct the network issues on the server. We are monitoring the server heavily and will be making adjustments throughout the day to ensure services run smoothly. Further updates will be posted shortly. Thank you for your patience during this issue.
UPDATE 10: New server hardware has been requested directly with our UK data centre and we hope to have this deployed asap.
UPDATE 9: Due to the hardware failure on the drives the configuration setup for our virtualisation systems has become corrupted and we are looking at restoring/transferring VMs to a new node server asap. We hope to have further updates shortly.
UPDATE 8: We have the engineers at the data centre investigating their configuration and any faults at their end. From all of us at Host Media, we are very sorry for this long period of downtime.
UPDATE 7: We are still working on a network issue connected to the local network to the node. The network issue is a misconfiguration in the bridge routing of the IP to VM.
UPDATE 6: We have been able to access the VM consoles locally and are now re-networking the IP configs, as the network configuration seems to have been lost during the issues.
UPDATE 5: The node has come back online after its faulty drive replacement. We are now working on restoring access to the VMs and hope to have our customers back online asap.
UPDATE 4: A faulty drive (one of our 1TB SSD) in our RAID collection has caused the drives to fail their sync and brought down the VMs. The data centre team are replacing the faulty drive and also checking the controllers. We hope to have the node back online in 10min and then try to boot all VMs.
UPDATE 3: It appears a RAID controller could be the cause of the issues on the 'Archer' node. We hope to have more for you soon and your websites/VMs back online.
UPDATE 2: We are seeing some slowness in VMs coming back online - our DC team are checking the status of our RAID10 controllers and our offices tech team are checking the status of VMs data. Further updates to come.
UPDATE 1: The server node is coming back online now and we are making sure all VMs come back online. We will update you as soon as we can confirm what happened to this node.
Affecting Server - [S01] Linux cPanel ~ Coventry UK
We are currently working on resolving load issues with S01 server and we have started migrating some customers accounts to our CloudLinux SSD servers. If you wish to be migrated to our newer hosting platform (CloudLinux SSD Servers) while we are correcting the issues please contact a member of the sales team.
We are sorry for any downtime and slowness of the website loading speeds. We hope to improve the stability as soon as possible.
Affecting Server - VM/VPS (SolusVM) Group
We will be rebooting the following server nodes at 8PM UK/London time to ensure the kernel updates are fully applied.
Update type: Security
Servers Nodes: AlphaUK2, BravoUK2, CharlieUK2 and AlphaUS2
Shared Services: UK1, US1 and S5
Thank you.
Affecting Server - VM/VPS (SolusVM) Group
We had to run a manual reboot of the node BRAVO UK - The OS is coming back online now and we are monitoring. VPS services should be coming online shortly. Sorry for the unexpected downtime.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE: Services are coming back online - we are investigating the cause of the downtime.
UPDATE 1: Network restored and services are coming back online. If you have any problems please contact the support team.
Affecting Server - [S01] Linux cPanel ~ Coventry UK
UPDATE 1: Services are now coming back online and we are monitoring the situation.
Affecting Server - Linux cPanel ~ Legacy Platform
UPDATE 1: We have completed our repairs on the SQL services and all SQL systems are back to normal. We are sorry for the inconvenience caused by the SQL downtimes.
Affecting System - Charlie UK Node
The node Charlie at our UK data centre was experiencing issues which were first reported as network-based by our internal systems, but on checking were due to a VM instance with corrupted data. We are investigating further but all VMs are coming back online. We will update you further as soon as we can. We are already prepping our new systems using Xen and look forward to hosting all our customers on these new systems soon.
Thank you and sorry for the downtime caused.
Affecting Server - [S01] Linux cPanel ~ Coventry UK
UPDATE 8: We have seen another automatic reboot from the server and we are investigating this now. Services are coming back online though.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE 1: The server is stable but load is still a little high for our liking and we are investigating now. We hope to have the servers load back to normal levels soon.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We are going to be running general updates and backup software updates on Sunday 10PM - Monday 1AM on our S6 CF10 server.
Expected downtime: 1hour.
Thank you for your understanding.
Affecting System - CPU
UPDATE 4: After 24 hours of normal running we are closing this issue. If you require any support or have any questions please do let us know.
Affecting System - Network
UPDATE 8: After 24 hours of normal running we are closing this issue. If you require any support or have any questions please do let us know.
Affecting System - UK Coventry DC Network
UPDATE 1: The outage was caused by an emergency reboot of our core routing platform at our Coventry site as recommended by JTAC engineers due to an error we were seeing on these racks. If you are seeing any issues please do let us know.
Affecting System - Network
UPDATE 1: We have corrected the issue and all services have been running fine for some time. We had one spike in traffic but that was resolved as soon as it was detected. We will continue to monitor the services and any further updates will be posted here. Thank you for your patience.
Affecting System - Germany Node - Charlie
We are currently looking into a high load issue on our Charlie node in the Germany DC. Services are coming back online now, but if you have any questions please contact us.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE 5: The new CF server has been deployed and our team are just checking all settings for the new system.
UPDATE 4: Our DC team have started setting up another CF server to spread load and accounts over to. We hope to have this online soon and will offer some customers the option to be transferred to this server. Further updates to come.
UPDATE 3: A service reboot is currently in progress due to high CPU load - our engineers are checking the cause now and will implement systems to stop this from happening again.
UPDATE 2: Reboot is now underway and we will be monitoring services once back online.
UPDATE 1: Further updates, including memory updates, will be happening tonight at midnight UK/London time to improve stability further.
Issue:
We are currently seeing a number of ColdFusion service downtime issues which appear to be caused by threads being unable to be created due to resource issues.
Planned Maintenance:
To correct the mentioned issue we will be applying further memory resources to the S6 server and applying updates to the JVM/Heap settings to optimize the performance of the server.
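For illustration only: ColdFusion reads its JVM arguments from the jvm.config file, and heap tuning of this kind typically means raising values such as the ones below. These flags and numbers are a hypothetical example, not the exact settings our engineers will apply:

# Hypothetical jvm.config heap settings (example values only)
java.args=-server -Xms2048m -Xmx4096m -XX:MaxPermSize=512m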
Affecting System - Germany Data Centre
UPDATE 5: The issue has been fully resolved and all network activity is back to normal.
UPDATE 4: We believe we have found the main cause of the issue and are now monitoring the network lines at the DC.
UPDATE 3: We have run a complete reconfigure of the network and the hardware for the routers, and so far all networks have come back to a stable level. We are monitoring to ensure all services are back online.
UPDATE 2: A reboot of the server and network services was required to complete a full test of the servers network config. The server is coming back online now - we hope to have further updates soon.
UPDATE: The data centre has started speaking with network providers connecting the DC to the rest of the world (i.e. networks over to the UK etc) as the DC is currently unable to find an issue with the routers or networks within the centre.
We are currently seeing packet loss to our racks in the Germany data centre, which was isolated to a couple of IPs but now appears to affect entire subnets.
The engineers at the data centre are working as hard as they can to find the cause of the issue and we hope to have this resolved soon.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: Due to some delays in the software updates we will be performing a CF service restart tonight instead. Downtime should be minimal.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: Services have now all come back online and the CFML services are running normally. If you are seeing any issues please do contact one of the support team via the client portal help desk. Thank you for your patience.
Affecting System - US Data Centre
We are investigating issues at our US data centre.
Update: We have completed the updates to the Plesk CP.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: The server auto-restarted the affected services, but we will be investigating what caused this. If you need any assistance please contact the support team.
Affecting System - US Data Centre
The Alpha node, Media servers and Reseller servers became unresponsive due to heavy load through the network to these boxes. We have rebooted the servers and the team are now looking into this matter.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE: Network and services issues have been resolved. If you have any questions please feel free to contact the accounts team.
We are currently working on an unexpected failure on our US network. We hope to have this resolved shortly.
Affecting Server - Linux cPanel ~ Legacy Platform
Affecting Server - VM/VPS (SolusVM) Group
We are currently working on an issue affecting the Charlie UK node.
We are sorry for the unexpected downtime.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: Last night our team completed restoring a backup of some SQL databases that had become corrupted and caused issues with service loading. We are monitoring, but all services appear to be running normally.
We are currently working on a disk issue on our S6 server which caused unexpected downtime.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: All services are back online and the network issue has been corrected.
We are currently seeing a network issue at the data centre in the US. We are working on correcting this and hope to have everything back online soon.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: We have our services back online and are monitoring the network to ensure all issues have been resolved.
We are currently facing a DDoS attack on our US data centre network. We are doing everything we can to resolve this and have all websites back online soon.
Sorry for the inconvenience caused.
Affecting System - UK Charlie Server Cluster
UPDATE: The DC team have resolved the network issue and are now looking into what happened and how to try and prevent this from happening again. If you need any further assistance, or if you are having issues with your service, do let us know.
Affecting System - US Data Centre Issues
UPDATE: We have corrected the issues with the server and plans are being put in place and all customers on these US media servers will be updated soon.
We are currently experiencing issues with our US data centre which we are working on. We hope to have updates shortly.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE/07.JAN.2014
- All services have been running stably for the last 12 hours and we are continuing to monitor the server.
UPDATE/06.JAN.2014
- We are continuing to see 1min downtimes on this server due to the ACF service stalling. We have our team working on this with a CF consultant.
UPDATE/03.JAN.2014
- 3PM Further to some more downtime logged within our systems we have made further adjustments to the CF memory systems and now monitoring our adjustments. We have further actions if required planned.
- 10AM We have made some changes to our CF services and now monitoring the server to ensure no further issues appear. We will continue to update this page once we have further information.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
Adobe released an important security hotfix, APSB13-27, which we have scheduled to be installed tomorrow, Wednesday night (Nov 20th), at 11PM UK time.
Date: Wednesday night (Nov 20th)
Time: 11pm GMT (3pm PST)
Downtime:
Thank you for your understanding.
Affecting Server - [S05] Linux cPanel ~ Railo Server ~ Kansas City USA
UPDATE We have resolved the IP routing issue with the data centre and all services are running normally.
We are currently investigating a network issue on IP routing on our S5 server. We hope to have further updates shortly.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We have scheduled an Apache recompile for updates on the S6 server.
Expected Downtime: <10min
Date/Time: Thursday 31st October 2013 at 23:59 (UK/London timezone)
If you require any assistance or have any questions please do contact the accounts team who will be more than happy to assist you.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: Services are coming back online and we are now running scans on the servers to ensure everything is running normally.
We are currently having network issues at the US data centre controlling our CF10 services and we hope to have this resolved shortly.
Thank you and sorry for this unexpected downtime.
Affecting System - DNS 1 Node Change
UPDATE: All services appear to be 100% stable and no reports of issues have come in. If you do have any issues please contact the Web Hosting support department.
-------------
UPDATE: We have made the change internally and are now monitoring all services to ensure the DNS takes effect.
-------------
Tonight (UK/London time) we will be migrating the DNS1 cluster from its current node to a new node. The cluster's IP will change and this may cause some downtime for websites until all servers' DNS zones are updated. No changes will need to be made on your domain/DNS as this is an internal change.
Affected Services
Customers using these nameservers will be affected by this change:
dns1.dnshostnetwork.com
dns2.dnshostnetwork.com
If you are using A:Records to point your domain name to our services you will not be affected by this change.
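If you are unsure which applies to you, here is a minimal sketch in Python 3 using the third-party dnspython package (pip install dnspython; example.com is a placeholder):

import dns.resolver  # assumes dnspython is installed

def uses_our_nameservers(domain: str) -> bool:
    # Look up the NS records the domain is delegated to
    answers = dns.resolver.resolve(domain, "NS")
    nameservers = {str(rr.target).rstrip(".").lower() for rr in answers}
    print(domain, "->", sorted(nameservers))
    return any(ns.endswith("dnshostnetwork.com") for ns in nameservers)

uses_our_nameservers("example.com")  # replace with your domain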
Sorry for any downtime caused, but we hope this will increase the overall performance of the DNS cluster. Please note that no downtime may occur at all; this update is for your records.
Thank you,
Affecting Server - [S04] Linux cPanel ~ Railo Server ~ Nuremberg, Germany
We will be migrating the server 178.63.146.5 (s4.dnshostnetwork.com) to a new server. All customers on this server have been emailed, so please check your inbox for the email from us. Make sure to check your SPAM/JUNK folder just in case you have any filtering on your email client.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE: In the early hours of this morning (UK Time) the network was repaired and all services came fully back online. The data centre are looking further into the cause and how to prevent these issues from happening again. Thank you for your patience.
UPDATE: The network issue we are suffering is an external issue caused by a fiber cut. While we don't currently have an ETA, they have found the problem and are working on fixing it as fast as possible.
ISSUE: We are currently investigating downtime on the nodes: AlphaUS2, BravoUS2 and US hosting services
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 2: The servers are back online after the transit provider re-ran their filters.
UPDATE: It appears there has been a filter issue on the 78.157.192.0/19 subnet with our transit providers. We have requested they run manual filter updates asap and expect this issue to be resolved shortly. We are sorry again for the downtime caused.
ISSUE:
We are currently investigating 2 server nodes that have become unresponsive to pings. The nodes affected are BravoUK2 and CharlieUK2.
We will post further updates as soon as we can.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 5: We are now at 80% completed and should have the final data restored in the next 1-2 hours.
UPDATE 4: We have restored 40% of the data and are working on the remaining 60% now. We hope to have most of the offline accounts back up over the next few hours.
UPDATE 3: Our team and a DC engineer have confirmed that both drives within the server's RAID array had become faulty. We are now replacing both drives and will be restoring the data from backups. The backups available are from the date: 03/July/2013.
UPDATE 2: The migration of data on the servers failed due to the hard drive issues. We are now at the data centre replacing the hardware.
UPDATE: We are now migrating hosting services to a new node and will update clients shortly. Some VPS services are running normally and those clients will be contacted shortly to be migrated.
We are currently working on a drive issue on the Delta node; due to these issues we will be looking at migrating instances to a different node or unracking the drives and replacing them with upgraded hard drives. We will post further updates as soon as we can.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 2 09:47 - 26/June/2013: We now have access to the node and are checking the server for server-side network issues. We hope to have everything checked and repaired within the next 90 minutes.
UPDATE 1 09:17 - 26/June/2013: Network engineers are now at the data centre and servers to work on the networks connected to the node BRAVO. We are sorry for the downtime but we are doing everything we can to get the server back online asap.
ORIGINAL: We are currently investigating an issue with the networks in our Germany data centre connecting to the node BRAVO. We hope to have an update soon and these issues resolved.
Affecting Server - VM/VPS (SolusVM) Group
UPDATE 21:42 | 25/June/2013: All services have now come back online and have been monitored for the past hour.
UPDATE 16:29 | 25/June/2013: We now have the KVM connected at the DC and now working on resolving the issue. If we are unable to get the network issues resolved a reboot will be planned for tonight (UK Time).
ORIGINAL POST: We are currently looking into a connection issue with the Delta node in Germany. The server is online but failing to respond to external VPS control panel (SolusVM) commands. We are connecting a KVM and checking this further. We hope to avoid a server reboot so that uptime of the VPS instances is maintained.
Affecting Server - [S04] Linux cPanel ~ Railo Server ~ Nuremberg, Germany
Update 1 - 16:01 19/June/2013: We have adjusted the Railo/Tomcat memory settings to help with the performance and CPU issues seen. We will continue to monitor and update this status report.
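As a rough illustration of the kind of monitoring we run after a tuning change like this, the Python sketch below times a handful of HTTP requests against a page on the server. The URL is a placeholder rather than an official test endpoint; point it at one of your own pages.

    # Minimal monitoring sketch: time a few requests against a site on
    # the server to confirm the memory changes have helped.
    # The URL is a placeholder; use one of your own pages.
    import time
    import urllib.request

    URL = "http://s4.dnshostnetwork.com/"  # placeholder test page

    for attempt in range(5):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(URL, timeout=30) as resp:
                elapsed = time.monotonic() - start
                print(f"attempt {attempt + 1}: HTTP {resp.status} in {elapsed:.2f}s")
        except Exception as exc:
            print(f"attempt {attempt + 1}: failed ({exc})")
        time.sleep(2)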
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
Tonight we will be installing some PHP extensions and running an Apache recompile; a server reboot will be required. We expect a maximum downtime of 30 minutes.
We are sorry for any inconvenience caused.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
LATEST STATUS (16/May/2013 - 10:15AM): The server has been running well over the past 24 hours and only small adjustments have been made to CF settings without issues. We will soon mark this migration complete, and the old server will be scheduled for shutdown in a week or so.
PREVIOUS 1 UPDATE (15/May/2013 - 9:09AM): The migration has been fully completed and we are seeing sites run really well on CF10.
PREVIOUS 2 UPDATE (14/May/2013 - 17:29): We have 100% completed the migration and are now testing the CF services further. If you find any issues with your website please contact the ColdFusion support department - Direct URL: https://www.hostmedia.co.uk/client/submitticket.php?step=2&deptid=9
Note: If you are using our DNS/nameservers and your website is showing as offline/down, this is because your account has not yet been migrated. We are working on having all accounts migrated as soon as possible, but this may still take some time. Please see the percentage above for details.
New Server Features: ColdFusion 10 Enterprise, CFManager (New version coming soon), CloudLinux (Based on CentOS 6 64Bit), Clustered DNS Network and Latest cPanel/WHM
New servers IP: 162.208.0.210
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE: Upgrade completed and service rebooted.
A new ColdFusion 10 update was released late yesterday afternoon which we have scheduled to be installed tonight (15th of May) at 11:59PM. A ColdFusion service reboot will be required which will cause a small amount of downtime. Expected downtime <5min.
ColdFusion Update Information:
ColdFusion 10 Update 10 Tuesday, 14 May 2013
Update Level: 10
Update Type: Security
Update Description: The ColdFusion 10 Update 10 includes important security fixes.
Affecting Server - VM/VPS (SolusVM) Group
CURRENT STATUS: United States (Kansas City) - ALPHAUS2 IPs are still null-routed by our DC ISP but we are working on this.
UPDATE 3 - 09/May/2013 | 17:31: We have started to see another attack on the same instances after the first set was successfully re-routed. We are working on this now and the DC is monitoring our routers for the BRAVO node.
UPDATE 2 - 09/May/2013 | 09:15: The attack on some of our clients' servers has ended and we are reporting a normal service level. The clients who were affected have been contacted; only those clients were affected by this attack, and all other services ran normally throughout. Thank you.
UPDATE 1 - 08/May/2013 | 22:49: We have null-routed the IPs; the attack is still continuing, but the network appears to be handling the traffic fine now. We will continue to monitor, and the data centres will do the same.
We are currently seeing a DDoS attack on some of our servers in Germany and the US. We have null-routed the attacking IP, but some attack traffic is still reaching the network.
UPDATE 1: We have started prepping and migrating the S9 server.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
Update 1: The server is performing normally and management will be sending an email to all CF customers shortly.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We have a quick reboot planned in the next 30-40min for the CF Linux server to increase its general performance. We are sorry for the downtime caused, but this should be no longer than 10 minutes.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
UPDATE 4 - 12:53 / 21/April/2013 : We have now completed the migrations and all sites are responding well and running on the new servers. If you have any issues please do contact us and we will be more than happy to help.
UPDATE 3 - 22:52 / 19/April/2013 : We are still migrating over sites, due to the large amount of data it is taking some time but we will continue migrating sites and update all customers once completed. Thank you for bearing with us.
UPDATE 2 - 16:22 / 19/April/2013 : The migration continues but it is going well, we hope to have all sites migrated by the end of the day. Just a quick reminder if you are using our DNS/Nameservers (dns1.dnshostnetwork.com / dns2.dnshostnetwork.com / dns3.dnshostnetwork.com) you will not need to do anything. As soon as your site is migrated via cPanel the DNS will automatically point your site to the new servers. If you have any issues at all please do contact us via a support ticket to the WEB HOSTING department.
UPDATE 1 - 10:09 / 19/April/2013 : We have started the migrations after some setting changes and updates to the new server. Once the migration has been completed we will send all customers the new IP address again and to supply a general update. The new IP is: 173.208.236.229
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We will be applying ColdFusion 9 security updates midnight tonight with a 10/15min reboot. If you have any questions please do let us know.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
UPDATE 6 21:50: Due to the issues with the CF services we are now restoring the CFIDE and settings from a backup. We hope to have this resolved soon.
UPDATE 5 16:42: We are seeing some issues with the ColdFusion service after the updates earlier today. Our Adobe-qualified partners are checking this now to see what the cause is. The server has been upgraded heavily and is running on newer systems, which will increase the general service level.
UPDATE 4 15:52: We will be performing short restarts in the next hour to ensure the changes fully take effect on the server and that the memory increases are running correctly. We are sorry for any further downtime, but after these reboots the server will be classed as stable. We will continue to monitor the server to ensure any problems are worked on straight away. We are again very sorry for the downtime this morning and hope the new improvements will allow your sites to run faster than ever before.
UPDATE 3 12:44: All services are running normally; we are checking the server for issues now that it is live. We will keep this status update open until we have run our reports.
UPDATE 2 12:26: We have completed one scan and a reboot, which did not correct the issues, and a new scan has already started. A number of issues were fixed by the first scan and we hope this second scan will correct the final issues so the server can boot correctly.
UPDATE 1 11:19: We are now running an fsck (disk check) to work out the issues on the server and why the drives won't boot up correctly. We hope to have this completed soon and the server back online. Again, we are very sorry for the downtime and hope to have everything back online soon.
Affecting System - Gosport (Hampshire), UK Data Centre
UPDATE 2 28th/March/2013 22:48: We are currently delayed in the migration due to software installs. We will be working through the night to get the service up for tomorrow. We will keep posting updates here, but if you have any questions feel free to contact the sales team.
UPDATE 1 28th/March/2013 16:33: We are now working on getting the server racked; our team at the data centre are just putting all the bits together. Once we have the server racked we will get working on this. Sorry for the delay.
Affecting System - Coventry Subnet Issue
We are currently having an issue with the subnet 78.110.170.66 - 78.110.170.94 - websites and virtual instances will appear down. We have our tech team investigating and working to resolve this.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We will be running a quick reboot of our CF9 cPanel server to apply extra disk storage to that server. Expected downtime: <10min
Affecting System - Walla Walla, USA Data Centre
UPDATE 1 (11:24AM London Time): Full network access has been restored and no services were affected beyond the outage itself. The DC team are investigating further. Sorry for the downtime caused.
============
We are currently experiencing a network failure in the Walla Walla, USA Data Centre. This is being investigated and we hope to have further updates for you soon.
Sorry for the downtime caused and we hope to have this resolved asap.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
This reboot has been rescheduled for tonight (7/March/2013):
Due to updates to our server instance we need to run a reboot to apply the performance features. This has been scheduled for midnight tonight. The expected downtime is less than 15 minutes.
Affecting Server - Linux cPanel ~ Legacy Platform
UPDATE 1: The DNS cluster has been repaired and services are back online, but we are still working on some parts of the cluster. Hosting services should start resolving shortly, and access to FTP/mail/web should be back soon.
---------------------------------------------------------------
ISSUE: We are currently investigating an issue with the DNS cluster on our shared hosting. Customers using these DNS servers may be affected:
dns1.dnshosted.co.uk (208.43.81.114)
dns2.dnshosted.co.uk (50.22.35.226)
dns3.dnshosted.co.uk (174.37.183.108)
We are working on this now and hope to have it resolved shortly.
Affecting Server - VM/VPS (SolusVM) Group
We will be running some updates on the Coventry Bravo node to ensure we are able to provide additional server features to you.
To ensure these features are applied correctly we need to run a quick reboot of the server. This has been scheduled for Wednesday morning at 00:20 (6/March/2013). The expected downtime of the node during the reboot is <15min, and our team will be checking to ensure everything reboots correctly.
If you have any issues please do contact our support team and they will be happy to help you.
Our DE1 Windows server (Germany) has become unresponsive, but our engineer is at the data centre now working to determine whether the issue is hardware or software related. We will post updates as soon as we can.
We will be performing a quick reboot of our DE1 Windows Server to activate some newly installed software. The downtime will be small; we expect no more than 15 minutes. Sorry for any inconvenience caused.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We are currently rebooting the CF services on this server. This may cause your websites to become slow or stop working during the reboot of the services. Sorry for any downtime caused.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
We are currently performing a server reboot of US1 to bring upgrades and system changes into effect.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We have a scheduled reboot planned tonight of our cPanel US ColdFusion 9 servers to apply updates to the web services. We only expect a downtime of 5 minutes.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
Affecting Server - [S01] Linux cPanel ~ Coventry UK
Affecting System - UK Cloud Rack
We are currently having issues after a reboot on our Windows server in the UK. Our team are working on this and should have an update shortly. We are sorry for this downtime and the issues caused. Thank you.
Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA
Affecting Other - UK Network
Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA
We are currently seeing a high load on our US Railo server (DC: Phoenix Server: 1). Our team are already investigating.
We are currently working on high CPU load from accounts on the server and are performing updates to the server. We are sorry for any downtime caused, but we are doing everything possible to resolve the CPU load issues and restore the service to a stable level.
Future/long-term plans for this server: Directors have agreed plans to move to our new servers which run higher-grade processors (Intel i7s). We will be ordering the hardware soon and running tests on this platform.
Thank you,
Update 1 : 10:30
All services appear to be coming back online; we are just checking the settings and CPU readings.
===========================
We are in the middle of a restart and performance scan on our Windows server to increase the performance and make the server more stable.
We are sorry for the downtime and hope to have everything back online asap.
Thank you,
Due to a high load from PHP and web services we are performing a web service restart to clear temporary files. This will greatly increase the performance of the service.
Expected downtime <1
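For illustration only, below is a minimal Python sketch of the kind of temporary-file cleanup described above; the directory and age threshold are assumptions, not the exact job we run.

    # Minimal sketch: remove temporary files older than a week from a
    # temp directory. TMP_DIR and MAX_AGE_DAYS are illustrative values.
    import os
    import time

    TMP_DIR = "/tmp"          # illustrative; the real job targets web temp dirs
    MAX_AGE_DAYS = 7
    cutoff = time.time() - MAX_AGE_DAYS * 86400

    for name in os.listdir(TMP_DIR):
        path = os.path.join(TMP_DIR, name)
        try:
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                print(f"removed {path}")
        except OSError:
            pass  # file vanished or is protected; skip it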
Thank you and sorry for the downtime.
UPDATE 1
We have corrected the Apache issue but are still looking into what caused the fault, to make sure this does not happen again. If you have any questions do contact the team and ask for the level 3 tech team to help.
Thank you,
===
We are currently having issues on our CF1.HOSTMEDIAUK.COM server, with Apache failing to start up. Our team are working on this now and we hope to have it resolved shortly.
Thank you and very sorry for the inconvenience caused.
Update 1: 30/04/2012 13:12
We have made the changes to the ColdFusion services, which included adjusting the MaxPermSize / maximum JVM heap and settings on the Windows services. This appears to have greatly increased the general performance of the ColdFusion service but requires monitoring. The server has enough resources to be updated again if required.
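For reference, JVM memory settings of this kind live in ColdFusion's jvm.config file. The Python sketch below simply reports the heap and MaxPermSize flags from that file; the path shown is the usual CF9 default on Windows (an assumption) and may differ on a given install.

    # Minimal sketch: print the JVM heap and MaxPermSize flags from a
    # ColdFusion jvm.config. The path is the usual CF9 Windows default
    # (an assumption); adjust it for your own install.
    import re

    JVM_CONFIG = r"C:\ColdFusion9\runtime\bin\jvm.config"  # assumed default path

    with open(JVM_CONFIG) as fh:
        for line in fh:
            if line.startswith("java.args"):
                # Typical flags look like -Xms512m, -Xmx1024m, -XX:MaxPermSize=192m
                for flag in re.findall(r"-Xm[sx]\S+|-XX:MaxPermSize=\S+", line):
                    print(flag)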
We are sorry for the downtime, and if you have any questions please contact the management department.
Thank you,
=====
We are currently looking into a ColdFusion service issue that is causing the ColdFusion service to stop responding and require a restart. Our team are looking at increasing the general performance of the server and allowing ColdFusion to use more resources on the server.
We are sorry for the downtime and hope to have everything stable asap.
Management are looking into the possibility of moving the entire server to one of our custom Cloud services to greatly increase performance and reliability.
Thank you,
Affecting System - Nottingham Data Centre
Investigation into the network issues seen last night at our Data Centre found that the cause appeared to be a flow-based attack against a particular IP. We have since located the server in question and secured it.
Apologies for any inconvenience this may have caused.
In order to keep the infrastructure up to date and provide the best service for our customers, we are upgrading the switch connections in our racks on Sat 10th March at 03:30. You may experience a few seconds' loss of connectivity when we plug cf3.hostmediauk.com (Windows Plesk CF9 Server) into the new switch.
Any dedicated & colocation customers affected by this upgrade will be informed by email.
Thank you.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
===
Update 1: All services are running and memory increased. We are keeping an eye on this but everything has now been resolved. Thank you and sorry for the issues.
===
We have had reports that our CF9 cPanel Linux server has started to run slowly and time out. We are working on this now and hope to have everything running smoothly shortly. We are sorry for the issues and will resolve them asap.
Thank you,
Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA
Our team are fixing the Kloxo service issue which has knocked out all web services on our Kloxo CP based servers. (CF & Railo).
We will have everything back online asap.
Thank you,
-----------------
UPDATE 2: We were correct in seeing it was a JVM memory issue; a restart brought all sites back online. We are keeping an eye on the service and checking documentation to see if this is a known issue.
-----------------
UPDATE 1: It appears there is a JVM memory issue within ColdFusion which means sites are loading but only after a long period of time. We are looking into this now.
-----------------
We are currently investigating a number of small outages on the CF3 server; our team are working to find out why these are happening and will update all clients soon.
Thank you,
UPDATE 11: Everyone seems to be using the new servers well, but we have a small number of accounts we are working on with the clients to make sure everything is working as it was on the old box. All services are stable. Any questions or problems do let us know.
----
UPDATE 10: We are currently running a number of tests due to issues with making Plesk 10 work with the very latest ColdFusion hot fixes. We hope to have this final element resolved soon ready to move all accounts to the new server. We are sorry for the delay, any questions do contact our team. Thank you,
----
UPDATE 9: Our team are doing everything they can to get our new server set up and secure. We have had a couple of small delays, but our current server appears to be stable at present and we are working as fast as we can to get the new server online. ETA for coming online is 09:30AM Friday (23/Sept), with sites being restored first thing. We will continue to do everything we can to ensure the current setup stays online and stable. Thank you,
----
UPDATE 8: We have emailed all our Windows customers regarding a plan that will be going through today. This will involve moving all accounts to a new, larger server with the latest ColdFusion hot fixes pre-installed to safeguard against the issues we have been having. This upgrade is a huge investment for the company in the Windows hosting service we offer and will allow faster support (due to new staff being brought in), faster speeds (due to the increased port speeds) and faster performance of your websites & applications (due to the larger server specification).
This will be worked on today, we of course want to run as many tests as we can to make sure no issues appear on the servers. If you have any questions do let us know.
Thank you,
----
UPDATE 7: We are investigating more issues on our Windows servers relating to Sunday's attempt to install ColdFusion hot fix 9.0.1. Our team resolves the issues, but after a few hours ColdFusion stops coping and crashes again. Our entire team is working to resolve this asap.
I wish to take this time to thank all our customers for their support and patience. We understand this is not what you want from a provider, having its main service down, but we will resolve this.
Thank you,
----
UPDATE 6: It appears the CF service is still having a number of issues which we are working as hard as we can to resolve. We are sorry for these constant issues on this server relating to Plesk & ColdFusion. We will update everyone asap.
----
UPDATE 5: We are having a number of minor issues on the server still but all services are working fully. We are looking into getting these remaining issues resolved asap.
----
UPDATE 4: All services are back online and running; we are currently monitoring all services to ensure everything is stable.
Some clients may see an error for their ColdFusion DSN (Data Source Names); to resolve this please do the following:
Any questions do contact us.
----
UPDATE 3: We have been having some issues with Plesk connections to ColdFusion. If you are unable to access Plesk this is due to the connection issues. We are sorry for this long delay and downtime overnight and hope to have this resolved soon. We are contacting teams from Adobe & Plesk for extra support in this case.
----
UPDATE 2: Our team have ColdFusion reinstalled and working, we are currently working on the connections between Plesk and ColdFusion. We hope to have this fully resolved soon. Thank you,
----
UPDATE 1: After applying the update yesterday for ColdFusion 9.0.1, a number of the ColdFusion files were corrupted. The team are now reinstalling ColdFusion on the server and applying all settings. The team are still investigating the issues and hope to have everything resolved asap.
Thank you again for your patience.
----
We have been having a number of issues with our ColdFusion service today which our team is working hard to resolve fully. We will post updates as soon as we have more details on the issue.
We are very sorry for the inconvenience and downtime on the CF services.
Thank you for your patience.
Affecting System - CF1 / CF2 / Railo Servers
We will be running a number of hardware updates to our US servers for ColdFusion & Railo, the update will include a number of benefits such as faster network connections (faster ports being opened up), larger backup drives, increased disk space due to new drives being added.
We do not expect much downtime, and allow for a maximum of around 20 minutes.
Thank you,
Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA
Railo 3.2.3.000 has been released and we are running the update today, we expect minimal downtime and only a Railo restart required.
If you have any questions feel free to contact our team.
Thank you,
We are applying an update to our shared hosting server SERVER42 at 02:00 on the 18th of September. ColdFusion sites may be unavailable for a short period during this time.
We are sorry for any downtime that may occur.
Thank you,
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Since the 16th of July our team has been working to fully restore all accounts on our media1 servers after a hack that used insecure scripts within accounts to gain access to commands on the server. The server-level security breach has been fixed, and the accounts where these scripting errors allowed files to be edited or settings to be changed are now being repaired.
We are sorry for any downtime or degraded connectivity that occurs, but our team is monitoring the situation and we hope to have more updates soon.
Please make sure to use the support tickets to contact our support team.
Thank you for your time & patience.
We are currently having some issues with our Media 2 server in the US. Our data centre, which we contract to maintain this server, is looking into the problem now. It appears the semaphore arrays on the server were exhausted, which is why the server keeps going offline. We are putting systems in place to handle this.
Very sorry for the downtime; we hope to have this resolved shortly.
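For the curious, semaphore exhaustion of this kind can be spotted from the server itself. The Python sketch below is an illustration rather than our exact tooling (Linux-only; relies on /proc and the standard ipcs utility): it compares the number of System V semaphore arrays in use against the kernel's SEMMNI limit.

    # Minimal sketch: compare semaphore arrays in use against the kernel
    # limit. Linux-only; assumes /proc and `ipcs` are available.
    import subprocess

    # /proc/sys/kernel/sem holds four values: SEMMSL SEMMNS SEMOPM SEMMNI
    semmsl, semmns, semopm, semmni = map(int, open("/proc/sys/kernel/sem").read().split())

    # `ipcs -s` lists one semaphore array per line after a short header;
    # data lines start with the hex key column (e.g. 0x00000000).
    out = subprocess.run(["ipcs", "-s"], capture_output=True, text=True).stdout
    in_use = sum(1 for line in out.splitlines() if line.startswith("0x"))

    print(f"semaphore arrays in use: {in_use} / limit (SEMMNI): {semmni}")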
We are currently running some tests and looking into an issue on our ColdFusion Kloxo server in the US. We are sorry for any downtime and hope to have the server back up soon.
Thank you,
Host Media UK Tech Team
:: UPDATE
We have restored most of the server's systems and are working on the rest of the services. Sorry for the downtime that has occurred.
Affecting Server - [S06] Linux cPanel ~ CF 10 Server ~ Washington DC USA
We are currently seeing a high usage on our CF2 server which we are investigating to resolve asap. The issue appears to be server wide, we hope to have an update soon and the server back online.
We are very sorry for the downtime.
UPDATE 11:47AM :: The Data Centre team is helping with the issue; thanks to them this should be resolved quickly.
Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA
** Update
The issue has now been fixed; it appears the server was not taking cached memory into account, so it reported using more memory than it really was.
Everything appears to be running fine now but our team will be keeping an eye on it to make sure nothing else comes of it.
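To illustrate the point about cached memory, the Python sketch below reads /proc/meminfo on a Linux box and shows the difference between raw MemFree and free memory once reclaimable cache is counted in (newer kernels also expose this directly as MemAvailable).

    # Minimal sketch: on Linux, memory used for cache is reclaimable, so
    # "free" memory should be read as MemFree + Buffers + Cached rather
    # than MemFree alone.
    def meminfo_kb():
        info = {}
        with open("/proc/meminfo") as fh:
            for line in fh:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])  # values are in kB
        return info

    m = meminfo_kb()
    truly_free = m["MemFree"] + m.get("Buffers", 0) + m.get("Cached", 0)
    print(f"MemFree only:         {m['MemFree']} kB")
    print(f"Free including cache: {truly_free} kB")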
-------
We are currently seeing a large increase in memory and CPU usage by Apache on the Railo server, even after our RAM upgrade last night. Our UK team are investigating this and our US server administrators will be checking the logs to see why. We believe an account is using an unsafe script which is creating the high server load.
We will post an update here asap!
Thank you.
Affecting Server - [S08] Linux LxAdmin ~ Railo Server ~ Kansas City 1 USA
Update 2
The Railo service is now back up and running while our team investigates the reasons for the downtime. We will update this post with our results.
Update 1
We are currently updating our Railo service, which we are sorry to say is taking a bit longer than we expected due to issues with the update. We are working on getting the service back to normal ASAP! The update will bring the Railo service to the latest version with all security fixes and new features.
We will update all our customers once tests are finished.
Sorry for any downtime on ColdFusion / .cfm / .cfc files.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Update 3
All services have been tested over a period of 12 hours and appear to be running fine now. We are investigating the issue with the RAID controller.
Status: Resolved
----
Update 2
Our team has found the issue: it was an unexpected RAID hardware fault, which is being replaced/fixed now, and our server will be up and running within the hour.
We are sorry for this issue, but it was a RAID hardware fault.
Thank you and we will update you soon.
----
Update 1
We are currently having issues with our NY1 server which our US and UK team are working on.
This issue started after a restart following a system clean to help performance on the server and speed up the mail/POP3 systems. The server appears stalled on the main drives for an unknown reason.
We will update all our customers on this server asap!
Thank you
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Our NY1 server has been going offline and online overnight. We have been able to restore partial access to the server for some services but are still working on the port 80 issue.
We are sorry for this issue, but we have our team looking into it and have our data centre investigating as well.
We will update our reports here.
*** ISSUE RESOLVED ***
Affecting Server - [S02] Linux cPanel/WHM ~ US1
We have been seeing our servers hit with a high amount of CPU usage, and we are working on this issue with our full team.
We hope to have news soon on why this is and make sure it does not happen again.
Sorry for any issues on your websites, as reboots may be required.
:: UPDATE 17/Sept/2009 ::
We have found the issues making our servers load higher than normal (currently fixed but being monitored). We are in the process of moving up the deadline for our new systems, which will offer customers new locations as well as newer servers. This mainly applies to non-FFMPEG/PHP Ming/Reseller customers. If you would like to know more please open a support ticket to sales with your questions.
:: UPDATE 22/Sept/2009 ::
As many of the US media server customers will have seen, all sites were offline while the server's main systems, including WHM/cPanel, were running fine. We are investigating this and awaiting tests from our data centre.
We would like to take a moment to apologise, from all the team, for any emails missed in this time and for the downtime. We will be offering customers the chance to move to our new servers in a wide range of locations. We hope our final tests today will allow customers to have new accounts set up around the world.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Our US media server had issues sending and receiving mail on port 25. Our US ISP changed their security settings without any warning, forcing us to use port 26 instead for all mail.
The mail servers have now been restarted and all tests show mail working fully again. If you have any problems with your mail please contact us.
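If you want to confirm the new port from your side, the Python sketch below opens an SMTP connection on port 26 and issues a NOOP. The hostname is a placeholder; use your own mail server's hostname.

    # Minimal sketch: verify that the mail server accepts SMTP
    # connections on port 26. The hostname is a placeholder.
    import smtplib

    HOST = "mail.example.com"  # placeholder; your mail server hostname

    try:
        with smtplib.SMTP(HOST, 26, timeout=15) as smtp:
            code, banner = smtp.noop()
            print(f"connected on port 26, NOOP returned {code}")
    except OSError as exc:
        print(f"could not reach {HOST}:26 ({exc})")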
We are currently running fixes for some issues we have found on the email server for our UK servers. The team there are working on this now.
Our UK servers may be running a bit slower than their normal fast speeds, and some minor downtime may occur due to updates to our systems.
We are hoping these updates and restarts will improve our overall systems.
Sorry for any issues caused.
The server is now back up and running.
There is a planned restart of the Windows CF8 server for updates and the installation of FFmpeg.
This will cause a short period of downtime.
Sorry for any inconvenience.
Affecting Other - Support and Sales Service
Over this weekend we will be performing some upgrades to our support and sales systems, including our main mail for Host Media UK. All enquiries will be answered as soon as this system upgrade has gone through, and we are sorry for any inconvenience. All our server administrators will be on hand monitoring the servers as normal to make sure no downtime occurs.
Thank you for your support.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Due to some network issues found by our server administrators, we are working on upgrading our server connection to make sure no major downtime occurs and to maintain our 99% uptime.
Some downtime may occur but we hope to have this issue sorted asap.
If you have any questions please do contact us either through Host Media UK or AeonCube Networks
Best regards
Host Media UK Server Team
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Overnight we had to work on the cPanel/WHM Linux server for some urgent maintenance operations.
This is now complete and we are sorry for the downtime.
Total downtime: 150 minutes
Reason for maintenance
With the maintenance performed we have ensured future server speeds will stay fast and reliable.
If you have any questions regarding the server and our updates please contact us.
We are working on our ColdFusion / ASP services to run checks on accounts and the setup of the server so it runs faster. Some issues may occur on new setups, and some features may be offline for a short time (pre-installed scripts etc.).
We are also looking into upgrading the server to a Plesk-based server to allow faster and better systems.
Affecting Server - [S02] Linux cPanel/WHM ~ US1
Shared Hosting Server Upgrades
Our shared servers will be undergoing upgrades, but websites should not go down while these upgrades are in progress. As our servers are also being moved as part of this upgrade, we will be publishing new shared IP addresses. If your site uses A records for its domain names please update them asap. All nameservers such as dns1.hostmediauk.com / dns2.hostmediauk.com will not be affected.
Server upgrades includes:
New features coming soon
We will keep everyone updated on the progress of our upgrades.
Best regards
Host Media UK Management