
All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Status key: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
Aug 31, 2025

No incidents reported today.

Aug 30, 2025

No incidents reported.

Aug 29, 2025

No incidents reported.

Aug 28, 2025

No incidents reported.

Aug 27, 2025
Resolved - On August 27, 2025, between 20:35 and 21:17 UTC, Copilot, Web, and REST API traffic experienced degraded performance. Copilot saw an average of 36% of requests fail, with a peak failure rate of 77%. Approximately 2% of all non-Copilot Web and REST API requests failed.

This incident occurred after a stale schema cache was used following a database migration. This led to a large number of failed queries. At 21:15 UTC, we applied a fix and by 21:17 UTC, all services had fully recovered.

We have implemented a block for this failure mode as an immediate solution and are actively working to add safeguards to prevent similar issues from occurring in the future.
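One common safeguard against this class of failure is to version the schema cache against applied migrations and refuse to serve lookups from a cache generation older than the database's. The sketch below is purely illustrative — all class and field names are hypothetical, not GitHub's implementation:

```python
# Illustrative versioned schema cache. All names here are hypothetical;
# this is not GitHub's actual implementation.

class StaleSchemaError(Exception):
    """Raised when the cached schema predates the latest migration."""


class SchemaCache:
    def __init__(self, migration_version: int):
        self.migration_version = migration_version
        self.columns: dict[str, list[str]] = {}  # table -> column names

    def get_columns(self, table: str, current_migration: int) -> list[str]:
        # Reject lookups when the cache predates the latest applied
        # migration, forcing a reload rather than issuing queries built
        # from a stale schema.
        if self.migration_version < current_migration:
            raise StaleSchemaError(
                f"cache at migration {self.migration_version}, "
                f"database at migration {current_migration}"
            )
        return self.columns.get(table, [])


cache = SchemaCache(migration_version=41)
cache.columns["users"] = ["id", "login"]
try:
    cache.get_columns("users", current_migration=42)
except StaleSchemaError:
    print("stale schema cache detected; reloading before serving queries")
```

The key property is that a migration bumps the version atomically, so any cache built before the migration fails loudly instead of producing malformed queries.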

Aug 27, 21:27 UTC
Update - API Requests and Issues are operating normally.
Aug 27, 21:27 UTC
Update - We've discovered the cause of the service disruption and applied a mitigation.
Aug 27, 21:25 UTC
Update - We are continuing to investigate this issue.
Aug 27, 21:13 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Aug 27, 20:58 UTC
Update - The team is aware of the root cause of this issue and is working to mitigate the issue quickly.
Aug 27, 20:55 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Aug 27, 20:50 UTC
Update - API Requests is experiencing degraded availability. We are continuing to investigate.
Aug 27, 20:48 UTC
Investigating - We are currently investigating this issue.
Aug 27, 20:41 UTC
Aug 26, 2025

No incidents reported.

Aug 25, 2025

No incidents reported.

Aug 24, 2025

No incidents reported.

Aug 23, 2025

No incidents reported.

Aug 22, 2025

No incidents reported.

Aug 21, 2025
Resolved - On August 21, 2025, from approximately 15:37 UTC to 18:10 UTC, customers experienced increased delays and failures when starting jobs on GitHub Actions using standard hosted runners. This was caused by connectivity issues in our East US region, which prevented runners from retrieving jobs and sending progress updates. As a result, capacity was significantly reduced, especially for busier configurations, leading to queuing and service interruptions. Approximately 8.05% of jobs on public standard Ubuntu24 runners and 3.4% of jobs on private standard Ubuntu24 runners did not start as expected.

By 18:10 UTC, we had mitigated the issue by provisioning additional resources in the affected region and burning down the backlog of queued runner assignments. By the end of that day, we deployed changes to improve runner connectivity resilience and graceful degradation in similar situations. We are also taking further steps to improve system resiliency by enhancing observability of network connection health with runners and improving load distribution and failover handling to help prevent similar issues in the future.

Aug 21, 18:13 UTC
Update - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.
Aug 21, 17:58 UTC
Update - The team continues to investigate issues with some Actions jobs on Hosted Runners being queued for a long time and a percentage of jobs failing. We are increasing runner capacity and will continue providing updates on the progress towards mitigation.
Aug 21, 17:21 UTC
Update - The team continues to investigate issues with some Actions jobs on Hosted Runners being queued for a long time and a percentage of jobs failing. We will continue providing updates on the progress towards mitigation.
Aug 21, 16:43 UTC
Update - We are investigating reports of slow queue times for Hosted Runners, leading to high wait times.
Aug 21, 16:05 UTC
Investigating - We are investigating reports of degraded performance for Actions
Aug 21, 15:54 UTC
Resolved - On August 21, 2025, between 06:15 and 06:25 UTC, Git and Web operations were degraded and saw intermittent errors. On average, the error rate was 1% for API and Web requests. This was caused by automated database infrastructure maintenance reducing capacity below our tolerated threshold.

The incident was resolved when the impacted infrastructure self-healed and returned to normal operating capacity.

We are adding guardrails to reduce the impact of this type of maintenance in the future.
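A typical guardrail of this kind checks remaining healthy capacity before automated maintenance is allowed to take replicas offline. The sketch below assumes a simple fractional threshold; the function name and the 0.75 value are illustrative assumptions, not GitHub's actual policy:

```python
# Hypothetical maintenance guardrail: refuse automated maintenance that
# would drop healthy database capacity below a tolerated threshold.
# The 0.75 threshold is an assumed value for illustration only.

MIN_HEALTHY_FRACTION = 0.75


def can_start_maintenance(total_replicas: int,
                          healthy_replicas: int,
                          taking_offline: int) -> bool:
    # Capacity that would remain if the maintenance proceeds.
    remaining = healthy_replicas - taking_offline
    return remaining / total_replicas >= MIN_HEALTHY_FRACTION


assert can_start_maintenance(8, 8, 2)       # 6/8 = 0.75 remains: allowed
assert not can_start_maintenance(8, 7, 2)   # 5/8 = 0.625 remains: blocked
```

The point of the second case is that the guardrail accounts for replicas that are already unhealthy, so maintenance cannot stack on top of an existing capacity loss.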

Aug 21, 06:58 UTC
Update - Git Operations is operating normally.
Aug 21, 06:58 UTC
Update - Issues is operating normally.
Aug 21, 06:58 UTC
Update - The errors in our database infrastructure were related to some maintenance events that had more impact than expected. We will provide more details and follow ups when we post a public summary for this incident in the coming days. All impact to customers is resolved.
Aug 21, 06:58 UTC
Update - We saw a brief spike in failures related to some of our database infrastructure. Everything has recovered but we are continuing to investigate to ensure we don't see any reoccurrence.
Aug 21, 06:39 UTC
Update - Approximately 1% of API and web requests are seeing intermittent errors. Some customers may see some push errors. We are currently investigating.
Aug 21, 06:30 UTC
Investigating - We are investigating reports of degraded performance for Git Operations and Issues
Aug 21, 06:25 UTC
Aug 20, 2025
Resolved - Between 15:49 and 16:37 UTC on 20 Aug 2025, creating a new GitHub account via the web signup page consistently returned server errors, and users were unable to complete signup during this 48-minute window. We detected the issue at 16:04 UTC and restored normal signup functionality by 16:37 UTC. A recent change to signup flow logic caused all attempts to error. The change was rolled back to restore service. This exposed a gap in our test coverage that we are fixing.
Aug 20, 16:37 UTC
Update - We have verified the fix to the sign-up flow and are working to ensure we don't introduce an issue like this in the future.
Aug 20, 16:37 UTC
Update - Customers may experience issues when signing up for new GitHub accounts. We are actively working on a mitigation and will post an update within 30 minutes.
Aug 20, 16:24 UTC
Investigating - We are currently investigating this issue.
Aug 20, 16:14 UTC
Aug 19, 2025
Resolved - On August 19, 2025, between 13:35 UTC and 14:33 UTC, GitHub search was in a degraded state. When searching for pull requests, issues, and workflow runs, users would have seen some slow, empty or incomplete results. In some cases, pull requests failed to load.

The incident was triggered by intermittent connectivity issues between our load balancers and search hosts. While retry logic initially masked these problems, retry queues eventually overwhelmed the load balancers, causing failure. The incident was mitigated at 14:33 UTC by throttling our search index pipeline.

Our automated alerting and internal retries reduced the impact of this event significantly. As a result of this incident we believe we have identified a faster way to mitigate it in the future. We are also working on multiple solutions to resolve the underlying connectivity issues.
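A standard defense against retries amplifying an outage in this way is a retry budget: clients may retry only while retries remain below a fixed fraction of overall traffic, so a struggling backend is never hit with multiplied load. A minimal sketch under that assumption (class and parameter names are hypothetical, not GitHub's implementation):

```python
# Illustrative retry budget. Unchecked retries multiply load on a failing
# backend; capping retries at a fraction of observed traffic prevents the
# retry queue itself from overwhelming the load balancers.
# All names and the 10% ratio are illustrative assumptions.

class RetryBudget:
    def __init__(self, ratio: float = 0.1):
        self.ratio = ratio      # max extra load allowed from retries
        self.requests = 0
        self.retries = 0

    def record_request(self) -> None:
        self.requests += 1

    def try_acquire_retry(self) -> bool:
        # Deny the retry if it would push retries above ratio * requests.
        if self.retries + 1 > self.ratio * self.requests:
            return False
        self.retries += 1
        return True


budget = RetryBudget(ratio=0.1)
for _ in range(100):
    budget.record_request()
print(budget.try_acquire_retry())  # first retry fits within the budget
```

During normal operation the budget is never exhausted; during a widespread failure it converts a potential retry storm into a bounded trickle, which is the same load-shedding effect as throttling the indexing pipeline achieved here.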

Aug 19, 14:46 UTC
Update - Actions is operating normally.
Aug 19, 14:46 UTC
Update - Issues is operating normally.
Aug 19, 14:46 UTC
Update - We mitigated the slowness by throttling some search indexing, and will work through the resulting indexing backlog so it does not impact latency.
Aug 19, 14:45 UTC
Update - We are seeing slightly elevated latency on some Issues endpoints and searches for workflow runs in Actions may not return quickly.
Aug 19, 14:11 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Aug 19, 13:45 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Aug 19, 13:44 UTC
Update - Users are experiencing timeouts when searching.
Aug 19, 13:39 UTC
Investigating - We are currently investigating this issue.
Aug 19, 13:39 UTC
Aug 18, 2025

No incidents reported.

Aug 17, 2025

No incidents reported.