
GitHub Status
Real-time updates of GitHub issues and outages
GitHub status is Operational
GitHub Git Operations
GitHub Webhooks
GitHub API Requests
GitHub Issues
GitHub Pull Requests
GitHub Actions
GitHub Packages
GitHub Pages
GitHub Codespaces
GitHub Copilot
Active Incidents
No active incidents
Recently Resolved Incidents
We are investigating reports of degraded performance for Webhooks
Resolved: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Investigating: Webhooks is operating normally.
Investigating: We have deployed a fix and are observing a full recovery. The affected endpoint was the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. We will continue monitoring to confirm stability.
Investigating: We are preparing a new mitigation for the issue affecting the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.
Investigating: The previous mitigation did not resolve the issue. We are investigating further. The affected endpoint is the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.
Investigating: We have deployed a fix for the issue causing some users to experience intermittent failures when accessing the Webhooks API and configuration pages. We are monitoring to confirm full recovery.
Investigating: We continue working on mitigations to restore service.
Investigating: We continue working on mitigations to restore service.
Investigating: We continue working on mitigations to restore service.
Investigating: We continue working on mitigations to restore full service.
Investigating: Our engineers have identified the root cause and are actively implementing mitigations to restore full service.
Investigating: This problem is impacting less than 1% of UI and webhook API calls.
Investigating: We are investigating an issue affecting a subset of customers experiencing errors when viewing webhook delivery histories and retrying webhook deliveries. Both the UI and the webhook API are impacted. Engineers have identified the cause and are actively working on a mitigation.
Investigating: We are investigating reports of degraded performance for Webhooks
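For integrators hit by an incident like this one, a client-side retry is a reasonable stopgap while the service recovers. Below is a minimal sketch of calling the webhook deliveries endpoint referenced above, retrying transient 5xx responses with exponential backoff; the token, owner, repo, hook ID, and retry parameters are all illustrative placeholders, not values from this incident.

```python
import time
import requests

# Illustrative placeholders only; substitute real values.
TOKEN = "ghp_example_token"
OWNER, REPO, HOOK_ID = "octo-org", "octo-repo", 12345

URL = f"https://api.github.com/repos/{OWNER}/{REPO}/hooks/{HOOK_ID}/deliveries"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {TOKEN}",
    "X-GitHub-Api-Version": "2022-11-28",
}

def list_deliveries(retries: int = 3, backoff: float = 1.0) -> list:
    """List webhook deliveries, retrying transient 5xx failures with backoff."""
    for attempt in range(retries):
        resp = requests.get(URL, headers=HEADERS, timeout=10)
        if resp.status_code < 500:
            resp.raise_for_status()  # surface 4xx errors; don't retry them
            return resp.json()
        time.sleep(backoff * 2 ** attempt)  # exponential backoff before retrying
    resp.raise_for_status()  # all retries exhausted; raise the last 5xx
    return []

for delivery in list_deliveries():
    print(delivery["id"], delivery["status"], delivery["delivered_at"])
```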
We are investigating reports of degraded performance for Actions
Resolved: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Investigating: We are close to full recovery. Actions and dependent services should be functioning normally now.
Investigating: Actions is experiencing degraded performance. We are continuing to investigate.
Investigating: Actions and dependent services, including Pages, are recovering.
Investigating: We applied a mitigation and we should see a recovery soon.
Investigating: Actions is experiencing degraded availability. We are continuing to investigate.
Investigating: We are investigating reports of degraded performance for Actions
We are investigating reports of degraded performance for Actions
Resolved: On Mar 5, 2026, between 16:24 UTC and 19:30 UTC, Actions was degraded. During this time, 95% of workflow runs failed to start within 5 minutes, with an average delay of 30 minutes, and 10% of workflow runs failed with an infrastructure error. This was due to Redis infrastructure updates that were being rolled out to production to improve our resiliency. These changes introduced an incorrect configuration change into our Redis load balancer, causing internal traffic to be routed to an incorrect host and leading to two incidents.
We mitigated this incident by correcting the misconfigured load balancer. Actions jobs were running successfully starting at 17:24 UTC; the remaining time before we closed the incident was spent burning through the queue of jobs.
We immediately rolled back the updates that were a contributing factor and have frozen all changes in this area until we complete the follow-up work from this incident. We are working to improve our automation to ensure incorrect configuration changes cannot propagate through our infrastructure. We are also working on improved alerting to catch misconfigured load balancers before they cause an incident. Additionally, we are updating the Redis client configuration in Actions to improve resiliency to brief cache interruptions.
Investigating: Webhooks is operating normally.
Investigating: Actions is operating normally.
Investigating: Actions is now fully recovered.
Investigating: The queue of requested Actions jobs continues to make progress. Job delays are now approximately 6 minutes and continuing to decrease.
Investigating: We are back to queueing Actions workflow runs at nominal rates and are monitoring the clearing of runs that were queued during the incident.
Investigating: We have applied mitigations for connection failures across backend resources and we are observing a recovery in queueing Actions workflow runs.
Investigating: We are observing delays in queuing Actions workflow runs. We’re still investigating the causes of these delays.
Investigating: Webhooks is experiencing degraded availability. We are continuing to investigate.
Investigating: Actions is experiencing degraded availability. We are continuing to investigate.
Investigating: We are investigating reports of degraded performance for Actions
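The last remediation item above, making the Actions Redis client resilient to brief cache interruptions, maps to a common client-side pattern: retry transient connection and timeout errors with exponential backoff so a short interruption surfaces as added latency rather than a hard failure. A minimal sketch with the redis-py client follows; this is an assumption for illustration, since the post does not say which client library GitHub uses, and the host and tuning values are placeholders.

```python
from redis import Redis
from redis.backoff import ExponentialBackoff
from redis.exceptions import ConnectionError, TimeoutError
from redis.retry import Retry

# Retry transient failures a few times with exponential backoff so a brief
# cache interruption costs extra latency instead of returning an error.
retry = Retry(ExponentialBackoff(cap=1.0, base=0.05), retries=3)

client = Redis(
    host="redis.internal.example",   # placeholder host
    socket_timeout=0.5,              # fail fast on hung connections
    socket_connect_timeout=0.5,
    retry=retry,
    retry_on_error=[ConnectionError, TimeoutError],
)

client.set("demo:key", "value")
print(client.get("demo:key"))
```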
We are investigating reports of degraded performance for Copilot
Resolved: This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Investigating: The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
Investigating: We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Investigating: We are investigating reports of degraded performance for Copilot
We are investigating reports of degraded performance for Copilot
Resolved: On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access.
We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional and no platform regression had occurred.
Investigating: We believe that all expected users still have access to Claude Opus 4.6, and we have confirmed that no users have lost access.
Investigating: We are investigating reports of degraded performance for Copilot
We are investigating reports of degraded availability for Actions, Copilot and Issues
Resolved: On March 3, 2026, between 18:46 UTC and 20:09 UTC, GitHub experienced a period of degraded availability impacting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other dependent services. At the peak of the incident, GitHub.com request failures reached approximately 40%. During the same period, approximately 43% of GitHub API requests failed. Git operations over HTTP had an error rate of approximately 6%, while SSH was not impacted. GitHub Copilot requests had an error rate of approximately 21%. GitHub Actions experienced less than 1% impact.
This incident shared the same underlying cause as an incident in early February, when we saw a large volume of writes to the user settings caching mechanism. While deploying a change to reduce the burden of these writes, a bug caused every user’s cache to expire, be recalculated, and be rewritten. The increased load caused replication delays that cascaded to all affected services. We mitigated the issue by immediately rolling back the faulty deployment.
We understand these incidents disrupted the workflows of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, we acknowledge we have more work to do. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps:
- We have added a killswitch and improved monitoring to the caching mechanism to ensure we are notified before there is user impact and can respond swiftly.
- We are moving the cache mechanism to a dedicated host, ensuring that any future issues will affect only the services that rely on it.
Investigating: We're seeing recovery across all services. We're continuing to monitor for full recovery.
Investigating: Actions is operating normally.
Investigating: Git Operations is operating normally.
Investigating: Git Operations is experiencing degraded availability. We are continuing to investigate.
Investigating: We are seeing recovery across multiple services. Impact is mostly isolated to Git operations at this point; we continue to investigate.
Investigating: Copilot is operating normally.
Investigating: Pull Requests is operating normally.
Investigating: Pull Requests is experiencing degraded performance. We are continuing to investigate.
Investigating: Issues is operating normally.
Investigating: Webhooks is operating normally.
Investigating: Codespaces is operating normally.
Investigating: Webhooks is experiencing degraded performance. We are continuing to investigate.
Investigating: Issues is experiencing degraded performance. We are continuing to investigate.
Investigating: We've identified the issue and have applied a mitigation. We're seeing recovery of services. We continue to monitor for full recovery.
Investigating: API Requests is operating normally.
Investigating: API Requests is experiencing degraded performance. We are continuing to investigate.
Investigating: Codespaces is experiencing degraded performance. We are continuing to investigate.
Investigating: Pull Requests is experiencing degraded availability. We are continuing to investigate.
Investigating: Webhooks is experiencing degraded availability. We are continuing to investigate.
Investigating: We're seeing some service degradation across GitHub services. We're currently investigating impact.
Investigating: Webhooks is experiencing degraded performance. We are continuing to investigate.
Investigating: Pull Requests is experiencing degraded performance. We are continuing to investigate.
Investigating: API Requests is experiencing degraded availability. We are continuing to investigate.
Investigating: We are investigating reports of degraded availability for Actions, Copilot and Issues
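The kill switch named in this incident's remediation steps is a standard pattern: gate the risky write path behind a dynamically evaluated flag so an operator can turn it off during an incident without a deploy. A minimal sketch of the idea follows; all names are hypothetical, and a real implementation would query a feature-flag service rather than a module-level dict.

```python
# All names here are hypothetical; a real implementation would query a
# feature-flag service or dynamic config store rather than a module dict.
FLAGS = {"user_settings_cache_writes": True}  # operators flip to False during an incident

def flag_enabled(name: str) -> bool:
    """Stand-in for a dynamic feature-flag lookup."""
    return FLAGS.get(name, True)

def write_user_settings_cache(cache, user_id: int, settings: dict) -> None:
    if not flag_enabled("user_settings_cache_writes"):
        return  # kill switch engaged: skip the write, serve from the source of truth
    cache.set(f"user:{user_id}:settings", settings)
```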
We are investigating reports of impacted performance for some GitHub services.
Resolved: Between March 2 at 21:42 UTC and March 3 at 05:54 UTC, project board updates, including adding new issues, PRs, and draft items to boards, were delayed by 30 minutes to over 2 hours as a large backlog of messages accumulated in the Projects data denormalization pipeline.
The incident was caused by an anomalously large event that required longer processing time than expected. Processing this message exceeded the Kafka consumer heartbeat timeout, triggering repeated consumer group rebalances. As a result, the consumer group was unable to make forward progress, creating head-of-line blocking that delayed processing of subsequent project board updates.
We mitigated the issue by deploying a targeted fix that safely bypassed the offending message and allowed normal message consumption to resume. Consumer group stability recovered at 04:10 UTC, after which the backlog began draining. All queued messages were fully processed by 05:53 UTC, returning project board updates to normal processing latency.
We have identified several follow-up improvements to reduce the likelihood and impact of similar incidents in the future, including improved monitoring and alerting, as well as introducing limits for unusually large project events.
Investigating: This incident has been resolved. Project board updates are now processing in near-real-time.
Investigating: The backlog of delayed updates is expected to fully clear within approximately 1 hour, after which project board updates will return to near-real-time.
Investigating: The fix has been deployed and processing speeds have returned to normal. There is a backlog of delayed updates that will continue to be worked through — we're estimating how long that will take and will provide an update in the next 60 minutes.
Investigating: The fix is still building and is expected to deploy within 60 minutes. The current delay for GitHub Projects updates has increased to up to 5 hours.
Investigating: We're deploying a fix targeting the increased delay in GitHub Projects updates. The rollout should complete within 60 minutes. If successful, the current delay of up to 4 hours should begin to decrease.
Investigating: The delay for project board updates has increased to up to 3 hours. We've identified a potential cause and are working on remediation.
Investigating: Project board updates — including adding issues, pull requests, and changing fields such as "Status" — are currently delayed by 1–2 hours. Normal behavior is near-real-time. We're actively investigating the root cause.
Investigating: The impact extends beyond adding issues to project boards. Adding pull requests and updating fields such as "Status" may also be affected. We're continuing to investigate the root cause.
Investigating: Newly added issues are taking 30–60 minutes to appear on project boards, compared to the normal near-real-time behavior. We're investigating the root cause and possible mitigations.
Investigating: Newly added issues can take up to 30 minutes to appear on project boards. We're investigating the cause of this delay.
Investigating: Issues is experiencing degraded performance. We are continuing to investigate.
Investigating: We are investigating reports of impacted performance for some GitHub services.
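The failure mode here is worth spelling out: a consumer that takes too long on one message misses its poll deadline, gets ejected from the group, the partition is reassigned, and the same message is redelivered, so the whole partition makes no progress. A minimal sketch of the defensive pattern, skipping messages too large to process within the deadline, is below; it assumes the kafka-python client, and the topic, group, and size limit are hypothetical (GitHub's actual fix bypassed one specific message).

```python
from kafka import KafkaConsumer  # assumes the kafka-python client

MAX_BYTES = 1_000_000  # hypothetical limit; tune to what fits within the poll deadline

def process(payload: bytes) -> None:
    """Stand-in for the real denormalization work."""

consumer = KafkaConsumer(
    "project-board-updates",           # hypothetical topic
    bootstrap_servers=["kafka:9092"],
    group_id="projects-denormalizer",  # hypothetical consumer group
    enable_auto_commit=False,
    max_poll_interval_ms=300_000,      # the rebalance deadline a slow message can exceed
)

for record in consumer:
    if len(record.value) > MAX_BYTES:
        consumer.commit()  # skip the oversized message instead of stalling the partition
        continue
    process(record.value)
    consumer.commit()
```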
We are investigating reports of degraded performance for Pull Requests
Resolved: On March 2, 2026, between 07:10 UTC and 22:04 UTC, the pull requests service was degraded. Users navigating between tabs on the pull requests dashboard were met with 404 errors or blank pages.
This was due to a configuration change deployed on February 27 at 23:03 UTC. We mitigated the incident by reverting the change.
We’re working to improve monitoring for the page to automatically detect and alert us to routing failures.
Investigating: The issue on https://github.com/pulls is now fully resolved. All tabs are working again.
Investigating: We're deploying a fix for pull request filtering. Full rollout across all regions is expected within 60 minutes.
Investigating: We are experiencing issues with the Pull Requests dashboard that prevent users from filtering their pull requests. We have identified a mitigation and are deploying a fix. We'll post another update by 21:00 UTC.
Investigating: We are seeing a degraded experience when attempting to filter the /pulls dashboard. We are working on a mitigation.
Investigating: We are investigating reports of degraded performance for Pull Requests
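The routing-failure monitoring promised in the follow-up can start as a simple synthetic probe that requests each dashboard tab and alerts on error responses. A minimal sketch follows; the tab URLs are illustrative, and a real probe would run authenticated (an anonymous request to github.com/pulls redirects to the login page) and feed a proper alerting integration.

```python
import requests

# Tab routes on the pull requests dashboard; treat this list as illustrative.
TABS = [
    "https://github.com/pulls",
    "https://github.com/pulls/assigned",
    "https://github.com/pulls/mentioned",
    "https://github.com/pulls/review-requested",
]

def alert(message: str) -> None:
    """Stand-in for a real paging or alerting integration."""
    print("ALERT:", message)

def probe() -> list[str]:
    """Return tab URLs that respond with a 404 or other error status."""
    failures = []
    for url in TABS:
        resp = requests.get(url, timeout=10)
        if resp.status_code >= 400:
            failures.append(f"{url} -> HTTP {resp.status_code}")
    return failures

if failures := probe():
    alert("\n".join(failures))
```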
Visit www.githubstatus.com for more information